Paper Title
Retouchdown: Adding Touchdown to StreetLearn as a Shareable Resource for Language Grounding Tasks in Street View
Paper Authors
Paper Abstract
The Touchdown dataset (Chen et al., 2019) provides instructions by human annotators for navigation through New York City streets and for resolving spatial descriptions at a given location. To enable the wider research community to work effectively with the Touchdown tasks, we are publicly releasing the 29k raw Street View panoramas needed for Touchdown. We follow the process used for the StreetLearn data release (Mirowski et al., 2019) to check panoramas for personally identifiable information and blur them as necessary. These have been added to the StreetLearn dataset and can be obtained via the same process as used previously for StreetLearn. We also provide a reference implementation for both of the Touchdown tasks: vision and language navigation (VLN) and spatial description resolution (SDR). We compare our model results to those given in Chen et al. (2019) and show that the panoramas we have added to StreetLearn fully support both Touchdown tasks and can be used effectively for further research and comparison.