Paper Title

Low-viewpoint forest depth dataset for sparse rover swarms

Paper Authors

Chaoyue Niu, Danesh Tarapore, Klaus-Peter Zauner

Paper Abstract

Rapid progress in embedded computing hardware increasingly enables on-board image processing on small robots. This development opens the path to replacing costly sensors with sophisticated computer vision techniques. A case in point is the prediction of scene depth information from a monocular camera for autonomous navigation. Motivated by the aim to develop a robot swarm suitable for sensing, monitoring, and search applications in forests, we have collected a set of RGB images and corresponding depth maps. Over 100k images were recorded with a custom rig from the perspective of a small ground rover moving through a forest. Taken under different weather and lighting conditions, the images include scenes with grass, bushes, standing and fallen trees, tree branches, leaves, and dirt. In addition, GPS, IMU, and wheel encoder data were recorded. From the calibrated, synchronized, aligned, and timestamped frames, about 9700 image-depth map pairs were selected for sharpness and variety. We provide this dataset to the community to fill a need identified in our own research and hope it will accelerate progress in robots navigating the challenging forest environment. This paper describes our custom hardware and methodology to collect the data, subsequent processing and quality of the data, and how to access it.
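
For readers who want to experiment with image-depth pairs of the kind described above, the following Python sketch shows one common way to load an RGB frame with its aligned depth map and score frame sharpness via the variance of the Laplacian. The file names, the depth encoding, and the sharpness metric itself are illustrative assumptions; the paper does not specify which selection criterion the authors actually used.

```python
# Minimal sketch: load an RGB/depth pair and compute a sharpness score.
# File names and the depth encoding are hypothetical; consult the dataset
# documentation for the actual directory layout and formats.
import cv2
import numpy as np


def load_pair(rgb_path: str, depth_path: str):
    """Load an RGB frame and its aligned depth map (paths are placeholders)."""
    rgb = cv2.imread(rgb_path, cv2.IMREAD_COLOR)          # H x W x 3, BGR
    depth = cv2.imread(depth_path, cv2.IMREAD_UNCHANGED)  # e.g. 16-bit depth (assumed)
    if rgb is None or depth is None:
        raise FileNotFoundError("Could not read the RGB image or depth map")
    return rgb, depth


def sharpness_score(rgb: np.ndarray) -> float:
    """Variance of the Laplacian: a common focus measure; higher means sharper."""
    gray = cv2.cvtColor(rgb, cv2.COLOR_BGR2GRAY)
    return float(cv2.Laplacian(gray, cv2.CV_64F).var())


if __name__ == "__main__":
    rgb, depth = load_pair("frame_000123_rgb.png", "frame_000123_depth.png")
    print("sharpness:", sharpness_score(rgb))
    print("depth range:", depth.min(), "-", depth.max())
```

A threshold on this score (or simply ranking frames by it) is one way to discard motion-blurred frames before training a monocular depth predictor on the dataset.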
