In recent years, I have been working on the projects ARIDuA (Autonomous Robots and the Internet of Things in Underground Facilities) and HoloMine. My current research focuses on underground robotics, computer vision, depth data, and localization.
In recent years, depth sensors have become increasingly affordable and have found their way into a growing number of robotic systems. However, mono- or multi-modal sensor registration, often a necessary step for further processing, faces many challenges on raw depth images or point clouds. This paper presents a method of converting depth data into images capable of visualizing spatial details that are largely hidden in traditional depth images. After noise removal, each point's neighborhood forms two normal vectors whose difference is encoded in this new conversion. Compared to Bearing Angle images, our method yields brighter, higher-contrast images with more visible contours and more details. We tested feature-based pose estimation of both conversions in a visual odometry task and RGB-D SLAM. For all tested features, AKAZE, ORB, SIFT, and SURF, our new Flexion images yield better results than Bearing Angle images and show great potential to bridge the gap between depth data and classical computer vision. Source code is available here: https://rlsch.github.io/depth-flexion-conversion.
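The sketch below illustrates the general idea as described in the abstract: for each pixel of an organized point cloud, estimate two surface normals from different neighbor pairs and encode their disagreement as a grayscale intensity. The neighborhood layout, normalization, and scaling are my own assumptions for illustration, not the paper's exact formulation; see the linked source code for the actual implementation.

```python
import numpy as np

def flexion_like_image(points, eps=1e-9):
    """Convert an organized point cloud (H x W x 3) into a grayscale image
    by encoding the disagreement between two locally estimated normals.

    Illustrative sketch only: neighbor choice and encoding are assumptions,
    not the exact Flexion formulation from the paper.
    """
    # Difference vectors to horizontal/vertical and diagonal neighbors
    dx = points[1:-1, 2:] - points[1:-1, :-2]   # left-right neighbors
    dy = points[2:, 1:-1] - points[:-2, 1:-1]   # up-down neighbors
    d1 = points[2:, 2:] - points[:-2, :-2]      # one diagonal
    d2 = points[2:, :-2] - points[:-2, 2:]      # other diagonal

    # Two normals per pixel, one from each neighbor pair
    n1 = np.cross(dx, dy)
    n2 = np.cross(d1, d2)
    n1 /= np.linalg.norm(n1, axis=-1, keepdims=True) + eps
    n2 /= np.linalg.norm(n2, axis=-1, keepdims=True) + eps

    # Encode normal agreement (|cos| of the angle between them) as intensity
    flexion = np.abs(np.sum(n1 * n2, axis=-1))
    return (255 * flexion).astype(np.uint8)
```

On flat surfaces both normals agree and the pixel stays bright, while edges and fine relief produce disagreement and darker contours, which is the kind of structure that standard feature detectors such as ORB or SIFT can then pick up.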
Design of an Autonomous Robot for Mapping, Navigation, and Manipulation in Underground Mines
Robert Lösch, Steve Grehl, Marc Donner, and 2 more authors
In 2018 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), Oct 2018
Underground mines are a dangerous working environment and, therefore, robots could help put fewer humans at risk. Traditional robots, sensors, and software often do not work reliably underground due to the harsh environment. This paper analyzes requirements and presents a robot design capable of navigating autonomously underground and manipulating objects with a robotic arm. The robot's base is a robust four-wheeled platform powered by electric motors and able to withstand the harsh environment. It is equipped with color and depth cameras, lighting, laser scanners, an inertial measurement unit, and a robotic arm. We conducted two experiments testing mapping and autonomous navigation. Mapping a 75-meter-long route including a loop closure results in a map that qualitatively matches the original map to a good extent. Testing autonomous driving on a previously created map of a second, straight, 150-meter-long route was also successful. However, without loop closure, rotation errors cause apparent deviations in the created map. These first experiments showed the robot's operability underground.
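To make the last point concrete: even a small, constant heading error accumulates into a visible lateral offset over a long straight drive when no loop closure can correct it. The 1° bias below is purely an illustrative assumption, not a value measured in the experiment.

```python
import math

# Hypothetical constant heading bias (degrees); illustrative only
heading_error_deg = 1.0
route_length_m = 150.0

# Lateral deviation at the end of a straight drive with that bias
lateral_offset_m = route_length_m * math.sin(math.radians(heading_error_deg))
print(f"{lateral_offset_m:.1f} m lateral offset after {route_length_m:.0f} m")
# -> about 2.6 m of apparent deviation from a single degree of bias
```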