SLAM and Autonomous Navigation
Updated: May 6
When launching an AUV (Autonomous Underwater Vehicle) such as UFRJ Nautilus's in
unknown waters, the vehicle must be able to locate itself while building a map of its surroundings. For this to be possible, algorithms such as SLAM (Simultaneous Localization and Mapping) are needed, especially when integrated with ROS (Robot Operating System). With good autonomous navigation, the range of tasks the robot is capable of carrying out expands considerably.
The acronym SLAM stands for Simultaneous Localization And Mapping: the robot perceives its location in relation to a reference point while also creating a map, mainly to detect obstacles in the environment. Once the robot is equipped with suitable sensors, this technique allows it to adapt its trajectory in any environment without the need for external commands while it performs its activities. But how does such a technique work?
SLAM algorithms work in three layers. At their core, they rely on position estimation and sensor fusion systems; among these we can highlight Kalman Filters (for linear systems) and Extended Kalman Filters, Unscented Kalman Filters and Particle Filters (the latter three for non-linear systems, each with specific restrictions that will be covered in other posts). Another layer of SLAM is the frames (the coordinate system proper to each sensor). In the outermost layer, we have the raw data provided by the sensors. As each sensor reports data relative to its own frame, that data must be converted to the robot's frame (located at its center of mass) to avoid inconsistencies. In general, the odometry data (responsible for localization) and the point cloud (responsible for the map) provided by the sensors are transformed, associated and fused in order to enable simultaneous localization and mapping.
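As a concrete illustration of that estimation core, here is a minimal one-dimensional Kalman filter that fuses noisy odometry (the prediction) with a noisy absolute position measurement (the update). It is only a sketch under simplified assumptions (scalar state, hand-picked noise variances), not the multidimensional EKF/UKF variants a real SLAM system runs:

```python
# Minimal 1-D Kalman filter sketch: fusing odometry with an absolute
# position measurement. Noise variances q and r are assumed values
# chosen for the example, not taken from any real sensor.

def kalman_step(x, p, u, z, q=0.1, r=0.5):
    """One predict/update cycle.
    x, p : current state estimate and its variance
    u    : displacement reported by odometry since the last step
    z    : absolute position measurement (e.g. from an acoustic beacon)
    q, r : process and measurement noise variances
    """
    # Predict: move by the odometry displacement; uncertainty grows.
    x_pred = x + u
    p_pred = p + q
    # Update: blend prediction and measurement via the Kalman gain.
    k = p_pred / (p_pred + r)
    x_new = x_pred + k * (z - x_pred)
    p_new = (1.0 - k) * p_pred
    return x_new, p_new

# Example: start at 0 with high uncertainty; odometry says +1.0 m per
# step while beacon readings drift slightly. The estimate settles
# between the two sources and its variance shrinks at every step.
x, p = 0.0, 1.0
for z in [1.1, 2.05, 2.9]:
    x, p = kalman_step(x, p, u=1.0, z=z)
```

Note how neither source is trusted blindly: the gain `k` weighs the measurement against the prediction according to their uncertainties, which is exactly the idea that the non-linear variants generalize.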
As applying these algorithms can be extremely complex and computationally demanding, there are pre-programmed, ROS-compatible packages capable of carrying out these tasks in an optimized and "user-friendly" way. The set of packages called ROS Navigation works with SLAM algorithms such as Cartographer (from Google), RTAB-Map and ORB-SLAM, as well as packages maintained by the ROS community itself, such as robot_localization and karto, among others. Implementing these in this programming environment not only enables autonomous navigation through its own Path Planning, but also standardizes and integrates the control input into the localization and mapping system, thus increasing the quality of the SLAM. In addition, the package set can autonomously avoid collisions with detected objects, which becomes especially important when revisiting environments containing objects or animals that move around.
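To give an idea of what the Path Planning side of a navigation stack does once a map exists, the sketch below runs a plain A* search over a small occupancy grid. It is a self-contained illustration, not the actual planner shipped with ROS Navigation, and the grid, costs and heuristic are invented for the example:

```python
# Toy global planner: A* over a 4-connected occupancy grid, where
# 0 marks a free cell and 1 an obstacle detected during mapping.

import heapq
import itertools

def astar(grid, start, goal):
    """Return a list of (row, col) cells from start to goal, or None."""
    rows, cols = len(grid), len(grid[0])
    def h(c):  # Manhattan-distance heuristic (admissible on this grid)
        return abs(c[0] - goal[0]) + abs(c[1] - goal[1])
    tie = itertools.count()          # tie-breaker so the heap never compares nodes
    open_set = [(h(start), 0, next(tie), start, None)]
    came_from, g_cost = {}, {start: 0}
    while open_set:
        _, g, _, cur, parent = heapq.heappop(open_set)
        if cur in came_from:         # already expanded with a better cost
            continue
        came_from[cur] = parent
        if cur == goal:              # walk parents back to reconstruct the path
            path = []
            while cur is not None:
                path.append(cur)
                cur = came_from[cur]
            return path[::-1]
        r, c = cur
        for nxt in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            nr, nc = nxt
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                if nxt not in g_cost or g + 1 < g_cost[nxt]:
                    g_cost[nxt] = g + 1
                    heapq.heappush(open_set, (g + 1 + h(nxt), g + 1, next(tie), nxt, cur))
    return None                      # goal unreachable

# A small map with a wall the planner must route around.
grid = [[0, 0, 0, 0],
        [1, 1, 1, 0],
        [0, 0, 0, 0]]
path = astar(grid, (0, 0), (2, 0))
```

A real planner works on costmaps with inflated obstacle radii rather than binary cells, but the principle is the same: the map produced by SLAM becomes the search space for the trajectory.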
At UFRJ Nautilus, we take the liberty of using some of these algorithms in our own AUV! With a more user-friendly interface, it becomes easier to work with code that would otherwise be extremely complex. With a reliable localization system and a differentiated model (read the post about our AUV to learn more), we are able to accomplish much more than a common AUV. We do, however, plan to develop our own SLAM system, since the physical models of the pre-existing algorithms are not always compatible with the environment in which our AUV operates. Although we lose some simplicity of use this way, we gain much more freedom to adapt SLAM to our needs.
Therefore, whether through ROS or through its own algorithm, SLAM proves to be one of the main tools in the world of autonomous navigation. The next time the AUV sets out into unknown waters, we will not worry: it will carry a map with it wherever it goes.
Written by Gustavo Villela.