Exploration Algorithm in a Polygonal Domain

EAPD (Exploration Algorithm in a Polygonal Domain) aims to provide a tool for exploration of an unknown area by a mobile robot. By exploration we understand the process of autonomous navigation of a robot in an environment whose map is not known a priori, in order to build that map. The exploration algorithm is an iterative procedure consisting of determining a new goal and navigating towards this goal. The algorithm terminates once a complete map of the environment has been built. A natural requirement is to perform exploration with minimal expected usage of resources, e.g., trajectory length, search time, or energy consumption.

The exploration algorithm consists of several steps that are repeated as long as some unexplored area remains. The process starts with the robot reading current sensor information. After some data processing, the existing map is updated with this information. New goal candidates are then determined, and the next goal is assigned using a defined cost function. Having assigned the goal, the shortest path from the current robot position to the goal is found, and finally the robot is navigated along this path. We follow Yamauchi's frontier-based approach, which assumes that the next best view (goal) lies on the border between free and unexplored areas (this border is called the frontier).
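The loop described above can be sketched as follows. The callback names (`sense`, `update_map`, etc.) are hypothetical placeholders standing in for the actual EAPD components, not names from the implementation:

```python
def exploration_loop(sense, update_map, find_frontiers, cost, plan, navigate):
    """Generic frontier-based exploration loop (sketch).

    All arguments are caller-supplied callbacks; the loop terminates
    once no frontier (border between free and unexplored space) remains.
    Returns the final map and the number of iterations performed.
    """
    steps = 0
    while True:
        scan = sense()                          # read current sensor data
        world_map = update_map(scan)            # integrate the scan into the map
        candidates = find_frontiers(world_map)  # goal candidates on frontiers
        if not candidates:
            return world_map, steps             # map complete -> terminate
        goal = min(candidates, key=cost)        # cheapest candidate wins
        path = plan(goal)                       # shortest path to the goal
        navigate(path)                          # drive the robot along it
        steps += 1
```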

  • Kulich, M., Kozák, V., and Přeučil, L. (2018). An Integrated Approach to Autonomous Environment Modeling. In: Modelling and Simulation for Autonomous Systems (MESAS 2017), Lecture Notes in Computer Science, vol. 10756. Springer International Publishing, Cham. doi: 10.1007/978-3-319-76072-8_1

The approach

Our solution is based on processing RGB-D data provided by an MS Kinect2 sensor, which makes it possible to take advantage of state-of-the-art SLAM (Simultaneous Localization and Mapping) algorithms and to build a realistic 3D map of the environment with projected visual information about the scene.

On the other hand, the robot moves on a plane, and thus exploration can be done in 2D. For this, we employ a polygonal map, which is built on the fly from scratch. More precisely, information about the environment is approximated by a polygon with holes (i.e., an outer polygon representing the border of the working area and containing obstacles as holes). To do that, a 2D scan is simulated from a single measurement of the MS Kinect2 and approximated with a polygon by a combination of the well-known Successive Edge Following, Ramer-Douglas-Peucker, and Least-Squares Fit algorithms. The particular polygons representing scans are then merged by the Vatti algorithm, which was modified to deal with two types of polygon edges: those representing obstacles and those standing for frontiers (i.e., edges between empty and unknown space). In our implementation, we modified the Clipper library, an open-source polygon clipping library based on the Vatti clipping algorithm. The library performs the Boolean clipping operations intersection, union, difference, and XOR; moreover, it performs polygon offsetting.
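As an illustration of the polygon simplification step, here is a minimal Ramer-Douglas-Peucker sketch in plain Python. The actual implementation combines it with Successive Edge Following and Least-Squares fitting, which are omitted here:

```python
import math

def rdp(points, eps):
    """Ramer-Douglas-Peucker polyline simplification (sketch).

    Keeps both endpoints; recursively keeps the interior point whose
    perpendicular distance from the endpoint chord exceeds eps.
    points: list of (x, y) tuples; eps: distance tolerance.
    """
    if len(points) < 3:
        return list(points)
    (x1, y1), (x2, y2) = points[0], points[-1]
    dx, dy = x2 - x1, y2 - y1
    norm = math.hypot(dx, dy) or 1.0
    # find the interior point farthest from the chord
    dmax, imax = 0.0, 0
    for i, (x, y) in enumerate(points[1:-1], 1):
        d = abs(dy * (x - x1) - dx * (y - y1)) / norm
        if d > dmax:
            dmax, imax = d, i
    if dmax <= eps:
        return [points[0], points[-1]]   # everything within tolerance
    # split at the farthest point and simplify both halves
    left = rdp(points[:imax + 1], eps)
    right = rdp(points[imax:], eps)
    return left[:-1] + right             # drop duplicated split point
```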

The map is used for goal candidate selection, planning, and evaluation of the candidates in each exploration step. The selection process determines the candidates so that they lie on frontiers, the distance of a candidate to its neighbors is twice the sensor range, and all frontiers are inside the union of circles centered at the candidates with radius equal to the sensor range. This guarantees that all frontiers will be explored (i.e., it will be detected whether a frontier lies in free space or in an obstacle) after visiting all candidates.
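The candidate placement can be sketched as follows. The function `place_candidates` and its uniform arc-length spacing are illustrative assumptions, not the exact EAPD procedure; the sketch preserves the two stated properties (spacing at most twice the sensor range, every frontier point within sensor range of a candidate, measured along the polyline):

```python
import math

def place_candidates(frontier, sensor_range):
    """Place goal candidates along a frontier polyline so that
    consecutive candidates are at most 2*sensor_range apart (in arc
    length) and every frontier point lies within sensor_range of some
    candidate. frontier: list of (x, y) vertices of the polyline.
    """
    seglen = [math.dist(a, b) for a, b in zip(frontier, frontier[1:])]
    total = sum(seglen)
    # n candidates spaced step = total/n <= 2*sensor_range apart, the
    # first at half a step, so each covers step/2 to both sides
    n = max(1, math.ceil(total / (2 * sensor_range)))
    step = total / n
    targets = [step / 2 + step * k for k in range(n)]
    out, acc, i = [], 0.0, 0
    for t in targets:
        # advance to the segment containing arc-length position t
        while i < len(seglen) - 1 and acc + seglen[i] < t:
            acc += seglen[i]
            i += 1
        (x1, y1), (x2, y2) = frontier[i], frontier[i + 1]
        u = (t - acc) / seglen[i]        # interpolation parameter in [0, 1]
        out.append((x1 + u * (x2 - x1), y1 + u * (y2 - y1)))
    return out
```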

Planning consists of two steps: a visibility graph is computed first, followed by a run of Dijkstra's algorithm, which computes the shortest distances between the robot and each goal candidate. The cost function for evaluating a goal candidate is then simply this distance. For goal selection, a greedy approach is used: the nearest candidate is selected as the next goal.
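A minimal sketch of this planning step, assuming the visibility graph is already given as an adjacency list with Euclidean edge lengths (graph construction is omitted):

```python
import heapq

def dijkstra(graph, source):
    """Shortest-path distances from source in a weighted graph.

    graph: {node: [(neighbor, weight), ...]}, e.g. a visibility graph
    with Euclidean edge lengths. Returns {reachable node: distance}.
    """
    dist = {source: 0.0}
    pq = [(0.0, source)]
    while pq:
        d, u = heapq.heappop(pq)
        if d > dist.get(u, float("inf")):
            continue                      # stale queue entry, skip it
        for v, w in graph.get(u, []):
            nd = d + w
            if nd < dist.get(v, float("inf")):
                dist[v] = nd
                heapq.heappush(pq, (nd, v))
    return dist

def nearest_goal(graph, robot, candidates):
    """Greedy selection: the candidate with the shortest path distance."""
    dist = dijkstra(graph, robot)
    reachable = [c for c in candidates if c in dist]
    return min(reachable, key=dist.__getitem__) if reachable else None
```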

To provide a correct pose of the robot, an RGB-D graph-based SLAM approach based on an incremental appearance-based loop closure detector is employed. This approach, moreover, builds a colored three-dimensional model of the environment, which is the final product of the exploration.


The software part of the system has been realized as a set of ROS (Robot Operating System) nodes. Data from the Kinect sensor are read through the libfreenect2 library, which provides basic drivers for the Kinect, while the connection to ROS is provided by the Kinect2 Bridge library.

As the exploration algorithm works in 2D, the 3D Kinect data are projected onto a plane, i.e., a laser scan is simulated using the Kinect2Laser node. In parallel, raw Kinect data (together with robot odometry) are passed to the SLAM node (we employ RTAB-Map), which (besides creating the 3D map) provides the current robot position and a corrected 2D scan.
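The scan simulation can be sketched for a single depth-image row under a pinhole camera model. Here `fx` and `cx` are hypothetical camera intrinsics (horizontal focal length and principal point in pixels), and the real Kinect2Laser node may work differently:

```python
import math

def depth_row_to_scan(depth_row, fx, cx):
    """Convert one row of a depth image into a planar laser scan (sketch).

    depth_row: per-pixel depth in meters along the optical axis;
    fx, cx: pinhole intrinsics in pixels (sensor-specific assumptions).
    Returns a list of (bearing, range) pairs.
    """
    scan = []
    for u, z in enumerate(depth_row):
        if z is None or z <= 0.0:
            continue                        # skip invalid depth readings
        theta = math.atan2(u - cx, fx)      # bearing of the pixel's ray
        r = z / math.cos(theta)             # Euclidean range in the plane
        scan.append((theta, r))
    return scan
```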

This information goes to the core EAPD module, which consists of three nodes. Map uses the laser scans together with transformations from the robot and creates a polygonal map of the environment; that map is published for the planner and for visualization. Planner employs the provided map and robot position, determines the next goal, and plans a path to this goal. The Robot node processes robot positions and provides them to Map and Planner; it also controls the robot according to the plan from Planner using the Smooth Nearness-Diagram approach. Finally, the ER1 node mediates communication with the real robot.

A detailed scheme of the data transfer between all software modules is provided in the figure below.


A modified version of the ER1 robot by Evolution Robotics was used as the robotic platform for the experiments. This robot operates on flat terrain only, without the ability to overcome even small obstacles or terrain roughness. On the other hand, the precision of its odometry is relatively high (assuming no slippage or skidding occurs).

All computations are done on a single computer placed directly on the robot. Since processing of RGB-D point clouds (mainly the SLAM) is very demanding, an Intel NUC mini computer with a Core i5 processor is used. Both the Kinect 2 and the ER1 robot are connected to the NUC computer via USB (the Kinect 2 via USB 3.0).


Below you can find some images from the experiments.

Page last modified on January 03, 2024, at 11:34 AM EST