Navigation
Teach and Repeat | Visual Place Recognition | Exploration, Search, and Control
We primarily focus on research in mobile robot navigation in complex and uncertain environments without additional navigation infrastructure. Our solutions rely on sensing and perception of the robot workspace, using its intrinsic features such as shape, structure, and visual appearance. We aim mainly at vision-based navigation approaches designed for life-long autonomy: operating in large-scale environments over extended periods of time and remaining robust to variations in scene shape and structure, as well as to external conditions, including daylight, seasonal, and weather changes. The research is split into three main streams: teach and repeat navigation, visual place recognition, and research related to general tasks in mobile robot autonomy.
Visual Teach and Repeat Navigation
Visual Teach and Repeat (VT&R) navigation addresses the task of autonomous navigation along a previously traversed trajectory. The problem relaxes some of the limitations of traditional SLAM approaches and allows mapping and navigation in challenging environments unsuitable for such methods. Our research in VT&R focuses on:
- Robustness under challenging environmental conditions – we study approaches suitable for navigation under low visibility, in dynamic environments, and in environments with a low density of visual features. We also focus on the robustness of the developed methods under perturbations, such as the wake-up robot problem or recovery from sensory deprivation.
- Salient visual feature extraction and exploitation – we explore the benefits of salient and semantic features in visual navigation. This area of research aims to overcome the limitations of traditional visual features such as SIFT, SURF, or ORB; a minimal matching sketch follows this list.
- Sensor fusion – although some of our navigation systems are fully vision-based, we also study approaches integrating other sensory information. One of our aims is the development of standalone navigation methods based on different sensors, increasing the modularity and applicability of the developed systems.
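To make the feature-based machinery behind VT&R concrete, the sketch below matches ORB features between a stored teach image and the current repeat image and derives a lateral correction from the median horizontal offset of the matches. This is a minimal sketch of the general principle only; the `heading_correction` helper and the surrounding control loop are assumptions for this example, not our published system.

```python
import cv2
import numpy as np

def heading_correction(teach_img, repeat_img, max_matches=100):
    """Estimate the horizontal offset between a mapped (teach) image and
    the current (repeat) image from ORB feature matches. The sign of the
    offset indicates which way to steer back toward the taught path.
    (Illustrative helper, not the published VT&R pipeline.)"""
    orb = cv2.ORB_create(nfeatures=500)
    kp_t, des_t = orb.detectAndCompute(teach_img, None)
    kp_r, des_r = orb.detectAndCompute(repeat_img, None)
    if des_t is None or des_r is None:
        return None  # feature-poor scene: no correction available

    # Brute-force Hamming matching, as is usual for binary ORB descriptors.
    matcher = cv2.BFMatcher(cv2.NORM_HAMMING, crossCheck=True)
    matches = sorted(matcher.match(des_t, des_r), key=lambda m: m.distance)

    # The median horizontal displacement of the best matches is a robust
    # proxy for the heading error relative to the taught trajectory.
    dx = [kp_r[m.trainIdx].pt[0] - kp_t[m.queryIdx].pt[0]
          for m in matches[:max_matches]]
    return float(np.median(dx)) if dx else None
```

In a complete repeat run, such a correction would typically be fed into the steering controller at each step while the robot replays the taught trajectory.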
Publications
- Kozák, V., Pivoňka, T., Avgoustinakis, P., Majer, L., Kulich, M., Přeučil, L., and Camara, L. G. (2021). Robust Visual Teach and Repeat Navigation for Unmanned Aerial Vehicles. 2021 European Conference on Mobile Robots (ECMR), pp. 1-7.
@inproceedings{Kozak2021ecmr, author={Kozák, Viktor and Pivoňka, Tomáš and Avgoustinakis, Pavlos and Majer, Lukáš and Kulich, Miroslav and Přeučil, Libor and Camara, Luis G.}, booktitle={2021 European Conference on Mobile Robots (ECMR)}, title={Robust Visual Teach and Repeat Navigation for Unmanned Aerial Vehicles}, year={2021}, pages={1-7}, doi={10.1109/ECMR50962.2021.9568807} }
- Pivoňka, T., Přeučil, L. (2021). ORB-SLAM2 Based Teach-and-Repeat System. Modelling and Simulation for Autonomous Systems (MESAS 2020). Lecture Notes in Computer Science, vol 12619. Springer, Cham, pp. 294-307.
@inproceedings{Pivonka2021ORB2TaR, author="Pivo{\v{n}}ka, Tom{\'a}{\v{s}} and P{\v{r}}eu{\v{c}}il, Libor", editor="Mazal, Jan and Fagiolini, Adriano and Vasik, Petr and Turi, Michele", title="ORB-SLAM2 Based Teach-and-Repeat System", booktitle="Modelling and Simulation for Autonomous Systems", year="2021", publisher="Springer International Publishing", address="Cham", pages="294--307" }
- Camara, L. G., Pivoňka, T., Jílek, M., Gäbert, C., Košnar, K., Přeučil, L. (2020). Accurate and Robust Teach and Repeat Navigation by Visual Place Recognition: A CNN Approach. 2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS), pp. 6018-6024.
@inproceedings{camara2020tar, author={Camara, Luis G. and Pivoňka, Tomáš and Jílek, Martin and Gäbert, Carl and Košnar, Karel and Přeučil, Libor}, booktitle={2020 IEEE/RSJ International Conference on Intelligent Robots and Systems (IROS)}, title={Accurate and Robust Teach and Repeat Navigation by Visual Place Recognition: A CNN Approach}, year={2020}, pages={6018-6024}, doi={10.1109/IROS45743.2020.9341764}}
Visual Place Recognition
The Visual Place Recognition (VPR) task is an indispensable part of numerous applications, such as autonomous driving, robot localization, augmented and virtual reality, and geolocalization. Given a query image, the task is to find its location by matching it against a database of images of previously visited places. We apply methods based on semantic and spatial matching of high-level CNN features, taking advantage of their strong descriptive power, and address the following challenges:
- Long-term autonomy and environmental changes – we study methods that increase robustness to illumination and structural changes, varying seasonal and weather conditions, and the presence of dynamic and changing objects such as pedestrians, cars, or trees.
- Large-scale VPR – depending on the problem domain, the computational and memory requirements may vary significantly. We address the scalability of the developed methods and models and explore supporting methods that increase the performance of the whole system; a minimal retrieval sketch follows this list.
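As a concrete baseline for CNN-feature-based place recognition, the sketch below encodes images with a pretrained backbone into global descriptors and retrieves the nearest database place by cosine similarity. It is a minimal illustration under stated assumptions: the backbone choice (ResNet-18), the `describe` and `recognize` helpers, and the flat descriptor database are inventions of this example. Our publications use spatial and semantic matching of high-level CNN features rather than a single global descriptor.

```python
import torch
import torchvision.models as models
import torchvision.transforms as T

# A pretrained backbone truncated before the classifier serves as the
# feature extractor; global average pooling yields one descriptor per image.
backbone = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
extractor = torch.nn.Sequential(*list(backbone.children())[:-1]).eval()

preprocess = T.Compose([
    T.Resize((224, 224)),
    T.ToTensor(),
    T.Normalize(mean=[0.485, 0.456, 0.406], std=[0.229, 0.224, 0.225]),
])

@torch.no_grad()
def describe(pil_image):
    """Map a PIL image to an L2-normalized global CNN descriptor."""
    x = preprocess(pil_image).unsqueeze(0)   # (1, 3, 224, 224)
    d = extractor(x).flatten(1)              # (1, 512)
    return torch.nn.functional.normalize(d, dim=1).squeeze(0)

def recognize(query_desc, db_descs):
    """Return the index of the most similar database place and its score
    (dot product equals cosine similarity for unit-length descriptors)."""
    sims = torch.stack(db_descs) @ query_desc  # (N,)
    return int(sims.argmax()), float(sims.max())
```

For large-scale databases, the brute-force dot product shown here is exactly the part that scalability work replaces with approximate nearest-neighbor search or descriptor compression.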
Publications
- Camara, L. G., Přeučil, L. (2020). Visual Place Recognition by spatial matching of high-level CNN features. Robotics and Autonomous Systems, vol. 133, 103625.
@article{CAMARA2020highlevel, title = {{Visual Place Recognition by spatial matching of high-level {CNN} features}}, journal = {Robotics and Autonomous Systems}, volume = {133}, pages = {103625}, year = {2020}, issn = {0921-8890}, author = {Luis G. Camara and Libor Přeučil}, doi={10.1016/j.robot.2020.103625}}
- Camara, L. G., Gäbert, C., Přeučil, L. (2020). Highly Robust Visual Place Recognition Through Spatial Matching of CNN Features. 2020 IEEE International Conference on Robotics and Automation (ICRA), Paris, pp. 3748-3755.
@inproceedings{Camara2020VPR, author={Camara, Luis G. and Gäbert, Carl and Přeučil, Libor}, booktitle={2020 IEEE International Conference on Robotics and Automation (ICRA)}, title={Highly Robust Visual Place Recognition Through Spatial Matching of CNN Features}, year={2020}, pages={3748-3755}, doi={10.1109/ICRA40945.2020.9196967}}
- Camara, L. G., Přeučil, L. (2019). Spatio-Semantic ConvNet-Based Visual Place Recognition. 2019 European Conference on Mobile Robots (ECMR), Prague: IEEE, pp. 1-8. ISBN 978-1-7281-3605-9.
@inproceedings{Camara2019VPR, author={Camara, Luis G. and Přeučil, Libor}, booktitle={2019 European Conference on Mobile Robots (ECMR)}, title={Spatio-Semantic ConvNet-Based Visual Place Recognition}, year={2019}, pages={1-8}, doi={10.1109/ECMR.2019.8870948}}
Exploration, Search, and Control
Mobile robot autonomy encompasses a wide variety of applications and subproblems. Our focus in this area is closely connected with our research on planning and optimization. The developed methods take into account limitations arising from the practical deployment of robots. Depending on the task and the problem domain, we aim to develop and improve advanced methods for:
- Autonomous exploration and mapping – we research exploration strategies and various levels of map representation for previously unvisited (unknown) environments; a minimal frontier-detection sketch follows this list.
- Surveying and search – we develop complete autonomous solutions supported by advanced methods generating high-quality collision-free trajectories for autonomous surveying (or search) systems.
- Control – system integration and low-level motion control play an indispensable part in autonomous navigation.
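As a minimal illustration of the exploration-strategy building blocks mentioned above, the sketch below detects frontier cells (free cells bordering unknown space) in a 2-D occupancy grid; such cells are the classic goal candidates of frontier-based exploration. The grid encoding and the `frontier_cells` helper are assumptions of this example, not one of our published methods.

```python
import numpy as np

FREE, OCC, UNKNOWN = 0, 1, -1  # assumed occupancy-grid encoding

def frontier_cells(grid):
    """Return (row, col) indices of free cells that border unknown space
    in a 2-D occupancy grid: the frontier of the explored region."""
    frontiers = []
    rows, cols = grid.shape
    for r in range(rows):
        for c in range(cols):
            if grid[r, c] != FREE:
                continue
            # A free cell with an unknown 4-connected neighbor lies on
            # the boundary between explored and unexplored space.
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nr, nc = r + dr, c + dc
                if 0 <= nr < rows and 0 <= nc < cols and grid[nr, nc] == UNKNOWN:
                    frontiers.append((r, c))
                    break
    return frontiers

# Usage: a small explored patch inside an otherwise unknown map.
grid = np.full((50, 50), UNKNOWN)
grid[20:30, 20:30] = FREE
goals = frontier_cells(grid)  # candidate navigation goals on the frontier
```

An exploration strategy then selects among such candidates, e.g. by travel cost or expected information gain, and replans as the map grows.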
Publications
- Chudoba, J., Kozák, V., and Přeučil, L. (2019). MUAVET – An Experimental Test-Bed for Autonomous Multi-rotor Applications. Modelling and Simulation for Autonomous Systems (MESAS 2018), Springer, Cham, pp. 16-26.
@inproceedings{Chudoba19mesas, author={Chudoba, J. and Kozák, Viktor and Přeučil, Libor}, title={MUAVET -- An Experimental Test-Bed for Autonomous Multi-rotor Applications}, booktitle={Modelling and Simulation for Autonomous Systems}, year={2019}, publisher={Springer International Publishing}, address={Cham, CH}, pages={16--26}, isbn={978-3-030-14984-0} }
- Kulich, M., Kozák, V., and Přeučil, L. (2018). An Integrated Approach to Autonomous Environment Modeling. Modelling and Simulation for Autonomous Systems (MESAS 2017). Lecture Notes in Computer Science, vol 10756.
@inproceedings{Kulich18mesas, author = {Kulich, Miroslav and Kozák, Viktor and Přeučil, Libor}, title = {An Integrated Approach to Autonomous Environment Modeling}, booktitle = {Modelling and Simulation for Autonomous Systems (MESAS 2017)}, publisher = {Springer International Publishing AG}, address = {Cham, CH}, year = {2018}, series = {Lecture Notes in Computer Science}, volume = {10756}, language = {English}, url = {https://link.springer.com/chapter/10.1007/978-3-319-76072-8_1}, doi = {10.1007/978-3-319-76072-8_1} }
- Kulich, M., Kozák, V., and Přeučil, L. (2015). Comparison of Local Planning Algorithms for Mobile Robots. Modelling and Simulation for Autonomous Systems (Second International Workshop, MESAS 2015).
@inproceedings{Kulich15mesasComp, author = {Kulich, Miroslav and Kozák, Viktor and Přeučil, Libor}, title = {Comparison of Local Planning Algorithms for Mobile Robots}, booktitle = {Modelling and Simulation for Autonomous Systems (Second International Workshop, MESAS 2015)}, publisher = {Springer International Publishing AG}, address = {Cham, CH}, year = {2015}, language = {English}, url = {https://link.springer.com/chapter/10.1007/978-3-319-22383-4_15}, doi = {10.1007/978-3-319-22383-4_15} }
Contact: Libor Přeučil