Mobile robots are equipped with a multitude of sensors and actuators to perceive their environment and interact with it. They are also built on different architectures (wheeled robots, humanoid robots, aerial drones, marine and submarine vehicles, often referred to as UGVs, UAVs, etc.). This makes robotics a multidisciplinary field spanning mechanics, electronics, sensing, software architecture design, perception algorithm design, control, HMI design, and last but not least… software programming to instantiate and orchestrate all of the above.
RTMaps has been a major help for many roboticists in setting up fully autonomous, semi-autonomous or remotely operated mobile robots. The RTMaps modular architecture and the numerous components supporting most commonly used sensors and actuators provide not only an abstraction of the hardware layer but also a way to rapidly integrate, test, and deploy perception, decision and control algorithms.
Concerning perception functions, RTMaps provides a native capacity to integrate algorithms for image processing, sensor data fusion, SLAM, positioning, etc. into your application; these can be developed in C/C++, purchased from third-party specialists, and/or shared as binary plugins (with or without source code) with your partners. Sensor data recording and playback also allow you to work offline on such algorithms without requiring the real robot to be available (robots often spend most of the development stage disassembled and non-functional).
In order to develop and test control algorithms, it is necessary to close the loop and act on the system. This is of course impossible while working on pre-recorded sensor datasets. However, RTMaps provides interfaces to various robot simulators (anyKode Marilou, Webots, Blender/MORSE…), which make it possible to develop complete applications, including decision and control functions, without ever connecting to a real robot. Once the application works in the simulated environment, porting it to a real robot is easy: a few clicks replace the simulator interface components with the real sensor and actuator interface components.
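The component-swap idea described above can be sketched in plain Python. This is a conceptual illustration of the hardware-abstraction pattern, not the RTMaps API; all class and function names here are hypothetical:

```python
# Conceptual sketch of simulator-to-hardware component swapping
# (hypothetical names, NOT the RTMaps API): the algorithm depends only
# on an abstract sensor interface, so a simulator-backed component can
# be replaced by a hardware-backed one without touching the algorithm.
from abc import ABC, abstractmethod


class RangeSensor(ABC):
    @abstractmethod
    def read(self) -> list:
        """Return one scan as a list of range measurements in meters."""


class SimulatedRangeSensor(RangeSensor):
    """Stands in for a simulator interface component."""
    def read(self) -> list:
        return [5.0, 4.8, 5.2]  # canned values a simulator might produce


def min_obstacle_distance(sensor: RangeSensor) -> float:
    # Algorithm code: unaware of whether the data is simulated or real.
    return min(sensor.read())


print(min_obstacle_distance(SimulatedRangeSensor()))  # → 4.8
```

Porting to hardware then amounts to providing a second `RangeSensor` subclass that reads from the real device driver and passing that instance instead.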
- Many robotics sensors and actuators supported (Hokuyo and SICK laser scanners, cameras, Kinect, IMUs, I2C sensors, etc.)
- Lightweight runtime engine (can fit into small mobile robots)
- Development, testing, validation and benchmarking of processing and fusion algorithms (easily integrated via the RTMaps C++ SDK)
- Data timestamping, latency measurement, downstream resynchronization
- Graphical and/or C++ programming
- Interoperability with robotics simulators (anyKode Marilou, Blender/MORSE, …)
- Interoperability with ROS
- Interoperability with Qt Modeling Language (QML) for quick and easy embedded HMI developments
- Data stream record/playback capability for offline development and validation as well as robot behaviour analysis
- RTMaps applications can be integrated in third-party software written in C/C++, C#, Java, Python and QML
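As an illustration of the timestamping and downstream-resynchronization idea listed above, here is a minimal sketch in plain Python (not the RTMaps API; the function name is hypothetical): each sample of a reference stream is paired with the latest sample of another stream recorded at or before its timestamp.

```python
# Minimal sketch of downstream resynchronization by timestamp
# (hypothetical helper, NOT the RTMaps API). Streams are lists of
# (timestamp_seconds, value) tuples, sorted by timestamp.
from bisect import bisect_right


def resync(reference, other):
    """Pair each (t, value) sample of `reference` with the most recent
    sample of `other` whose timestamp is <= t (None if none exists yet)."""
    times = [t for t, _ in other]
    pairs = []
    for t, v in reference:
        i = bisect_right(times, t) - 1  # index of latest sample at or before t
        match = other[i][1] if i >= 0 else None
        pairs.append((t, v, match))
    return pairs


# Example: a 10 Hz lidar stream resynchronized against a slower GPS stream.
lidar = [(0.0, "scan0"), (0.1, "scan1"), (0.2, "scan2"), (0.3, "scan3")]
gps = [(0.05, "fix0"), (0.25, "fix1")]
print(resync(lidar, gps))
# → [(0.0, 'scan0', None), (0.1, 'scan1', 'fix0'),
#    (0.2, 'scan2', 'fix0'), (0.3, 'scan3', 'fix1')]
```

The same pairing principle generalizes to interpolation between neighbouring samples when sensors run at very different rates.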
SOME APPLICATION EXAMPLES
The ARGOS challenge
The VIKINGS team from IRSEEM won the 2015 edition of the ARGOS Challenge
SLAM-based applications (localization and mapping) - Ecole des Mines-Paristech (Center of Robotics)
In order to help improve the capabilities of robots for localization, mapping and terrain analysis, the ANR and the DGA initiated the CAROTTE challenge. Five teams were selected for the challenge, CoreBots being one of them (Ecole des Mines-Paristech, Intempora, INRIA, Epitech). Each team had to develop an autonomous robot able to navigate an indoor environment and recognize objects in the rooms in order to build a map of the environment with semantic information.
Integration of SRI Karto SLAM in the RTMaps suite. This demonstration was made during the IROS 2011 conference in San Francisco. The robot carried a Velodyne HDL-32 laser scanner and a Microsoft Kinect, and was able to build 2D and 3D models of its environment.