It is one thing to develop innovative systems for driving assistance, autonomous robots, and other automated systems, but one way or another, such systems will have to interact with human beings. Users are often not specifically trained to operate or understand these systems (think about how you learned to use the first cruise control system you encountered in a car).
Hence the need to design intuitive, easy-to-use functions, and to verify them with on-field tests and studies: how does a driver, pilot, or operator interact with the system and react to its alerts? What is the cognitive effect of a given system on its operator?
Naturalistic driving studies, team coordination studies, and other behavioral studies often require the acquisition and logging of data from multiple high-bandwidth, heterogeneous sensors in order to capture as much information as possible about the subject(s) of the study (driver, team members, operator…) and the complete surrounding environment. Such a setup can combine multiple sensors: video cameras, CAN bus data, oculometers, radars, physiological sensors, audio, analog and digital inputs, etc.
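The core requirement described above is merging heterogeneous sensor streams into one timestamped log so the original sequence of events can be reproduced later. The following Python sketch illustrates that idea in a generic way; the class and stream names are hypothetical and this is not the RTMaps API.

```python
import time
from dataclasses import dataclass, field
from typing import Any, List


@dataclass
class Sample:
    """One timestamped reading from any sensor stream."""
    stream: str       # e.g. "camera_front", "can_bus", "gaze" (hypothetical names)
    timestamp: float  # acquisition time, in seconds
    payload: Any      # raw sensor data (video frame, CAN message, gaze point...)


@dataclass
class SessionLog:
    """Merged, time-ordered log of heterogeneous sensor streams."""
    samples: List[Sample] = field(default_factory=list)

    def record(self, stream: str, payload: Any) -> None:
        # Each sample is stamped at acquisition time, regardless of its source.
        self.samples.append(Sample(stream, time.time(), payload))

    def replay_order(self) -> List[Sample]:
        # Playback reproduces the original sequence of events,
        # so samples are sorted by acquisition timestamp.
        return sorted(self.samples, key=lambda s: s.timestamp)
```

Timestamping every sample at acquisition, rather than relying on arrival order, is what makes faithful offline replay of asynchronous sensors possible.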
RTMaps allows setting up such applications in a few clicks, covering both acquisition with datalogging and playback, which enables offline reproduction and visualization of the sequence of events and data streams captured by the multiple sensors for post-analysis of test sessions. RTMaps also provides advanced datalogging features such as pre-triggered recording, which records only the interesting moments of long acquisition campaigns (some moments before an event and some moments after, so that both the situation leading to an incident and what followed it can be fully visualized), and distributed recording, which allows visualizing a scene recorded from several distant points of view (from inside the vehicle, from the infrastructure, from a nearby vehicle…).
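The pre-triggered recording idea can be sketched with a rolling buffer: recent samples are kept in a fixed-size window, and when a trigger fires, that window plus a number of post-trigger samples is saved as a clip. This is a generic Python illustration of the principle, not the RTMaps implementation; all names and parameters are hypothetical.

```python
from collections import deque


class PreTriggeredRecorder:
    """Keeps a rolling window of recent samples; on a trigger event,
    saves that window plus a fixed number of post-trigger samples."""

    def __init__(self, pre_samples: int, post_samples: int):
        self.pre = deque(maxlen=pre_samples)  # rolling pre-trigger buffer
        self.post_samples = post_samples
        self.post_remaining = 0
        self.clips = []        # finished recordings (lists of samples)
        self._current = None   # clip currently being finalized

    def push(self, sample):
        if self._current is not None:
            # A trigger fired recently: append post-trigger samples.
            self._current.append(sample)
            self.post_remaining -= 1
            if self.post_remaining == 0:
                self.clips.append(self._current)
                self._current = None
        else:
            # No trigger pending: old samples fall off the rolling window.
            self.pre.append(sample)

    def trigger(self):
        # Freeze the pre-trigger window and start counting post samples.
        if self._current is None:
            self._current = list(self.pre)
            self.post_remaining = self.post_samples
```

With `pre_samples=3` and `post_samples=2`, a trigger after a long stream yields a clip containing the three samples before the event and the two after it, so storage stays bounded no matter how long the campaign runs.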
RTMaps also provides specialized features such as interfaces with third-party analysis software like Excel® or MATLAB®, interfaces with simulators (such as Oktal SCANeR Studio) for studying human behavior in reproducible or dangerous situations, and "time marker" components, which detect specific situations manually or automatically (with scriptable events) and allow seeking directly to them in playback mode during post-analysis.
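The time-marker mechanism amounts to storing labeled timestamps during acquisition and, during playback, jumping to the marker nearest a requested time. A minimal Python sketch of that lookup, assuming markers are kept sorted by timestamp (again a generic illustration, not the RTMaps components):

```python
from bisect import bisect_left


class TimeMarkers:
    """Stores labeled time markers and finds the next one at or after
    a requested playback time."""

    def __init__(self):
        self.markers = []  # sorted list of (timestamp, label) pairs

    def add(self, timestamp: float, label: str) -> None:
        self.markers.append((timestamp, label))
        self.markers.sort()

    def seek_next(self, t: float):
        # First marker whose timestamp is >= t, or None if past the last one.
        i = bisect_left(self.markers, (t,))
        return self.markers[i] if i < len(self.markers) else None
```

During post-analysis, an operator (or a script) can then jump from event to event instead of scanning hours of recordings linearly.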