MIT Autonomous Vehicle Technology Study

It may be several decades before sensors, algorithms, and data collection are sufficiently developed to “solve” the full driving task. Until that time, human beings will remain an integral part of the driving task, monitoring the AI system as it performs anywhere from just over 0% to just under 100% of the driving. We launched the MIT Autonomous Vehicle Technology (MIT-AVT) study to understand, through large-scale real-world driving data collection and large-scale deep-learning-based parsing of that data, how human-AI interaction in driving can be safe and enjoyable. The emphasis is on objective, data-driven analysis. The following video introduces the study:

See the arXiv paper for details on the methods of data collection, processing, and study design. If you find this work useful in your own research, please cite:

@article{mitavt,
  author = {Lex Fridman and Daniel E. Brown and Michael Glazer and William Angell and Spencer Dodd and Benedikt Jenik and Jack Terwilliger and Aleksandr Patsekin and Julia Kindelsberger and Li Ding and Sean Seaman and Alea Mehler and Andrew Sipperley and Anthony Pettinato and Bobbie Seppelt and Linda Angell and Bruce Mehler and Bryan Reimer},
  title = {{MIT} Autonomous Vehicle Technology Study: Large-Scale Deep Learning Based Analysis of Driver Behavior and Interaction with Automation},
  journal = {CoRR},
  volume = {abs/1711.06976},
  year = {2019},
  url = {https://arxiv.org/abs/1711.06976},
  archivePrefix = {arXiv},
  eprint = {1711.06976}
}

We have instrumented 21 Tesla Model S and Model X vehicles, 2 Volvo S90 vehicles, 2 Range Rover Evoque vehicles, and 2 Cadillac CT6 vehicles for both long-term (over a year per driver) and medium-term (one month per driver) naturalistic driving data collection. Furthermore, we are continually developing new methods for analyzing the massive-scale dataset collected from the instrumented vehicle fleet. The recorded data streams include IMU, GPS, CAN messages, and high-definition video streams of the driver's face, the driver cabin, the forward roadway, and the instrument cluster (on select vehicles). The study is ongoing and growing. To date, we have 99 participants, 11,846 days of participation, 405,807 miles, and 5.5 billion video frames.
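A central challenge with heterogeneous streams like these is that each sensor samples at its own rate, so downstream analysis must first align samples by timestamp. The sketch below shows one common approach, nearest-neighbor alignment within a tolerance window; the function names, rates, and tolerance are illustrative assumptions, not the actual MIT-AVT code.

```python
from bisect import bisect_left

def nearest_sync(reference_ts, stream, tolerance=0.05):
    """For each reference timestamp, return the value of the nearest sample
    in `stream` (a time-sorted list of (timestamp, value) pairs) within
    `tolerance` seconds, or None if no sample is close enough.
    Illustrative sketch only -- not the actual MIT-AVT pipeline."""
    ts = [t for t, _ in stream]
    aligned = []
    for r in reference_ts:
        i = bisect_left(ts, r)
        best = None
        # The nearest sample is either at index i or the one just before it.
        for j in (i - 1, i):
            if 0 <= j < len(ts) and abs(ts[j] - r) <= tolerance:
                if best is None or abs(ts[j] - r) < abs(ts[best] - r):
                    best = j
        aligned.append(stream[best][1] if best is not None else None)
    return aligned

# Example: align 10 Hz GPS fixes to 30 Hz video-frame timestamps.
video_ts = [i / 30 for i in range(6)]
gps = [(0.00, (42.36, -71.09)), (0.10, (42.37, -71.09))]
aligned = nearest_sync(video_ts, gps, tolerance=0.04)
```

The same alignment can be run in the other direction (e.g. CAN messages onto video frames), with the tolerance chosen per sensor based on its sampling rate.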

The backbone of a successful naturalistic driving study is the hardware and low-level software that performs the data collection. In the MIT-AVT study, that role is served by a system named RIDER (Real-time Intelligent Driving Environment Recording system). The following video describes the system:

Building on the robust, reliable, and flexible hardware architecture of RIDER is a vast software framework that records the raw sensory data and takes it through many processing steps, across thousands of GPU-enabled compute cores, to extract knowledge and insights about human behavior in the context of autonomous vehicle technologies. The following figure of the pipeline shows the journey from raw timestamped sensor data to actionable knowledge. The high-level steps are (1) data cleaning and synchronization, (2) automated or semi-automated data annotation, context interpretation, and knowledge extraction, and (3) aggregate analysis and visualization.
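The three high-level steps above can be sketched as a simple stage composition. The record fields, the trivial glance label standing in for deep-learning-based annotation, and all function names are hypothetical assumptions for illustration, not the study's actual code.

```python
def clean(records):
    """Step 1: clean and synchronize -- drop records with missing
    timestamps and sort by time."""
    return sorted((r for r in records if r.get("t") is not None),
                  key=lambda r: r["t"])

def annotate(records):
    """Step 2: annotation / knowledge extraction -- attach a derived
    label to each record (here a trivial on-road glance flag stands in
    for deep-learning-based annotation)."""
    return [dict(r, glance_road=(r["gaze"] == "road")) for r in records]

def aggregate(records):
    """Step 3: aggregate analysis -- reduce annotated records to a
    summary statistic (fraction of on-road glances)."""
    if not records:
        return 0.0
    return sum(r["glance_road"] for r in records) / len(records)

raw = [{"t": 0.0, "gaze": "road"}, {"t": None, "gaze": "road"},
       {"t": 0.1, "gaze": "cluster"}, {"t": 0.2, "gaze": "road"}]
fraction_on_road = aggregate(annotate(clean(raw)))
```

Keeping the stages as independent functions mirrors the pipeline structure in the figure: each stage can be re-run, swapped out, or parallelized across compute cores without touching the others.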

This work is supported by the AVT Consortium. Contact Lex Fridman for research and engineering related questions and Bryan Reimer for consortium related questions.