MIT Human-Centered Autonomous Vehicle

Building effective, enjoyable, and safe autonomous vehicles is far harder than has historically been assumed. The reason is that, simply put, an autonomous vehicle must interact with human beings. This interaction is not a robotics problem, a machine learning problem, a psychology problem, an economics problem, or a policy problem alone. It is all of these problems in one. It challenges our assumptions about the limitations of human beings at their worst and the capabilities of artificial intelligence systems at their best. The following video introduces the Human-Centered Autonomous Vehicle (HCAV), which we use as an illustrative case study for exploring concepts in shared autonomy:

We propose (see the arXiv paper) a set of principles for designing and building autonomous vehicles in a human-centered way, one that does not run away from the complexity of human nature but instead embraces it. If you find this work useful in your own research, please cite:

@article{fridman2018humancentered,
  author        = {Lex Fridman},
  title         = {Human-Centered Autonomous Vehicle Systems: Principles of Effective Shared Autonomy},
  journal       = {CoRR},
  volume        = {abs/1810.01835},
  year          = {2018},
  url           = {https://arxiv.org/abs/1810.01835},
  archivePrefix = {arXiv},
  eprint        = {1810.01835}
}

The seven principles underlying our work on the human-centered autonomous vehicle are as follows:

  1. Shared Autonomy: Keep the human driver in the loop. The human-machine team must jointly maintain sufficient situation awareness to keep control of the vehicle. Solve the human-robot interaction problem perfectly and the perception-control problem imperfectly. (A minimal sketch of this arbitration step follows this list.)
  2. Learn from Data: Every vehicle technology should be data-driven. Each should collect edge-case data and continually improve from that data. The overall learning process should seek a scale of data that enables progress away from modular supervised learning formulations toward end-to-end semi-supervised and unsupervised learning formulations.
  3. Human Sensing: Detect driver glance region, cognitive load, activity, and hand and body position. Approach the driver state perception problem with rigor and scale equal to or greater than those applied to the external perception problem.
  4. Shared-Perception Control: Perform scene perception and understanding with the goal of informing the driver of the system capabilities and limitations, not with the goal of perfect black box safe navigation of the vehicle.
  5. Deep Personalization: Every aspect of vehicle operation should be a reflection of the experiences the specific vehicle shares with the driver during their time together. From the first moment the car is driven, it is no longer like any other instance of it in the world.
  6. Imperfect by Design: Focus on communicating how the system sees the world, especially its limitations, instead of focusing on removing those limitations.
  7. System-Level Experience: Optimize for both safety and enjoyment at the system level.
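
To make the first principle concrete, the following is a minimal, hypothetical sketch of a single control-arbitration step in Python. It assumes a driver-attention signal (from the human sensing of principle 3) and a machine perception-confidence signal; the names, the threshold, and the return values are illustrative assumptions, not the actual HCAV implementation.

# Hypothetical sketch of principle 1 (shared autonomy): the machine keeps
# control only while the human stays in the loop, and uncertainty is handed
# back to the human rather than hidden. All names, thresholds, and return
# values are illustrative assumptions.
from dataclasses import dataclass

@dataclass
class TeamState:
    driver_attentive: bool      # from driver-facing human sensing (principle 3)
    machine_confidence: float   # external perception confidence in [0, 1]

def arbitrate(state: TeamState, confidence_threshold: float = 0.7) -> str:
    """Decide who controls the vehicle on this control-loop tick."""
    if state.machine_confidence >= confidence_threshold:
        # Even when the machine is confident, keep the driver in the loop:
        # machine control is granted only to an attentive driver (principle 1).
        return "machine" if state.driver_attentive else "handover_warning"
    # The machine is uncertain: return control to the human, communicating
    # the limitation instead of masking it (principles 4 and 6).
    return "human"

print(arbitrate(TeamState(driver_attentive=True, machine_confidence=0.9)))   # machine
print(arbitrate(TeamState(driver_attentive=False, machine_confidence=0.9)))  # handover_warning
print(arbitrate(TeamState(driver_attentive=True, machine_confidence=0.3)))   # human

The asymmetry is the point of the sketch: solving the human-robot interaction problem "perfectly" here means the arbitration never silently drops the human out of the loop, even though the perception signal itself is allowed to be imperfect.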

The following is an example snapshot of elevated risk under manual control during a period of frequent off-road glances to the smartphone:

Next is an example snapshot of elevated risk under machine control in the presence of a pedestrian:
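
As a rough illustration of how such snapshots might be scored, the sketch below combines an off-road-glance signal with external hazard detections into a single moment-to-moment risk value. The weights, normalization constants, and function name are assumptions chosen for illustration, not the risk model used in this work.

# Illustrative moment-to-moment risk score combining driver glance behavior
# (dominant under manual control, as in the first snapshot) and external
# hazards such as pedestrians (dominant under machine control, as in the
# second snapshot). All weights and constants are assumptions.
def risk_score(off_road_glance_secs: float,
               pedestrians_detected: int,
               under_machine_control: bool) -> float:
    """Return a risk value in [0, 1]; higher means more elevated risk."""
    # Normalize each signal to [0, 1] with assumed saturation points:
    # 2 s of continuous off-road glance, or 3 nearby pedestrians, saturate.
    glance_risk = min(off_road_glance_secs / 2.0, 1.0)
    hazard_risk = min(pedestrians_detected / 3.0, 1.0)
    if under_machine_control:
        return 0.3 * glance_risk + 0.7 * hazard_risk
    return 0.7 * glance_risk + 0.3 * hazard_risk

# Frequent off-road glances to a smartphone under manual control:
print(round(risk_score(1.8, 0, under_machine_control=False), 2))  # 0.63
# Pedestrian present under machine control:
print(round(risk_score(0.0, 2, under_machine_control=True), 2))   # 0.47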

The authors would like to thank the engineers and researchers at MIT and Veoneer, as well as the broader driving and artificial intelligence research community, for their valuable feedback and discussions throughout the development of this work.

To get in touch about this research, contact Lex Fridman via email (fridman@mit.edu) or connect on Twitter, LinkedIn, Instagram, Facebook, or YouTube.