A full-day tutorial covering all aspects of self-driving. The tutorial provides the necessary background for understanding the different tasks and their associated challenges, the sensors and data sources available and how to exploit them, and how to formulate the relevant algorithmic problems so that efficient learning and inference are possible. We first introduce the self-driving problem setting and the existing solutions, both top-down from a high-level perspective and bottom-up in a detailed, technology- and algorithm-specific manner. We then extrapolate from the state of the art, discuss the remaining challenges and open problems, and outline where the field needs to head to provide a scalable, safe, and affordable self-driving solution.

Program & Recordings

All times are in Eastern Daylight Time (EDT).

Each session will have five minutes of Q&A at the end. Please use the sli.do link or the Zoom Seminar to submit questions.

Morning (Eastern US Time)

10:00am – 10:30am
Speaker: Raquel Urtasun
  • Modular approaches
  • End-to-end approaches
  • Overview of tutorial
10:30am – 11:05am
Speakers: Davi Frossard and Andrei Bârsan
  • Sensor coverage
  • Wheel encoders
  • LiDARs
  • Cameras
  • Microphones
  • Ultrasound
  • GPS
  • Real-Time Kinematic (RTK) Systems
11:05am – 11:50am
Speaker: Bin Yang
  • 3D perception from LiDAR
  • 3D perception from Camera
  • Sensor fusion (LiDAR, camera, RADAR, maps)
  • Output representations
  • Open-set perception
  • Latency-aware models
11:50am – 12:30pm
Speakers: Sergio Casas and Simon Suo
How do we forecast the future motion of actors in a scene? Topics:
  • Prediction in the traditional autonomy pipeline
  • Learning rich representation of the scene
  • Capturing multiple futures
  • Scene-consistent prediction
  • Instance-free perception & prediction
12:30pm – 12:40pm

Break ☕

12:40pm – 1:30pm
Speakers: Abbas Sadat and Wenyuan Zeng
  • End-to-End (E2E) planning motivation
  • Input modalities
  • Model architectures
  • Output representations
  • Learning paradigms
  • Interpretable neural planners
  • Reactive planning and contingency planning

Afternoon (Eastern US Time)

1:30pm – 1:55pm
Speakers: Siva Manivasagam and James Tu
  • Vehicle-to-Vehicle perception
  • Robustness in Vehicle-to-Vehicle communication
1:55pm – 2:30pm
Speaker: Sean Segal
  • Self-driving datasets
  • Dataset curation (tagging, coverage, active learning)
  • Evaluation
2:30pm – 2:40pm

Break ☕

2:40pm – 4:00pm
Speakers: Kelvin Wong, Simon Suo, Siva Manivasagam, Ze Yang, and Shenlong Wang
  • Asset reconstruction (background, rigid, dynamic)
  • Scene Generation
  • Traffic Simulation
  • LiDAR Simulation
  • Image Simulation
4:00pm – 4:45pm
Speakers: James Tu and Jingkang Wang
  • Introduction to adversarial examples
  • Adversarial robustness in perception systems
  • Adversarial robustness in V2V communication
  • Generating safety-critical scenarios
4:45pm – 4:55pm

Break 🍵

4:55pm – 5:55pm
Speakers: Justin Liang, Anqi Joyce Yang, and Quinlan Sykora
  • Topological maps
  • HD maps
  • Simultaneous Localization and Mapping (SLAM)
  • Online mapping
  • Multi-agent routing
5:55pm – 6:45pm
Speakers: Shenlong Wang, Julieta Martinez, and Andrei Bârsan
Understand how self-driving vehicles robustly establish their precise position within HD maps, and how that position is leveraged for safe and efficient autonomous driving. Topics:
  • Overview of localization for SDVs
  • Probabilistic formulation & Monte Carlo localization
  • Online localization with maps
  • Joint localization, perception & prediction
  • Global localization
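To give a flavor of the "Probabilistic formulation & Monte Carlo localization" topic above, here is a minimal, self-contained sketch of Monte Carlo localization (a particle filter) in a simplified 1D setting. This is an illustration only, not material from the session: the corridor world, landmark map, noise levels, and the function `monte_carlo_localize` are all assumptions made for the example.

```python
import math
import random

def monte_carlo_localize(map_landmarks, motions, measurements,
                         n_particles=1000, motion_noise=0.05,
                         measurement_noise=0.2, world_size=10.0):
    """Toy 1D Monte Carlo localization (particle filter) sketch.

    A robot moves along a circular corridor of length `world_size`;
    after each motion it measures the distance to the nearest landmark.
    Returns the posterior mean position estimate.
    """
    # Start with particles spread uniformly over the map (global localization).
    particles = [random.uniform(0, world_size) for _ in range(n_particles)]

    def nearest_landmark_dist(x):
        return min(abs(x - lm) for lm in map_landmarks)

    for move, z in zip(motions, measurements):
        # Motion update: propagate each particle with Gaussian noise.
        particles = [(p + move + random.gauss(0, motion_noise)) % world_size
                     for p in particles]
        # Measurement update: weight each particle by the Gaussian
        # likelihood of the observed landmark distance.
        weights = [math.exp(-((nearest_landmark_dist(p) - z) ** 2)
                            / (2 * measurement_noise ** 2))
                   for p in particles]
        # Resampling: draw a new particle set proportional to the weights.
        particles = random.choices(particles, weights=weights, k=n_particles)

    # Posterior estimate: particle mean (fine away from the wrap-around point).
    return sum(particles) / len(particles)

# Hypothetical usage: the robot starts at an unknown position, moves +1.0
# three times, and observes landmark distances consistent with reaching 4.0.
estimate = monte_carlo_localize(map_landmarks=[2.0, 5.0],
                                motions=[1.0, 1.0, 1.0],
                                measurements=[0.0, 1.0, 1.0])
```

Real SDV localization replaces the 1D corridor with a pose over an HD map and the landmark sensor with LiDAR or camera observations, but the predict-weight-resample loop is the same.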