# Self-Driving Car Engineer Nanodegree
## Course 2: Sensor Fusion
#### By Jonathan L. Moran (jonathan.moran107@gmail.com)

This is Course 2: Sensor Fusion in the Self-Driving Car Engineer Nanodegree programme offered at Udacity, taught by Dr. Antje Muntzinger and Dr. Andreas Haja.

### Course Objectives
* Develop a strong understanding of the important role LiDAR plays in the autonomous vehicle;
* Learn to work with LiDAR range data, 3D point clouds and bird's-eye view (BEV) maps;
* Build 3D object detection models using deep learning with LiDAR point cloud data;
* Perform multi-target tracking with the Extended Kalman Filter (EKF) on multi-modal sensor data;
* Apply learnings to complete two real-world AD/ADAS detection and tracking software programmes.

### Projects
* ⬜️ 2.1: Object Detection with Sensor Fusion (3D);
* ⬜️ 2.2: Multi-Target Tracking with Extended Kalman Filter (MTT with EKF).

### Exercises
* Will be announced as the course progresses.

### Course Contents
The following topics are covered in course exercises:
* Extracting and transforming LiDAR range data
* Performing vehicle-sensor calibration
* Scaling LiDAR range intensity values using heuristic methods
* Correcting the azimuth angles using the extrinsic calibration matrix
* Transforming range images to 3D point clouds
* And much more ... (will be announced as the course progresses)

Other topics covered in course lectures and reading material:
* Importance of multi-modal sensors in autonomous vehicles
* Scanning / Solid-State / OPA / MEMS / FMCW LiDAR specifications
* Time-of-Flight (ToF) LiDAR operating principle and equation (range, power, PBEA)
* Light propagation angle for OPA LiDAR systems
* Comparing camera / LiDAR / radar / ultrasonics performance
* Selecting the best sensor(s) for a given job with constraints
* And much more ... (will be announced as the course progresses)

### Learning Outcomes
#### Lesson 1: Introduction to Sensor Fusion and Perception
* Distinguish the strengths and weaknesses of each sensor modality;
* Understand how each sensor contributes to autonomous vehicle perception systems;
* Select the most suitable sensor and model for a particular perception task.

#### Lesson 2: The LiDAR Sensor
* Explain the role and importance of LiDAR in autonomous driving systems;
* Extract LiDAR range data from the Waymo Open Dataset;
* Extract LiDAR attributes, e.g., point correspondences, effective FOVs and calibration data;
* Visualise and scale the LiDAR range and intensity data;
* Transform the range data by, e.g., cropping to ROIs and converting to 3D point clouds.

#### Lesson 3: Detecting Objects in LiDAR Data
* Transform 3D point clouds into bird's-eye view (BEV) maps;
* Perform model inference using BEV maps;
* Visualise the detection results;
* Evaluate object detection model performance with metrics;
* Experiment with state-of-the-art (SOTA) models and compare their performances.

### Material
Syllabus:
* [Program Syllabus | Udacity Nanodegree](https://d20vrrgs8k4bvw.cloudfront.net/documents/en-US/Self-Driving+Car+Engineer+Nanodegree+Syllabus+nd0013+.pdf).

Literature:
* See specific assignments for related literature.

Datasets:
* [Waymo Open Dataset: Perception](https://waymo.com/open/).

Lectures:
* Lecture materials (videos, slides) available offline. Course lecture notes available on request.

### Other resources
Companion code:
* [Sensor Fusion and Tracking | Starter code](https://github.com/udacity/nd013-c2-fusion-starter).
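The range-image-to-point-cloud transform covered in Lesson 2 can be sketched in a few lines of NumPy. This is a minimal illustration under assumed inputs, not the Waymo Open Dataset API: it supposes a range image whose rows correspond to known inclination angles and whose columns correspond to known azimuth angles (both taken from the sensor's calibration), and the function name and signature are hypothetical.

```python
import numpy as np

def range_image_to_point_cloud(range_img, inclinations, azimuths):
    """Convert a (H, W) range image to an (N, 3) point cloud.

    range_img    : (H, W) array of ranges in metres (0 = no return).
    inclinations : (H,) per-row vertical angles in radians.
    azimuths     : (W,) per-column horizontal angles in radians.
    """
    # Broadcast the per-row / per-column angles across the image grid.
    incl = inclinations[:, np.newaxis]   # shape (H, 1)
    az = azimuths[np.newaxis, :]         # shape (1, W)

    # Spherical -> Cartesian (sensor frame: x forward, y left, z up).
    x = range_img * np.cos(incl) * np.cos(az)
    y = range_img * np.cos(incl) * np.sin(az)
    z = range_img * np.sin(incl)

    points = np.stack([x, y, z], axis=-1).reshape(-1, 3)
    # Keep only pixels that produced a LiDAR return.
    return points[range_img.reshape(-1) > 0]
```

In practice the per-pixel azimuth must also be corrected with the extrinsic calibration matrix (one of the exercise topics above); this sketch omits that step.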
This is the first course in the Self-Driving Car Engineer Nanodegree programme.

#### Projects
* ✅ 1.1: [Object Detection in Urban Environments](https://github.com/jonathanloganmoran/ND0013-Self-Driving-Car-Engineer/tree/1-1-Object-Detection-2D/1-Computer-Vision/1-1-Object-Detection-in-Urban-Environments).

#### Exercises
* ✅ 1.1.1: [Choosing Metrics](https://github.com/jonathanloganmoran/ND0013-Self-Driving-Car-Engineer/blob/main/1-Computer-Vision/Exercises/1-1-1-Choosing-Metrics/2022-07-25-Choosing-Metrics-IoU.ipynb);
* ✅ 1.1.2: [Data Acquisition and Visualisation](https://github.com/jonathanloganmoran/ND0013-Self-Driving-Car-Engineer/blob/main/1-Computer-Vision/Exercises/1-1-2-Data-Acquisition-Visualisation/2022-08-01-Data-Acquisition-Visualisation.ipynb);
* ✅ 1.1.3: [Creating TensorFlow TFRecords](https://github.com/jonathanloganmoran/ND0013-Self-Driving-Car-Engineer/blob/main/1-Computer-Vision/Exercises/1-1-3-Creating-TF-Records/2022-08-03-Creating-TF-Records.ipynb);
* ✅ 1.2.1: [Camera Calibration and Distortion Correction](https://github.com/jonathanloganmoran/ND0013-Self-Driving-Car-Engineer/blob/main/1-Computer-Vision/Exercises/1-2-1-Calibration-Distortion/2022-08-10-Calibration-Distortion-Correction.ipynb);
* ✅ 1.2.2: [Image Manipulation and Masking](https://github.com/jonathanloganmoran/ND0013-Self-Driving-Car-Engineer/blob/main/1-Computer-Vision/Exercises/1-2-2-Image-Manipulation/2022-08-17-Image-Manipulation-Masking.ipynb);
* ✅ 1.2.3: [Geometric Transformations](https://github.com/jonathanloganmoran/ND0013-Self-Driving-Car-Engineer/blob/main/1-Computer-Vision/Exercises/1-2-3-Geometric-Transformations/2022-08-23-Geometric-Transformations-Image-Augmentation.ipynb);
* ✅ 1.3.1: [Logistic Regression](https://github.com/jonathanloganmoran/ND0013-Self-Driving-Car-Engineer/blob/main/1-Computer-Vision/Exercises/1-3-1-Logistic-Regression/2022-08-27-Logistic-Regression.ipynb);
* ✅ 1.3.2: [Stochastic Gradient Descent](https://github.com/jonathanloganmoran/ND0013-Self-Driving-Car-Engineer/blob/main/1-Computer-Vision/Exercises/1-3-2-Stochastic-Gradient-Descent/2022-08-29-Stochastic-Gradient-Descent.ipynb);
* ✅ 1.3.3: [Image Classification with Feedforward Neural Networks (FNNs)](https://github.com/jonathanloganmoran/ND0013-Self-Driving-Car-Engineer/blob/main/1-Computer-Vision/Exercises/1-3-3-Image-Classification-FNNs/2022-09-05-Image-Classification-Feed-Forward-Neural-Networks.ipynb);
* ✅ 1.4.1: [Pooling Layers in Convolutional Neural Networks (CNNs)](https://github.com/jonathanloganmoran/ND0013-Self-Driving-Car-Engineer/blob/main/1-Computer-Vision/Exercises/1-4-1-Pooling-Layers-CNNs/2022-09-07-Pooling-Layers-Convolutional-Neural-Networks.ipynb);
* ✅ 1.4.2: [Building Custom Convolutional Neural Networks (CNNs)](https://github.com/jonathanloganmoran/ND0013-Self-Driving-Car-Engineer/blob/main/1-Computer-Vision/Exercises/1-4-2-Building-Custom-CNNs/2022-09-12-Building-Custom-Convolutional-Neural-Networks.ipynb);
* ✅ 1.4.3: [Image Augmentations for the Driving Domain](https://github.com/jonathanloganmoran/ND0013-Self-Driving-Car-Engineer/blob/main/1-Computer-Vision/Exercises/1-4-3-Image-Augmentations/2022-09-19-Image-Augmentations.ipynb);
* ✅ 1.5.1: [Non-Maximum Suppression (NMS) and Soft-NMS](https://github.com/jonathanloganmoran/ND0013-Self-Driving-Car-Engineer/blob/main/1-Computer-Vision/Exercises/1-5-1-Non-Maximum-Suppression/2022-09-21-Non-Maximum-Suppression.ipynb);
* ✅ 1.5.2: [Mean Average Precision (mAP)](https://github.com/jonathanloganmoran/ND0013-Self-Driving-Car-Engineer/blob/main/1-Computer-Vision/Exercises/1-5-2-Mean-Average-Precision/2022-09-25-Mean-Average-Precision.ipynb);
* ✅ 1.5.3: [Learning Rate Schedules and Adaptive Learning Rate methods](https://github.com/jonathanloganmoran/ND0013-Self-Driving-Car-Engineer/blob/main/1-Computer-Vision/Exercises/1-5-3-Learning-Rate-Schedules/2022-09-28-Learning-Rate-Schedules.ipynb).
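Several of the exercises above (1.1.1 Choosing Metrics, 1.5.1 NMS, 1.5.2 mAP) build on intersection-over-union (IoU) as the box-overlap measure. A minimal sketch of IoU for two axis-aligned boxes; the `[x1, y1, x2, y2]` corner format and the function name are assumptions for illustration, not the exercises' actual code:

```python
def iou(box_a, box_b):
    """Intersection-over-union of two axis-aligned boxes [x1, y1, x2, y2]."""
    # Corners of the intersection rectangle.
    x1 = max(box_a[0], box_b[0])
    y1 = max(box_a[1], box_b[1])
    x2 = min(box_a[2], box_b[2])
    y2 = min(box_a[3], box_b[3])

    # Clamp to zero when the boxes do not overlap.
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)

    area_a = (box_a[2] - box_a[0]) * (box_a[3] - box_a[1])
    area_b = (box_b[2] - box_b[0]) * (box_b[3] - box_b[1])
    union = area_a + area_b - inter
    return inter / union if union > 0 else 0.0
```

NMS and mAP then use this same overlap score: NMS to suppress lower-confidence boxes above an IoU threshold, mAP to decide whether a detection counts as a true positive.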