
Adversarial Machine Learning Attacks in Scaled Self-Driving Cars is the topic of my MSc thesis research at the University of Tartu.


Using Adversarial Defense Methods to Improve the Performance of Deep-Neural-Network-Controlled Automatic Driving Systems

Master’s Thesis (30 ECTS) - University of Tartu and TalTech - Software Engineering Curriculum

Experiment Video Demo

Useful links

Topic - ADS

To create Automatic Driving Systems (ADS), engineers rely on a range of sensors attached to a vehicle or a robot, including radar, lidar, high-definition cameras, and GPS. However, it is also possible to create an automatic driving system controlled entirely by a neural network using a single RGB camera.

The process is called behavioral cloning and involves training a Convolutional Neural Network (CNN) on many examples of good driving. In other words, an expert operates the vehicle while images are collected and labeled with the corresponding throttle and steering values. This data is then used to train a CNN model capable of automatically driving the route on which it was trained.
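
As a rough illustration, the sketch below shows what such a behavioral-cloning model can look like in TensorFlow/Keras. The framework choice, input resolution, and layer sizes are assumptions made for the sake of the example, not the architecture used in the thesis.

```python
# Minimal behavioral-cloning sketch (hypothetical architecture, not the
# thesis model): a small CNN maps one RGB camera frame to steering and
# throttle values. Frames are assumed to be pre-scaled to [0, 1].
import tensorflow as tf

def build_driver(input_shape=(120, 160, 3)):
    """Build a CNN that regresses [steering, throttle] from one frame."""
    inputs = tf.keras.Input(shape=input_shape)
    x = inputs
    for filters in (24, 36, 48, 64):          # progressively deeper features
        x = tf.keras.layers.Conv2D(filters, 3, strides=2,
                                   activation="relu")(x)
    x = tf.keras.layers.Flatten()(x)
    x = tf.keras.layers.Dense(100, activation="relu")(x)
    outputs = tf.keras.layers.Dense(2)(x)     # [steering, throttle]
    return tf.keras.Model(inputs, outputs)

model = build_driver()
model.compile(optimizer="adam", loss="mse")
# images: (N, 120, 160, 3) camera frames; labels: (N, 2) expert
# steering/throttle values recorded while the expert drove the route.
# model.fit(images, labels, epochs=10, validation_split=0.2)
```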

The problem

The problem with this approach is that the neural network tends to overfit the training data. It performs well under the conditions seen during training, including the lighting conditions, but poorly under never-seen-before lighting conditions. So a CNN trained on images captured in the daytime is unlikely to perform well at night. To mitigate this, engineers have to spend a lot of time collecting data under different lighting conditions, but this is not always possible because of time constraints or weather. Thus, new training methods must be investigated.
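
One quick way to see this failure mode, before putting a model on the vehicle, is to probe it with globally brightened or darkened copies of the same frames. The snippet below is only an illustrative check, under the same assumptions as above (TensorFlow, frames in [0, 1]); the function name and gain values are hypothetical.

```python
import tensorflow as tf

def predict_under_lighting(model, frames, gain):
    """Scale global brightness by `gain` (e.g. 0.4 ~ dusk, 1.5 ~ glare)
    and return the model's steering/throttle predictions."""
    shifted = tf.clip_by_value(frames * gain, 0.0, 1.0)
    return model(shifted, training=False)

# day = predict_under_lighting(model, frames, gain=1.0)
# dim = predict_under_lighting(model, frames, gain=0.4)
# A large gap between `day` and `dim` on identical frames suggests the
# model has overfitted to the lighting seen during training.
```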

Research question

Can adversarial defense training methods improve the neural network’s generalization skills to unseen lighting conditions?

Hypothesis

The hypothesis is that the CNN models trained with adversarial machine learning defense methods will perform better in unseen lighting conditions than CNN models trained with standard procedures.
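
One common family of such defenses is adversarial training: each training batch is perturbed in the direction that most increases the loss, and the model is trained on the perturbed images with the original labels. The sketch below uses the Fast Gradient Sign Method (FGSM) as a concrete example; the exact defense methods and hyperparameters (such as epsilon) evaluated in the thesis may differ.

```python
import tensorflow as tf

loss_fn = tf.keras.losses.MeanSquaredError()
optimizer = tf.keras.optimizers.Adam()

@tf.function
def adversarial_train_step(model, images, labels, epsilon=0.02):
    # 1) Gradient of the loss with respect to the *input* images.
    with tf.GradientTape() as tape:
        tape.watch(images)
        loss = loss_fn(labels, model(images, training=False))
    grad = tape.gradient(loss, images)

    # 2) FGSM: move each pixel by epsilon along the sign of the
    #    gradient, staying inside the valid [0, 1] range.
    adv_images = tf.clip_by_value(images + epsilon * tf.sign(grad),
                                  0.0, 1.0)

    # 3) Train on the adversarial images with the clean labels.
    with tf.GradientTape() as tape:
        loss = loss_fn(labels, model(adv_images, training=True))
    grads = tape.gradient(loss, model.trainable_variables)
    optimizer.apply_gradients(zip(grads, model.trainable_variables))
    return loss
```

A common variant trains on both the clean and the adversarial batch so that performance under the original conditions does not degrade.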

Results

The CNN models trained with adversarial machine learning defense methods did perform better in never-seen-before lighting conditions than CNN models trained with standard procedures.

The models M-TS and M-TL, trained with standard methods, failed to generalize to the unseen higher lighting (H), colliding with the wall several times. The models trained with adversarial methods, on the other hand, performed well in the unseen higher lighting (H), completing two laps without collisions.

Figure: driving results of the standard and adversarially trained models under the unseen higher lighting condition (H).


License

The content of this project itself is licensed under the Creative Commons Attribution 3.0 Unported license, and the underlying source code used to format and display that content is licensed under the MIT license.
