Overview
This project explores how machine learning models used for license plate recognition can be fooled. The approach follows targeted adversarial attacks on classifiers trained on MNIST, a database of handwritten digits: an input image is perturbed so that the trained model predicts a chosen target class instead of the class the image actually belongs to. The same idea is applied here to license plates.
We train the license plate recognition (LPR) classifier as a CNN and use the FGSM and JSMA attacks to generate perturbations that make the classifier misclassify its input. We then run an optimization process that searches for the optimal perturbation positions, so the attack can be realized physically by pasting a sticker onto a license plate or by printing the plate with the sticker included.
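As a concrete illustration, the sketch below shows a one-step targeted FGSM perturbation against a Keras character classifier. It is a minimal sketch, not the project's exact code: the model file name `lpr_char_model.h5`, the `eps` budget, and the assumption that inputs are scaled to [0, 1] are all illustrative.

```python
import tensorflow as tf

# Hypothetical file name; the project saves its trained models to .h5 files.
model = tf.keras.models.load_model("lpr_char_model.h5")
loss_fn = tf.keras.losses.SparseCategoricalCrossentropy()

def fgsm_targeted(model, image, target_class, eps=0.05):
    """One-step targeted FGSM: step *against* the gradient of the loss
    computed w.r.t. the target class, pushing the prediction toward it."""
    x = tf.convert_to_tensor(image[None, ...], dtype=tf.float32)
    y_target = tf.constant([target_class])
    with tf.GradientTape() as tape:
        tape.watch(x)
        probs = model(x)
        loss = loss_fn(y_target, probs)
    grad = tape.gradient(loss, x)
    # Subtracting the signed gradient lowers the loss for the target class.
    x_adv = tf.clip_by_value(x - eps * tf.sign(grad), 0.0, 1.0)
    return x_adv[0].numpy()
```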
The expected results by the end of the project are:
- Optimal perturbation position – detecting the right spot on the plate for the perturbation (a position-search sketch follows this list).
- Prediction of the input as the targeted class rather than its true class.
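For the first goal, a brute-force position search can serve as a baseline: paste the candidate sticker at every location on the plate image and keep the position that maximizes the classifier's confidence in the target class. This is a hedged sketch under assumed inputs (plate and sticker as arrays already shaped and scaled for the model); the function name is illustrative, not the project's.

```python
def best_sticker_position(model, plate_img, sticker, target_class, stride=2):
    """Exhaustively try every paste position and return the one where the
    pasted sticker yields the highest probability for the target class."""
    h, w = plate_img.shape[:2]
    sh, sw = sticker.shape[:2]
    best_pos, best_prob = None, -1.0
    for y in range(0, h - sh + 1, stride):
        for x in range(0, w - sw + 1, stride):
            candidate = plate_img.copy()
            candidate[y:y + sh, x:x + sw] = sticker  # paste the patch
            prob = float(model.predict(candidate[None, ...], verbose=0)[0][target_class])
            if prob > best_prob:
                best_pos, best_prob = (y, x), prob
    return best_pos, best_prob
```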
Steps performed so far:
- Created a notebook (LPR.ipynb) with the steps for building the model, training it, and saving the trained models to .h5 files; it also covers the perturbation concepts and the adversarially crafted images (a model-building sketch follows this list).
- Perturbed the segregated images, i.e., the individual character images segmented out of license plates.
- The plate images are cleaned and filtered before the characters are segregated (a segmentation sketch also follows this list). The attack becomes physically realizable if the perturbed character images can be pasted onto a valid license plate.
- The perturbed plate must be tested against the same model that was attacked, not any other model.
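The model-building step described above is not reproduced here, so the following is only a rough sketch of such a pipeline: a small Keras CNN for single plate characters, trained and saved to an .h5 file. The class count, layer sizes, and file name are assumptions, not taken from LPR.ipynb.

```python
import tensorflow as tf

def build_char_cnn(num_classes=36, input_shape=(28, 28, 1)):
    """Small CNN for classifying individual plate characters (digits + letters)."""
    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=input_shape),
        tf.keras.layers.Conv2D(32, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Conv2D(64, 3, activation="relu"),
        tf.keras.layers.MaxPooling2D(),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(128, activation="relu"),
        tf.keras.layers.Dense(num_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

# model = build_char_cnn()
# model.fit(x_train, y_train, epochs=10, validation_split=0.1)
# model.save("lpr_char_model.h5")   # saved as an .h5 file, as in LPR.ipynb
```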
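The character-segregation step can look roughly like the OpenCV sketch below: binarize the plate, keep character-sized contours, and crop each character to the classifier's input size. This is an assumed pipeline, not the project's exact cleaning and filtering code; the thresholds and size filter are illustrative.

```python
import cv2

def segment_characters(plate_path, out_size=(28, 28)):
    """Clean a plate image, keep character-sized contours, and return each
    character as a separate resized crop (the 'segregated' images)."""
    plate = cv2.imread(plate_path, cv2.IMREAD_GRAYSCALE)
    blurred = cv2.GaussianBlur(plate, (5, 5), 0)                        # denoise
    _, binary = cv2.threshold(blurred, 0, 255,
                              cv2.THRESH_BINARY_INV + cv2.THRESH_OTSU)  # binarize
    contours, _ = cv2.findContours(binary, cv2.RETR_EXTERNAL,
                                   cv2.CHAIN_APPROX_SIMPLE)
    chars = []
    # Sort contours left to right so characters come out in reading order.
    for c in sorted(contours, key=lambda c: cv2.boundingRect(c)[0]):
        x, y, w, h = cv2.boundingRect(c)
        # Filter: keep boxes whose height is plausible for a character.
        if 0.3 * plate.shape[0] < h < 0.95 * plate.shape[0]:
            chars.append(cv2.resize(binary[y:y + h, x:x + w], out_size))
    return chars
```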
Current Team Members:
Venu Korada
PI: Vahid Behzadan
Tools and Datasets:
N/A – In the future
Publications:
N/A – In the future