%0 Journal Article
%T An Algorithm for Generating Invisible Data Poisoning Using Adversarial Noise That Breaks Image Classification Deep Learning
%A Adrien CHAN-HON-TONG
%J Machine Learning and Knowledge Extraction
%D 2019
%R https://doi.org/10.3390/make1010011
%X Today, the two main security issues for deep learning are data poisoning and adversarial examples. Data poisoning consists of perverting a learning system by manipulating a small subset of the training data, while adversarial examples entail bypassing the system at testing time with a low-amplitude manipulation of the testing sample. Unfortunately, data poisoning that is invisible to human eyes can be generated by adding adversarial noise to the training data. The main contribution of this paper is a successful implementation of such invisible data poisoning, using image classification datasets and a deep learning pipeline. This implementation leads to significant gaps in classification accuracy.
%K deep learning
%K data poisoning
%K adversarial examples
%U https://www.mdpi.com/2504-4990/1/1/11
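
The core idea in the abstract, generating invisible poisoning by adding low-amplitude adversarial noise to training samples, can be illustrated with a minimal PyTorch sketch. This is not the paper's algorithm: the single FGSM-style step, the function name `poison_images`, and the `eps` budget are illustrative assumptions only.

```python
import torch
import torch.nn.functional as F

def poison_images(model, images, target_labels, eps=4 / 255):
    """Perturb `images` toward attacker-chosen `target_labels` with one
    FGSM-style step bounded by `eps` (illustrative sketch, not the paper's method)."""
    images = images.clone().detach().requires_grad_(True)
    loss = F.cross_entropy(model(images), target_labels)
    loss.backward()
    # Step *down* the loss of the target class; the L-infinity budget `eps`
    # keeps the perturbation small enough to be visually imperceptible.
    poisoned = (images - eps * images.grad.sign()).clamp(0.0, 1.0)
    return poisoned.detach()
```

In a poisoning scenario, such perturbed copies would replace a small fraction of the training set before the victim model is trained on it.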