### Keywords

Adversarial example, Perturbations

------

### Summary

##### Introduction

* The paper examines two properties of neural networks that cause them to misclassify images and make it difficult to build a solid understanding of how a network works:
  1. Theoretical understanding of the individual high-level units of a network, and of combinations of these units or layers.
  2. Understanding the continuity of the input-output mapping and the stability of the output with respect to the input.
* Experiments are performed on several networks and architectures:
  1. MNIST dataset: autoencoder, fully connected net
  2. ImageNet: "AlexNet"
  3. 10M YouTube images: "QuocNet"

##### Understanding individual units of the network

* Previous work used individual images that maximize the activation value of each feature unit. The authors run a similar experiment on the MNIST dataset.
* The results are interpreted as follows:
  1. A random direction vector (v) gives rise to similarly interpretable semantic properties.
  2. Each feature unit generates invariance on a particular subset of the input distribution.

https://i.imgur.com/SeyXJeV.png

##### Blind spots in the neural network

* Output layers are highly non-linear and can express a non-local generalization over the input space.
* It is possible for the output layers to assign non-significant probabilities to regions of the input space that contain no training examples in their vicinity, i.e. the network can assign probabilities to unseen viewpoints of an object without being trained on them.
* Unlike kernel methods, deep networks cannot be assumed to have smooth decision boundaries.
* Using optimization techniques, small changes to an input image can lead to very large deviations in the output.
* __"Adversarial examples"__ represent pockets or holes in the input space that are difficult to find by simply moving around the input images.
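The "small changes, large output deviations" point above can be illustrated with a minimal sketch. The paper itself finds perturbations with box-constrained L-BFGS; for brevity, this toy example instead takes a single gradient-sign step against a fixed logistic classifier (all weights and the epsilon below are made up for illustration):

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

# A fixed, pre-"trained" linear classifier: p(y=1|x) = sigmoid(w.x + b).
# Weights here are arbitrary, chosen only to illustrate the effect.
w = np.array([2.0, -3.0])
b = 0.5

def predict(x):
    return sigmoid(w @ x + b)

# An input confidently classified as class 1
x = np.array([1.0, 0.2])
p_clean = predict(x)  # well above 0.5

# Gradient of the negative log-likelihood of the true label (y=1)
# with respect to the input x is (p - y) * w.
grad_x = (p_clean - 1.0) * w

# Take a small step in the direction that increases the loss
# (sign of the gradient scaled by epsilon).
epsilon = 0.4
x_adv = x + epsilon * np.sign(grad_x)
p_adv = predict(x_adv)

print(f"clean prediction:       {p_clean:.3f}")
print(f"adversarial prediction: {p_adv:.3f}")
```

The perturbation is small relative to the input, yet the predicted probability crosses the decision boundary; this is the same qualitative phenomenon the paper demonstrates on deep networks.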
##### Experimental results

* Adversarial examples that are indistinguishable from the original image can be created for all the networks tested.
  1. Cross-model generalization: adversarial images created for one network can also fool other networks.
  2. Cross-training-set generalization: adversarial images created on one subset of the training data can fool models trained on a disjoint subset.

https://i.imgur.com/drcGvpz.png

##### Conclusion

* Neural networks have counter-intuitive properties with respect to the working of individual units and the discontinuity of the input-output mapping.
* The occurrence of adversarial examples and their properties.

-----

### Notes

* Feeding adversarial examples back in during training can improve the generalization of the model.
* Adversarial examples computed against higher layers are more effective than those computed against the input and lower layers.
* Adversarial examples affect models trained with different hyperparameters.
* According to the tests conducted, autoencoders are more resilient to adversarial examples.
* Networks trained with purely supervised training are unstable to a few particular types of perturbations: a small perturbation added to the input leads to large perturbations at the output of the last layers.

### Open research questions

* Comparing the effects of adversarial examples on lower layers to those on higher layers.
* The dependence of adversarial attacks on the training dataset of the model.
* Why adversarial examples generalize across different hyperparameters or training sets.
* How often do adversarial examples occur?
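The note above about feeding adversarial examples back into training can be sketched on a toy problem. This is not the paper's setup: it is a hypothetical 2-D logistic-regression example with made-up data, epsilon, and step sizes, where each epoch mixes gradient-sign-perturbed copies of the inputs into the training batch:

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

rng = np.random.default_rng(0)

# Two well-separated Gaussian blobs, labels 0 and 1 (synthetic data)
X0 = rng.normal(loc=[-2.0, 0.0], scale=0.5, size=(50, 2))
X1 = rng.normal(loc=[+2.0, 0.0], scale=0.5, size=(50, 2))
X = np.vstack([X0, X1])
y = np.concatenate([np.zeros(50), np.ones(50)])

w = np.zeros(2)
b = 0.0
lr, epsilon = 0.1, 0.3

for _ in range(200):
    p = sigmoid(X @ w + b)
    # Perturb each input against its true label (gradient-sign step);
    # the input-gradient of the logistic loss is (p - y) * w.
    X_adv = X + epsilon * np.sign((p - y)[:, None] * w[None, :])
    # Train on clean and adversarial examples together.
    Xt = np.vstack([X, X_adv])
    yt = np.concatenate([y, y])
    pt = sigmoid(Xt @ w + b)
    w -= lr * (Xt.T @ (pt - yt)) / len(yt)
    b -= lr * np.mean(pt - yt)

acc = np.mean((sigmoid(X @ w + b) > 0.5) == y)
print(f"clean accuracy after adversarial training: {acc:.2f}")
```

The augmented batches force the decision boundary to stay correct in an epsilon-neighborhood of each training point, which is the intuition behind the generalization improvement reported in the notes.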