|
38 | 38 | "source": [ |
39 | 39 | "# Adversarial Learning: Building Robust Image Classifiers\n", |
40 | 40 | "\n", |
41 | | - "\u003cbr\u003e\n", |
| 41 | + "<br>\n", |
42 | 42 | "\n", |
43 | | - "\u003ctable class=\"tfo-notebook-buttons\" align=\"left\"\u003e\n", |
44 | | - " \u003ctd\u003e\n", |
45 | | - " \u003ca target=\"_blank\" href=\"https://colab.research.google.com/github/tensorflow/neural-structured-learning/blob/master/neural_structured_learning/examples/notebooks/adversarial_cnn_transfer_learning_fashionmnist.ipynb\"\u003e\u003cimg src=\"https://www.tensorflow.org/images/colab_logo_32px.png\" /\u003eRun in Google Colab\u003c/a\u003e\n", |
46 | | - " \u003c/td\u003e\n", |
47 | | - " \u003ctd\u003e\n", |
48 | | - " \u003ca target=\"_blank\" href=\"https://github.com/tensorflow/neural-structured-learning/blob/master/neural_structured_learning/examples/notebooks/adversarial_cnn_transfer_learning_fashionmnist.ipynb\"\u003e\u003cimg src=\"https://www.tensorflow.org/images/GitHub-Mark-32px.png\" /\u003eView source on GitHub\u003c/a\u003e\n", |
49 | | - " \u003c/td\u003e\n", |
50 | | - "\u003c/table\u003e" |
| 43 | + "<table class=\"tfo-notebook-buttons\" align=\"left\">\n", |
| 44 | + " <td>\n", |
| 45 | + " <a target=\"_blank\" href=\"https://colab.research.google.com/github/tensorflow/neural-structured-learning/blob/master/neural_structured_learning/examples/notebooks/adversarial_cnn_transfer_learning_fashionmnist.ipynb\"><img src=\"https://www.tensorflow.org/images/colab_logo_32px.png\" />Run in Google Colab</a>\n", |
| 46 | + " </td>\n", |
| 47 | + " <td>\n", |
| 48 | + " <a target=\"_blank\" href=\"https://github.com/tensorflow/neural-structured-learning/blob/master/neural_structured_learning/examples/notebooks/adversarial_cnn_transfer_learning_fashionmnist.ipynb\"><img src=\"https://www.tensorflow.org/images/GitHub-Mark-32px.png\" />View source on GitHub</a>\n", |
| 49 | + " </td>\n", |
| 50 | + "</table>" |
51 | 51 | ] |
52 | 52 | }, |
53 | 53 | { |
|
71 | 71 | "The most popular deep learning models for computer vision problems are convolutional neural networks (CNNs).\n", |
72 | 72 | "\n", |
73 | 73 | "\n", |
74 | | - "\u003cfont size=2\u003eCreated by: Dipanjan Sarkar\u003c/font\u003e\n", |
| 74 | + "<font size=2>Created by: Dipanjan Sarkar</font>\n", |
75 | 75 | "\n", |
76 | 76 | "In this notebook, we will build, train, and evaluate a multi-class CNN classifier, and also perform adversarial learning.\n", |
77 | 77 | "\n", |
|
81 | 81 | "The idea is to leverage a pre-trained model instead of building a CNN from scratch for our image classification problem.\n", |
82 | 82 | "\n", |
83 | 83 | "\n", |
84 | | - "\u003cfont size=2\u003eSource: [CNN Essentials](https://github.com/dipanjanS/convolutional_neural_networks_essentials/tree/master/presentation)\u003c/font\u003e\n", |
| 84 | + "<font size=2>Source: [CNN Essentials](https://github.com/dipanjanS/convolutional_neural_networks_essentials/tree/master/presentation)</font>\n", |
85 | 85 | "\n", |
86 | 86 | "## Tutorial Outline\n", |
87 | 87 | "\n", |
|
145 | 145 | "id": "BNWq4-tI3MyT" |
146 | 146 | }, |
147 | 147 | "source": [ |
148 | | - "# Main Objective — Building an Apparel Classifier \u0026 Performing Adversarial Learning \n", |
| 148 | + "# Main Objective — Building an Apparel Classifier & Performing Adversarial Learning \n", |
149 | 149 | "\n", |
150 | 150 | "- We will keep things simple here with regard to the key objective. We will build a simple apparel classifier by training models on the very famous [Fashion MNIST](https://github.com/zalandoresearch/fashion-mnist) dataset based on Zalando’s article images — consisting of a training set of 60,000 examples and a test set of 10,000 examples. Each example is a 28x28 grayscale image, associated with a label from 10 classes. The task is to classify these images into one of the 10 apparel categories on which we will train our models.\n", |
151 | 151 | "\n", |
|
155 | 155 | "\n", |
156 | 156 | "Here's an example of how the data looks (each class takes three rows):\n", |
157 | 157 | "\n", |
158 | | - "\u003ctable\u003e\n", |
159 | | - " \u003ctr\u003e\u003ctd\u003e\n", |
160 | | - " \u003cimg src=\"https://raw.githubusercontent.com/zalandoresearch/fashion-mnist/master/doc/img/fashion-mnist-sprite.png\"\n", |
161 | | - " alt=\"Fashion MNIST sprite\" width=\"600\"\u003e\n", |
162 | | - " \u003c/td\u003e\u003c/tr\u003e\n", |
163 | | - " \u003ctr\u003e\u003ctd align=\"center\"\u003e\n", |
164 | | - " \u003ca href=\"https://github.com/zalandoresearch/fashion-mnist\"\u003eFashion-MNIST samples\u003c/a\u003e (by Zalando, MIT License).\u003cbr/\u003e\u0026nbsp;\n", |
165 | | - " \u003c/td\u003e\u003c/tr\u003e\n", |
166 | | - "\u003c/table\u003e\n", |
| 158 | + "<table>\n", |
| 159 | + " <tr><td>\n", |
| 160 | + " <img src=\"https://raw.githubusercontent.com/zalandoresearch/fashion-mnist/master/doc/img/fashion-mnist-sprite.png\"\n", |
| 161 | + " alt=\"Fashion MNIST sprite\" width=\"600\">\n", |
| 162 | + " </td></tr>\n", |
| 163 | + " <tr><td align=\"center\">\n", |
| 164 | + " <a href=\"https://github.com/zalandoresearch/fashion-mnist\">Fashion-MNIST samples</a> (by Zalando, MIT License).<br/> \n", |
| 165 | + " </td></tr>\n", |
| 166 | + "</table>\n", |
167 | 167 | "\n", |
168 | 168 | "Fashion MNIST is intended as a drop-in replacement for the classic [MNIST](http://yann.lecun.com/exdb/mnist/) dataset—often used as the \"Hello, World\" of machine learning programs for computer vision. You can access the Fashion MNIST dataset directly from TensorFlow.\n", |
169 | 169 | "\n", |
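The dataset layout described above (60,000 training and 10,000 test examples of 28x28 grayscale images with labels from 10 classes) can be sketched as follows. This is a minimal NumPy stand-in, not the notebook's actual loading code: in TensorFlow the real data comes from `tf.keras.datasets.fashion_mnist.load_data()`; random arrays are used here purely to illustrate the shapes and the label-to-class mapping.

```python
import numpy as np

# The 10 apparel classes of Fashion MNIST, indexed by the integer label.
class_names = ["T-shirt/top", "Trouser", "Pullover", "Dress", "Coat",
               "Sandal", "Shirt", "Sneaker", "Bag", "Ankle boot"]

# Random placeholders with the real dataset's shapes and dtypes.
# (With TensorFlow available, you would instead write:
#  (x_train, y_train), (x_test, y_test) = \
#      tf.keras.datasets.fashion_mnist.load_data())
rng = np.random.default_rng(0)
x_train = rng.integers(0, 256, size=(60000, 28, 28), dtype=np.uint8)
y_train = rng.integers(0, 10, size=(60000,), dtype=np.uint8)
x_test = rng.integers(0, 256, size=(10000, 28, 28), dtype=np.uint8)
y_test = rng.integers(0, 10, size=(10000,), dtype=np.uint8)

# Pixel intensities are typically scaled to [0, 1] before training.
x_train_scaled = x_train.astype("float32") / 255.0
```

Keeping the scaling step explicit matters later: adversarial perturbations are usually clipped back into the same [0, 1] pixel range.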
|
230 | 230 | "## Model Architecture Details\n", |
231 | 231 | "\n", |
232 | 232 | "\n", |
233 | | - "\u003cfont size=2\u003eSource: [CNN Essentials](https://github.com/dipanjanS/convolutional_neural_networks_essentials/tree/master/presentation)\u003c/font\u003e" |
| 233 | + "<font size=2>Source: [CNN Essentials](https://github.com/dipanjanS/convolutional_neural_networks_essentials/tree/master/presentation)</font>" |
234 | 234 | ] |
235 | 235 | }, |
236 | 236 | { |
|
563 | 563 | "Adversarial examples are specialised inputs created with the purpose of confusing a neural network, resulting in the misclassification of a given input. These notorious inputs are indistinguishable from normal inputs to the human eye, but cause the network to fail to identify the contents of the image. There are several types of such attacks; here, however, the focus is on the fast gradient sign method (FGSM) attack, a *white box* attack whose goal is to ensure misclassification. In a white box attack, the attacker has complete access to the model being attacked. One of the most famous examples of an adversarial image, shown below, is taken from the paper cited beneath it.\n", |
564 | 564 | "\n", |
565 | 565 | "\n", |
566 | | - "\u003cfont size=2\u003eSource: [Explaining and Harnessing Adversarial Examples, Goodfellow et al., 2014](https://arxiv.org/abs/1412.6572)\u003c/font\u003e\n", |
| 566 | + "<font size=2>Source: [Explaining and Harnessing Adversarial Examples, Goodfellow et al., 2014](https://arxiv.org/abs/1412.6572)</font>\n", |
567 | 567 | "\n", |
568 | 568 | "Here, starting with the image of a panda, the attacker adds small perturbations (distortions) to the original image, which results in the model labelling this image as a gibbon, with high confidence. The process of adding these perturbations is explained below.\n", |
569 | 569 | "\n", |
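The perturbation process described above is `x_adv = x + epsilon * sign(grad_x loss)`. As a minimal sketch, the snippet below applies FGSM to a toy logistic-regression model where the input gradient is analytic (`(p - y) * w`); this toy model, its weights, and the `fgsm_perturb` helper are illustrative assumptions, not the notebook's code — with a real CNN you would obtain the input gradient via `tf.GradientTape` instead.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def fgsm_perturb(x, y, w, b, epsilon):
    """FGSM for a toy logistic model p = sigmoid(w.x + b) (hypothetical helper).

    For binary cross-entropy loss, the gradient w.r.t. the input x is
    (p - y) * w, so the adversarial example is x + epsilon * sign(grad).
    The result is clipped back into the valid [0, 1] pixel range.
    """
    p = sigmoid(np.dot(w, x) + b)
    grad_x = (p - y) * w
    return np.clip(x + epsilon * np.sign(grad_x), 0.0, 1.0)

# Toy example: a 4-pixel "image" with fixed, made-up weights.
w = np.array([1.0, -2.0, 0.5, 0.0])
x = np.array([0.5, 0.5, 0.5, 0.5])
x_adv = fgsm_perturb(x, y=1.0, w=w, b=0.0, epsilon=0.1)
# Each pixel moves by at most epsilon, yet the loss increases as fast
# as any perturbation of that size can make it.
```

Note that the sign function makes this an L-infinity-bounded attack: every pixel shifts by exactly epsilon (or not at all, where the gradient is zero), which is why the perturbation is nearly invisible for small epsilon.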
|