
Commit 920a9e2

AleksMatt authored and tensorflow-copybara committed
Replace unicode escaped characters in ipynb files
PiperOrigin-RevId: 854258501
1 parent c60d48b commit 920a9e2

7 files changed

Lines changed: 86 additions & 86 deletions
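The change itself is mechanical: each notebook's JSON is re-serialized so that `\uXXXX` escapes such as `\u003c` (`<`) become literal characters. A minimal sketch of such a conversion is below; this is a hypothetical script for illustration, not necessarily the tool that produced this commit, and it normalizes the file to `indent=1`, which may differ from the original on-disk layout.

```python
import json

def unescape_ipynb(path):
    # Round-trip the notebook through the JSON parser: json.load decodes
    # every \uXXXX escape (e.g. \u003c -> '<'), and json.dump writes the
    # decoded characters back out literally.
    with open(path, encoding="utf-8") as f:
        nb = json.load(f)
    with open(path, "w", encoding="utf-8") as f:
        # ensure_ascii=False also keeps genuine non-ASCII characters literal
        json.dump(nb, f, indent=1, ensure_ascii=False)
        f.write("\n")
```

For real notebooks, the `nbformat` library's reader/writer would preserve the conventional notebook layout more faithfully than raw `json.dump`.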


g3doc/tutorials/adversarial_keras_cnn_mnist.ipynb

Lines changed: 15 additions & 15 deletions
```diff
@@ -46,20 +46,20 @@
 "id": "wfqlePz0g6o5"
 },
 "source": [
-"\u003ctable class=\"tfo-notebook-buttons\" align=\"left\"\u003e\n",
-" \u003ctd\u003e\n",
-" \u003ca target=\"_blank\" href=\"https://www.tensorflow.org/neural_structured_learning/tutorials/adversarial_keras_cnn_mnist\"\u003e\u003cimg src=\"https://www.tensorflow.org/images/tf_logo_32px.png\" /\u003eView on TensorFlow.org\u003c/a\u003e\n",
-" \u003c/td\u003e\n",
-" \u003ctd\u003e\n",
-" \u003ca target=\"_blank\" href=\"https://colab.research.google.com/github/tensorflow/neural-structured-learning/blob/master/g3doc/tutorials/adversarial_keras_cnn_mnist.ipynb\"\u003e\u003cimg src=\"https://www.tensorflow.org/images/colab_logo_32px.png\" /\u003eRun in Google Colab\u003c/a\u003e\n",
-" \u003c/td\u003e\n",
-" \u003ctd\u003e\n",
-" \u003ca target=\"_blank\" href=\"https://github.com/tensorflow/neural-structured-learning/blob/master/g3doc/tutorials/adversarial_keras_cnn_mnist.ipynb\"\u003e\u003cimg src=\"https://www.tensorflow.org/images/GitHub-Mark-32px.png\" /\u003eView source on GitHub\u003c/a\u003e\n",
-" \u003c/td\u003e\n",
-" \u003ctd\u003e\n",
-" \u003ca href=\"https://storage.googleapis.com/tensorflow_docs/neural-structured-learning/g3doc/tutorials/adversarial_keras_cnn_mnist.ipynb\"\u003e\u003cimg src=\"https://www.tensorflow.org/images/download_logo_32px.png\" /\u003eDownload notebook\u003c/a\u003e\n",
-" \u003c/td\u003e\n",
-"\u003c/table\u003e"
+"<table class=\"tfo-notebook-buttons\" align=\"left\">\n",
+" <td>\n",
+" <a target=\"_blank\" href=\"https://www.tensorflow.org/neural_structured_learning/tutorials/adversarial_keras_cnn_mnist\"><img src=\"https://www.tensorflow.org/images/tf_logo_32px.png\" />View on TensorFlow.org</a>\n",
+" </td>\n",
+" <td>\n",
+" <a target=\"_blank\" href=\"https://colab.research.google.com/github/tensorflow/neural-structured-learning/blob/master/g3doc/tutorials/adversarial_keras_cnn_mnist.ipynb\"><img src=\"https://www.tensorflow.org/images/colab_logo_32px.png\" />Run in Google Colab</a>\n",
+" </td>\n",
+" <td>\n",
+" <a target=\"_blank\" href=\"https://github.com/tensorflow/neural-structured-learning/blob/master/g3doc/tutorials/adversarial_keras_cnn_mnist.ipynb\"><img src=\"https://www.tensorflow.org/images/GitHub-Mark-32px.png\" />View source on GitHub</a>\n",
+" </td>\n",
+" <td>\n",
+" <a href=\"https://storage.googleapis.com/tensorflow_docs/neural-structured-learning/g3doc/tutorials/adversarial_keras_cnn_mnist.ipynb\"><img src=\"https://www.tensorflow.org/images/download_logo_32px.png\" />Download notebook</a>\n",
+" </td>\n",
+"</table>"
 ]
 },
 {
@@ -401,7 +401,7 @@
 " x = tf.keras.layers.Conv2D(\n",
 " num_filters, hparams.kernel_size, activation='relu')(\n",
 " x)\n",
-" if i \u003c len(hparams.conv_filters) - 1:\n",
+" if i < len(hparams.conv_filters) - 1:\n",
 " # max pooling between convolutional layers\n",
 " x = tf.keras.layers.MaxPooling2D(hparams.pool_size)(x)\n",
 " x = tf.keras.layers.Flatten()(x)\n",
```

g3doc/tutorials/graph_keras_lstm_imdb.ipynb

Lines changed: 21 additions & 21 deletions
```diff
@@ -39,23 +39,23 @@
 "source": [
 "# Graph regularization for sentiment classification using synthesized graphs\n",
 "\n",
-"\u003ctable class=\"tfo-notebook-buttons\" align=\"left\"\u003e\n",
-" \u003ctd\u003e\n",
-" \u003ca target=\"_blank\" href=\"https://www.tensorflow.org/neural_structured_learning/tutorials/graph_keras_lstm_imdb\"\u003e\u003cimg src=\"https://www.tensorflow.org/images/tf_logo_32px.png\" /\u003eView on TensorFlow.org\u003c/a\u003e\n",
-" \u003c/td\u003e\n",
-" \u003ctd\u003e\n",
-" \u003ca target=\"_blank\" href=\"https://colab.research.google.com/github/tensorflow/neural-structured-learning/blob/master/g3doc/tutorials/graph_keras_lstm_imdb.ipynb\"\u003e\u003cimg src=\"https://www.tensorflow.org/images/colab_logo_32px.png\" /\u003eRun in Google Colab\u003c/a\u003e\n",
-" \u003c/td\u003e\n",
-" \u003ctd\u003e\n",
-" \u003ca target=\"_blank\" href=\"https://github.com/tensorflow/neural-structured-learning/blob/master/g3doc/tutorials/graph_keras_lstm_imdb.ipynb\"\u003e\u003cimg src=\"https://www.tensorflow.org/images/GitHub-Mark-32px.png\" /\u003eView source on GitHub\u003c/a\u003e\n",
-" \u003c/td\u003e\n",
-" \u003ctd\u003e\n",
-" \u003ca href=\"https://storage.googleapis.com/tensorflow_docs/neural-structured-learning/g3doc/tutorials/graph_keras_lstm_imdb.ipynb\"\u003e\u003cimg src=\"https://www.tensorflow.org/images/download_logo_32px.png\" /\u003eDownload notebook\u003c/a\u003e\n",
-" \u003c/td\u003e\n",
-" \u003ctd\u003e\n",
-" \u003ca href=\"https://tfhub.dev/\"\u003e\u003cimg src=\"https://www.tensorflow.org/images/hub_logo_32px.png\" /\u003eSee TF Hub model\u003c/a\u003e\n",
-" \u003c/td\u003e\n",
-"\u003c/table\u003e"
+"<table class=\"tfo-notebook-buttons\" align=\"left\">\n",
+" <td>\n",
+" <a target=\"_blank\" href=\"https://www.tensorflow.org/neural_structured_learning/tutorials/graph_keras_lstm_imdb\"><img src=\"https://www.tensorflow.org/images/tf_logo_32px.png\" />View on TensorFlow.org</a>\n",
+" </td>\n",
+" <td>\n",
+" <a target=\"_blank\" href=\"https://colab.research.google.com/github/tensorflow/neural-structured-learning/blob/master/g3doc/tutorials/graph_keras_lstm_imdb.ipynb\"><img src=\"https://www.tensorflow.org/images/colab_logo_32px.png\" />Run in Google Colab</a>\n",
+" </td>\n",
+" <td>\n",
+" <a target=\"_blank\" href=\"https://github.com/tensorflow/neural-structured-learning/blob/master/g3doc/tutorials/graph_keras_lstm_imdb.ipynb\"><img src=\"https://www.tensorflow.org/images/GitHub-Mark-32px.png\" />View source on GitHub</a>\n",
+" </td>\n",
+" <td>\n",
+" <a href=\"https://storage.googleapis.com/tensorflow_docs/neural-structured-learning/g3doc/tutorials/graph_keras_lstm_imdb.ipynb\"><img src=\"https://www.tensorflow.org/images/download_logo_32px.png\" />Download notebook</a>\n",
+" </td>\n",
+" <td>\n",
+" <a href=\"https://tfhub.dev/\"><img src=\"https://www.tensorflow.org/images/hub_logo_32px.png\" />See TF Hub model</a>\n",
+" </td>\n",
+"</table>"
 ]
 },
 {
@@ -314,10 +314,10 @@
 "\n",
 " # The first indices are reserved\n",
 " word_index = {k: (v + 3) for k, v in word_index.items()}\n",
-" word_index['\u003cPAD\u003e'] = 0\n",
-" word_index['\u003cSTART\u003e'] = 1\n",
-" word_index['\u003cUNK\u003e'] = 2 # unknown\n",
-" word_index['\u003cUNUSED\u003e'] = 3\n",
+" word_index['<PAD>'] = 0\n",
+" word_index['<START>'] = 1\n",
+" word_index['<UNK>'] = 2 # unknown\n",
+" word_index['<UNUSED>'] = 3\n",
 " return dict((value, key) for (key, value) in word_index.items())\n",
 "\n",
 "reverse_word_index = build_reverse_word_index()\n",
```

g3doc/tutorials/graph_keras_mlp_cora.ipynb

Lines changed: 14 additions & 14 deletions
```diff
@@ -46,20 +46,20 @@
 "id": "pL9fF9FWI-Q1"
 },
 "source": [
-"\u003ctable class=\"tfo-notebook-buttons\" align=\"left\"\u003e\n",
-" \u003ctd\u003e\n",
-" \u003ca target=\"_blank\" href=\"https://www.tensorflow.org/neural_structured_learning/tutorials/graph_keras_mlp_cora\"\u003e\u003cimg src=\"https://www.tensorflow.org/images/tf_logo_32px.png\" /\u003eView on TensorFlow.org\u003c/a\u003e\n",
-" \u003c/td\u003e\n",
-" \u003ctd\u003e\n",
-" \u003ca target=\"_blank\" href=\"https://colab.research.google.com/github/tensorflow/neural-structured-learning/blob/master/g3doc/tutorials/graph_keras_mlp_cora.ipynb\"\u003e\u003cimg src=\"https://www.tensorflow.org/images/colab_logo_32px.png\" /\u003eRun in Google Colab\u003c/a\u003e\n",
-" \u003c/td\u003e\n",
-" \u003ctd\u003e\n",
-" \u003ca target=\"_blank\" href=\"https://github.com/tensorflow/neural-structured-learning/blob/master/g3doc/tutorials/graph_keras_mlp_cora.ipynb\"\u003e\u003cimg src=\"https://www.tensorflow.org/images/GitHub-Mark-32px.png\" /\u003eView source on GitHub\u003c/a\u003e\n",
-" \u003c/td\u003e\n",
-" \u003ctd\u003e\n",
-" \u003ca href=\"https://storage.googleapis.com/tensorflow_docs/neural-structured-learning/g3doc/tutorials/graph_keras_mlp_cora.ipynb\"\u003e\u003cimg src=\"https://www.tensorflow.org/images/download_logo_32px.png\" /\u003eDownload notebook\u003c/a\u003e\n",
-" \u003c/td\u003e\n",
-"\u003c/table\u003e"
+"<table class=\"tfo-notebook-buttons\" align=\"left\">\n",
+" <td>\n",
+" <a target=\"_blank\" href=\"https://www.tensorflow.org/neural_structured_learning/tutorials/graph_keras_mlp_cora\"><img src=\"https://www.tensorflow.org/images/tf_logo_32px.png\" />View on TensorFlow.org</a>\n",
+" </td>\n",
+" <td>\n",
+" <a target=\"_blank\" href=\"https://colab.research.google.com/github/tensorflow/neural-structured-learning/blob/master/g3doc/tutorials/graph_keras_mlp_cora.ipynb\"><img src=\"https://www.tensorflow.org/images/colab_logo_32px.png\" />Run in Google Colab</a>\n",
+" </td>\n",
+" <td>\n",
+" <a target=\"_blank\" href=\"https://github.com/tensorflow/neural-structured-learning/blob/master/g3doc/tutorials/graph_keras_mlp_cora.ipynb\"><img src=\"https://www.tensorflow.org/images/GitHub-Mark-32px.png\" />View source on GitHub</a>\n",
+" </td>\n",
+" <td>\n",
+" <a href=\"https://storage.googleapis.com/tensorflow_docs/neural-structured-learning/g3doc/tutorials/graph_keras_mlp_cora.ipynb\"><img src=\"https://www.tensorflow.org/images/download_logo_32px.png\" />Download notebook</a>\n",
+" </td>\n",
+"</table>"
 ]
 },
 {
```

neural_structured_learning/examples/notebooks/adversarial_cnn_transfer_learning_fashionmnist.ipynb

Lines changed: 23 additions & 23 deletions
```diff
@@ -38,16 +38,16 @@
 "source": [
 "# Adversarial Learning: Building Robust Image Classifiers\n",
 "\n",
-"\u003cbr\u003e\n",
+"<br>\n",
 "\n",
-"\u003ctable class=\"tfo-notebook-buttons\" align=\"left\"\u003e\n",
-" \u003ctd\u003e\n",
-" \u003ca target=\"_blank\" href=\"https://colab.research.google.com/github/tensorflow/neural-structured-learning/blob/master/neural_structured_learning/examples/notebooks/adversarial_cnn_transfer_learning_fashionmnist.ipynb\"\u003e\u003cimg src=\"https://www.tensorflow.org/images/colab_logo_32px.png\" /\u003eRun in Google Colab\u003c/a\u003e\n",
-" \u003c/td\u003e\n",
-" \u003ctd\u003e\n",
-" \u003ca target=\"_blank\" href=\"https://github.com/tensorflow/neural-structured-learning/blob/master/neural_structured_learning/examples/notebooks/adversarial_cnn_transfer_learning_fashionmnist.ipynb\"\u003e\u003cimg src=\"https://www.tensorflow.org/images/GitHub-Mark-32px.png\" /\u003eView source on GitHub\u003c/a\u003e\n",
-" \u003c/td\u003e\n",
-"\u003c/table\u003e"
+"<table class=\"tfo-notebook-buttons\" align=\"left\">\n",
+" <td>\n",
+" <a target=\"_blank\" href=\"https://colab.research.google.com/github/tensorflow/neural-structured-learning/blob/master/neural_structured_learning/examples/notebooks/adversarial_cnn_transfer_learning_fashionmnist.ipynb\"><img src=\"https://www.tensorflow.org/images/colab_logo_32px.png\" />Run in Google Colab</a>\n",
+" </td>\n",
+" <td>\n",
+" <a target=\"_blank\" href=\"https://github.com/tensorflow/neural-structured-learning/blob/master/neural_structured_learning/examples/notebooks/adversarial_cnn_transfer_learning_fashionmnist.ipynb\"><img src=\"https://www.tensorflow.org/images/GitHub-Mark-32px.png\" />View source on GitHub</a>\n",
+" </td>\n",
+"</table>"
 ]
 },
 {
@@ -71,7 +71,7 @@
 "The most popular deep learning models leveraged for computer vision problems are convolutional neural networks (CNNs)!\n",
 "\n",
 "![](https://i.imgur.com/32WEbHg.png)\n",
-"\u003cfont size=2\u003eCreated by: Dipanjan Sarkar\u003c/font\u003e\n",
+"<font size=2>Created by: Dipanjan Sarkar</font>\n",
 "\n",
 "We will look at how we can build, train and evaluate a multi-class CNN classifier in this notebook and also perform adversarial learning.\n",
 "\n",
@@ -81,7 +81,7 @@
 "The idea is to leverage a pre-trained model instead of building a CNN from scratch in our image classification problem\n",
 "\n",
 "![](https://i.imgur.com/WcUabml.png)\n",
-"\u003cfont size=2\u003eSource: [CNN Essentials](https://github.com/dipanjanS/convolutional_neural_networks_essentials/tree/master/presentation)\u003c/font\u003e\n",
+"<font size=2>Source: [CNN Essentials](https://github.com/dipanjanS/convolutional_neural_networks_essentials/tree/master/presentation)</font>\n",
 "\n",
 "## Tutorial Outline\n",
 "\n",
@@ -145,7 +145,7 @@
 "id": "BNWq4-tI3MyT"
 },
 "source": [
-"# Main Objective — Building an Apparel Classifier \u0026 Performing Adversarial Learning \n",
+"# Main Objective — Building an Apparel Classifier & Performing Adversarial Learning \n",
 "\n",
 "- We will keep things simple here with regard to the key objective. We will build a simple apparel classifier by training models on the very famous [Fashion MNIST](https://github.com/zalandoresearch/fashion-mnist) dataset based on Zalando’s article images — consisting of a training set of 60,000 examples and a test set of 10,000 examples. Each example is a 28x28 grayscale image, associated with a label from 10 classes. The task is to classify these images into an apparel category amongst 10 categories on which we will be training our models on.\n",
 "\n",
@@ -155,15 +155,15 @@
 "\n",
 "Here's an example how the data looks (each class takes three-rows):\n",
 "\n",
-"\u003ctable\u003e\n",
-" \u003ctr\u003e\u003ctd\u003e\n",
-" \u003cimg src=\"https://raw.githubusercontent.com/zalandoresearch/fashion-mnist/master/doc/img/fashion-mnist-sprite.png\"\n",
-" alt=\"Fashion MNIST sprite\" width=\"600\"\u003e\n",
-" \u003c/td\u003e\u003c/tr\u003e\n",
-" \u003ctr\u003e\u003ctd align=\"center\"\u003e\n",
-" \u003ca href=\"https://github.com/zalandoresearch/fashion-mnist\"\u003eFashion-MNIST samples\u003c/a\u003e (by Zalando, MIT License).\u003cbr/\u003e\u0026nbsp;\n",
-" \u003c/td\u003e\u003c/tr\u003e\n",
-"\u003c/table\u003e\n",
+"<table>\n",
+" <tr><td>\n",
+" <img src=\"https://raw.githubusercontent.com/zalandoresearch/fashion-mnist/master/doc/img/fashion-mnist-sprite.png\"\n",
+" alt=\"Fashion MNIST sprite\" width=\"600\">\n",
+" </td></tr>\n",
+" <tr><td align=\"center\">\n",
+" <a href=\"https://github.com/zalandoresearch/fashion-mnist\">Fashion-MNIST samples</a> (by Zalando, MIT License).<br/>&nbsp;\n",
+" </td></tr>\n",
+"</table>\n",
 "\n",
 "Fashion MNIST is intended as a drop-in replacement for the classic [MNIST](http://yann.lecun.com/exdb/mnist/) dataset—often used as the \"Hello, World\" of machine learning programs for computer vision. You can access the Fashion MNIST dataset directly from TensorFlow.\n",
 "\n",
@@ -230,7 +230,7 @@
 "## Model Architecture Details\n",
 "\n",
 "![](https://i.imgur.com/1VZ7MlO.png)\n",
-"\u003cfont size=2\u003eSource: [CNN Essentials](https://github.com/dipanjanS/convolutional_neural_networks_essentials/tree/master/presentation)\u003c/font\u003e"
+"<font size=2>Source: [CNN Essentials](https://github.com/dipanjanS/convolutional_neural_networks_essentials/tree/master/presentation)</font>"
 ]
 },
 {
@@ -563,7 +563,7 @@
 "Adversarial examples are specialised inputs created with the purpose of confusing a neural network, resulting in the misclassification of a given input. These notorious inputs are indistinguishable to the human eye, but cause the network to fail to identify the contents of the image. There are several types of such attacks, however, here the focus is on the fast gradient sign method attack, which is a *white box* attack whose goal is to ensure misclassification. A white box attack is where the attacker has complete access to the model being attacked. One of the most famous examples of an adversarial image shown below is taken from the aforementioned paper.\n",
 "\n",
 "![Adversarial Example](https://i.imgur.com/FyYq2Q0.png)\n",
-"\u003cfont size=2\u003eSource: [Explaining and Harnessing Adversarial Examples, Goodfellow et al., 2014](https://arxiv.org/abs/1412.6572)\u003c/font\u003e\n",
+"<font size=2>Source: [Explaining and Harnessing Adversarial Examples, Goodfellow et al., 2014](https://arxiv.org/abs/1412.6572)</font>\n",
 "\n",
 "Here, starting with the image of a panda, the attacker adds small perturbations (distortions) to the original image, which results in the model labelling this image as a gibbon, with high confidence. The process of adding these perturbations is explained below.\n",
 "\n",
```