path | concatenated_notebook
---|---
Class/Data Structure .ipynb | ###Markdown
Fundamental Data Structures: The fundamental data structures in Python include - **Primitive types** (***Integer, Float, String***, and ***Boolean***) and - **Non-Primitive types** (***Array, List, Tuple, Dictionary, Set***, and ***File***). In this tutorial, we are going to discuss List, Tuple, Set, and Dictionary. List: A list is a built-in data structure in Python. It is - Mutable, i.e., we can change the size of the list by appending, inserting, and deleting elements. - Able to hold heterogeneous objects (e.g., integer, string, boolean). Let's try to understand the list: - To initiate a blank list.
###Code
l = []
###Output
_____no_output_____
###Markdown
- To find the type of the object.
###Code
type(l)
###Output
_____no_output_____
###Markdown
- To create a list from scratch.
###Code
L = [1,2,3,4,5,6,342,34]
L
###Output
_____no_output_____
###Markdown
- Indexing of list.
###Code
L[0],L[1],L[5]
###Output
_____no_output_____
###Markdown
- Reverse indexing is also possible.
###Code
L[-1],L[-2],L[-3]
###Output
_____no_output_____
###Markdown
- To find the length of list.
###Code
len(L)
###Output
_____no_output_____
###Markdown
- To append an element at the end.
###Code
L.append(12)
L
###Output
_____no_output_____
###Markdown
- To find the sum of the elements (if they are of the same numeric type, like int or float).
###Code
sum(L)
###Output
_____no_output_____
###Markdown
- To find maximum and minimum of the list
###Code
max(L), min(L)
###Output
_____no_output_____
###Markdown
- To create a list of heterogeneous element types.
###Code
L = [1,2.0,3,4,5,"Apple",True, False]
###Output
_____no_output_____
###Markdown
- To find the type of elements of a list.
###Code
type(L[1]),type(L[5])
###Output
_____no_output_____
###Markdown
- To create a list of lists.
###Code
L = [[1,2,3],[3,4,5],[5,7,9]]
###Output
_____no_output_____
###Markdown
- To access a list inside a list (and an element within it).
###Code
L[0]
L[0][1]
###Output
_____no_output_____
###Markdown
- To add two lists. This is not the usual arithmetic addition; the lists are concatenated, and wrapping the result in `set` removes duplicates.
###Code
L1 = [1,2,3] ; L2 = [2,4,6]
L1+L2, set(L1+L2)
###Output
_____no_output_____
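###Markdown
Since `set` appeared above, here is a brief aside on sets (a sketch of standard set behavior: duplicates are dropped, and sets support union, intersection, and difference):
###Code
A = {1, 2, 3}; B = {2, 4, 6}
A | B, A & B, A - B  # union, intersection, difference
###Output
_____no_output_____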
###Markdown
- To append an element at the end of the list.
###Code
L = [1,4,2,3,5,6,7]
L.append(100)
L
###Output
_____no_output_____
###Markdown
- To insert element (100) at specific index (1)
###Code
L = [1,4,2,3,5,6,7]
L.insert(1,100)
L
###Output
_____no_output_____
###Markdown
- To remove a specific element from the list. It removes the first occurrence.
###Code
L = [1,4,2,3,5,6,7,4]
L.remove(4)
L
###Output
_____no_output_____
###Markdown
- To remove the element at a specific index.
###Code
x=[43,23,12,56,78,89,90]
x.pop(-4)
x
L = [1,4,2,3,5,6,7]
L.pop(-1)
L
###Output
_____no_output_____
###Markdown
- To sort the list
###Code
L = [1,10,2,30,5,60,7]
L.sort()
L
###Output
_____no_output_____
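###Markdown
Note that `sort()` sorts in place and returns `None`, while the built-in `sorted()` returns a new sorted list and leaves the original unchanged (standard Python behavior):
###Code
L = [1,10,2,30,5,60,7]
sorted(L), L  # sorted copy vs. the unchanged original
###Output
_____no_output_____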
###Markdown
- To reverse the list.
###Code
L = [1,4,2,3,5,6,7]
L.reverse()
L
###Output
_____no_output_____
###Markdown
- List comprehension
###Code
L = [x for x in range(100)]
print(L)
L = [x for x in range(100) if x%2==0]
print(L)
import random as rn
rn.randint(0,100)
import random as rn
R = [rn.randint(0,50) for k in range(200)]
print(R)
import collections
#High Performance Counting
C = collections.Counter(R)
print(C)
R = [rn.choice(['A','T','G','C']) for i in range(200)]
print(R)
DNA = ''.join(R)
DNA
DNA.count('A'), DNA.count('AT'), DNA.count('ATG')
###Output
_____no_output_____
###Markdown
Mini Assignment: Create a DNA string of 10,000 characters and count the following: A, T, G, C, all combinations of two characters, and all combinations of three characters. One possible solution is sketched in the next cell.
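###Code
# A minimal sketch for the mini assignment (one possible approach, not the only
# one): build the string with random.choice, count single characters directly,
# and count overlapping 2- and 3-character substrings with a sliding window;
# itertools.product enumerates every combination of the four bases.
import random, itertools, collections
dna = ''.join(random.choice('ATGC') for _ in range(10000))
print({base: dna.count(base) for base in 'ATGC'})
for k in (2, 3):
    windows = collections.Counter(dna[i:i+k] for i in range(len(dna) - k + 1))
    print({''.join(p): windows[''.join(p)] for p in itertools.product('ATGC', repeat=k)})
###Output
_____no_output_____
###Markdown
Tuples: Tuples are immutable, which means we cannot add or remove elements once a tuple is defined. - To define a tuple from scratch.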
###Code
t = (2,3,4,5)
###Output
_____no_output_____
###Markdown
- Find type
###Code
type(t)
###Output
_____no_output_____
###Markdown
- Indexing
###Code
t[1]
L = [(1,2),(2,3),(3,4)]
L[0][0]
###Output
_____no_output_____
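###Markdown
- Immutability in action: item assignment on a tuple raises a `TypeError` (standard Python behavior).
###Code
try:
    t[1] = 100
except TypeError as e:
    print(e)
###Output
_____no_output_____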
###Markdown
- Create a list of tuples
###Code
L = [(1,2),("a","b"),(True, False)]
L
###Output
_____no_output_____
###Markdown
Dictionary: A dictionary organizes data as key-value pairs. Dictionaries can be nested with other data types. - To initiate a dictionary.
###Code
D = dict()
DD = {}
###Output
_____no_output_____
###Markdown
- Create a dictionary from scratch
###Code
D = {"fruit":'apple',
"vegetable" : 'carrot',
"rice": 2.0,
'milk': 10,}
###Output
_____no_output_____
###Markdown
- What are keys?
###Code
D.keys()
###Output
_____no_output_____
###Markdown
- What are values?
###Code
D.values()
###Output
_____no_output_____
###Markdown
- Indexing
###Code
D['fruit'], D["rice"]
###Output
_____no_output_____
###Markdown
- Iteration over key and values
###Code
for key,value in D.items():
print(key,value)
###Output
fruit apple
vegetable carrot
rice 2.0
milk 10
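###Markdown
- Indexing a missing key raises a `KeyError`, while `dict.get` returns a default instead (standard Python behavior; `'bread'` is just an example of a key not in `D`).
###Code
D.get('bread', 'not found')
###Output
_____no_output_____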
###Markdown
- To update a dictionary
###Code
D.update({"salt": 2.0})
D
###Output
_____no_output_____
###Markdown
- To create a list from a dictionary. Only the keys are collected.
###Code
list(D)
###Output
_____no_output_____
###Markdown
- To create a list of keys only
###Code
list(D.keys())
###Output
_____no_output_____
###Markdown
- To create a list of values
###Code
list(D.values())
###Output
_____no_output_____
###Markdown
- To create a dictionary containing lists, tuples, and another dictionary.
###Code
DD = {"names":("John","Harry", "Brat"),\
"roll no": [1,2,3],\
"plan":{"first":[12,34,56],"second":[1,3,5]}}
DD
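# A brief aside (not in the original notebook): indexing into the nested structure.
# The outer key selects a container, which is then indexed by position or key.
DD["names"][1], DD["roll no"][0], DD["plan"]["first"][2]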
import numpy as np
X = np.arange(0,np.pi,0.1)
print(X)
import numpy as np
X = np.arange(0,np.pi,0.1)
M = {"sin": [np.sin(x) for x in X],\
"cos": [np.cos(x) for x in X],\
"plo":[(x*x+x+1) for x in X],\
"trig": [np.cos(x) + np.sin(x) for x in X]}
print(M)
import pandas as pd
DF = pd.DataFrame(M)
DF
%matplotlib inline
DF.plot()
###Output
_____no_output_____ |
lab2/Part1_MNIST.ipynb | ###Markdown
Run in Google Colab Copyright Information
###Code
# Copyright 2022 MIT 6.S191 Introduction to Deep Learning. All Rights Reserved.
# Modified by Martin Keller-Ressel 2022.
#
# Licensed under the MIT License. You may not use this file except in compliance
# with the License. Use and/or modification of this code outside of 6.S191 must
# reference:
#
# © MIT 6.S191: Introduction to Deep Learning
# http://introtodeeplearning.com
#
###Output
_____no_output_____
###Markdown
Laboratory 2: Computer Vision. Part 1: MNIST Digit Classification. In the first portion of this lab, we will build and train a convolutional neural network (CNN) for classification of handwritten digits from the famous [MNIST](http://yann.lecun.com/exdb/mnist/) dataset. The MNIST dataset consists of 60,000 training images and 10,000 test images. Our classes are the digits 0-9. First, let's download the course repository, install dependencies, and import the relevant packages we'll need for this lab.
###Code
# Import Tensorflow 2.0
%tensorflow_version 2.x
import tensorflow as tf
!pip install mitdeeplearning
import mitdeeplearning as mdl
import matplotlib.pyplot as plt
import numpy as np
import random
from tqdm import tqdm
# Check that we are using a GPU, if not switch runtimes
# using Runtime > Change Runtime Type > GPU
assert len(tf.config.list_physical_devices('GPU')) > 0
###Output
_____no_output_____
###Markdown
1.1 MNIST dataset Let's download and load the dataset and display a few random samples from it:
###Code
mnist = tf.keras.datasets.mnist
(train_images, train_labels), (test_images, test_labels) = mnist.load_data()
train_images = (np.expand_dims(train_images, axis=-1)/255.).astype(np.float32)
train_labels = (train_labels).astype(np.int64)
test_images = (np.expand_dims(test_images, axis=-1)/255.).astype(np.float32)
test_labels = (test_labels).astype(np.int64)
###Output
_____no_output_____
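###Markdown
A quick sanity check on the array shapes after preprocessing (the sizes below are the standard MNIST splits):
###Code
print(train_images.shape, train_labels.shape)  # (60000, 28, 28, 1) (60000,)
print(test_images.shape, test_labels.shape)    # (10000, 28, 28, 1) (10000,)
###Output
_____no_output_____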
###Markdown
Our training set is made up of 28x28 grayscale images of handwritten digits. Let's visualize what some of these images and their corresponding training labels look like.
###Code
plt.figure(figsize=(10,10))
random_inds = np.random.choice(60000,36)
for i in range(36):
plt.subplot(6,6,i+1)
plt.xticks([])
plt.yticks([])
plt.grid(False)
image_ind = random_inds[i]
plt.imshow(np.squeeze(train_images[image_ind]), cmap=plt.cm.binary)
plt.xlabel(train_labels[image_ind])
###Output
_____no_output_____
###Markdown
1.2 Neural Network for Handwritten Digit Classification. We'll first build a simple neural network consisting of two fully connected layers and apply this to the digit classification task. Our network will ultimately output a probability distribution over the 10 digit classes (0-9). This first architecture we will be building is depicted below: ![alt_text](https://raw.githubusercontent.com/aamini/introtodeeplearning/master/lab2/img/mnist_2layers_arch.png "CNN Architecture for MNIST Classification") Fully connected neural network architecture. To define the architecture of this first fully connected neural network, we'll once again use the Keras API and define the model using the [`Sequential`](https://www.tensorflow.org/api_docs/python/tf/keras/models/Sequential) class. Note how we first use a [`Flatten`](https://www.tensorflow.org/api_docs/python/tf/keras/layers/Flatten) layer, which flattens the input so that it can be fed into the model. In this next block, you'll define the fully connected layers of this simple network.
###Code
def build_fc_model():
fc_model = tf.keras.Sequential([
# First define a Flatten layer
tf.keras.layers.Flatten(),
# '''TODO: Define the activation function for the first fully connected (Dense) layer.'''
tf.keras.layers.Dense(128, activation= '''TODO'''),
# '''TODO: Define the second Dense layer to output the classification probabilities'''
'''TODO: Dense layer to output classification probabilities'''
])
return fc_model
model = build_fc_model()
###Output
_____no_output_____
###Markdown
As we progress through this next portion, you may find that you'll want to make changes to the architecture defined above. **Note that in order to update the model later on, you'll need to re-run the above cell to re-initialize the model.** Let's take a step back and think about the network we've just created. The first layer in this network, `tf.keras.layers.Flatten`, transforms the format of the images from a 2d-array (28 x 28 pixels) to a 1d-array of 28 * 28 = 784 pixels. You can think of this layer as unstacking rows of pixels in the image and lining them up. There are no learned parameters in this layer; it only reformats the data. After the pixels are flattened, the network consists of a sequence of two `tf.keras.layers.Dense` layers. These are fully-connected neural layers. The first `Dense` layer has 128 nodes (or neurons). The second (and last) layer (which you've defined!) should return an array of probability scores that sum to 1. Each node contains a score that indicates the probability that the current image belongs to one of the handwritten digit classes. That defines our fully connected model! Compile the model. Before training the model, we need to define a few more settings. These are added during the model's [`compile`](https://www.tensorflow.org/api_docs/python/tf/keras/models/Sequentialcompile) step: * *Loss function* — This defines how we measure how accurate the model is during training. As was covered in lecture, during training we want to minimize this function, which will "steer" the model in the right direction. * *Optimizer* — This defines how the model is updated based on the data it sees and its loss function. * *Metrics* — Here we can define metrics used to monitor the training and testing steps. In this example, we'll look at the *accuracy*, the fraction of the images that are correctly classified. We'll start out by using a stochastic gradient descent (SGD) optimizer initialized with a learning rate of 0.1. Since we are performing a categorical classification task, we'll want to use the [cross entropy loss](https://www.tensorflow.org/api_docs/python/tf/keras/metrics/sparse_categorical_crossentropy). You'll want to experiment with both the choice of optimizer and learning rate and evaluate how these affect the accuracy of the trained model.
###Code
'''TODO: Experiment with different optimizers and learning rates. How do these affect
the accuracy of the trained model? Which optimizers and/or learning rates yield
the best performance?'''
model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=1e-1),
loss='sparse_categorical_crossentropy',
metrics=['accuracy'])
###Output
_____no_output_____
###Markdown
Train the model. We're now ready to train our model, which will involve feeding the training data (`train_images` and `train_labels`) into the model, and then asking it to learn the associations between images and labels. We'll also need to define the batch size and the number of epochs, or iterations over the MNIST dataset, to use during training. In Lab 1, we saw how we can use `GradientTape` to optimize losses and train models with stochastic gradient descent. After defining the model settings in the `compile` step, we can also accomplish training by calling the [`fit`](https://www.tensorflow.org/api_docs/python/tf/keras/models/Sequentialfit) method on an instance of the `Model` class. We will use this to train our fully connected model.
###Code
# Define the batch size and the number of epochs to use during training
BATCH_SIZE = 64
EPOCHS = 5
model.fit(train_images, train_labels, batch_size=BATCH_SIZE, epochs=EPOCHS)
###Output
_____no_output_____
###Markdown
As the model trains, the loss and accuracy metrics are displayed. With five epochs and a learning rate of 0.1, this fully connected model should achieve an accuracy of approximately 0.97 (or 97%) on the training data. Evaluate accuracy on the test dataset. Now that we've trained the model, we can ask it to make predictions about a test set that it hasn't seen before. In this example, the `test_images` array comprises our test dataset. To evaluate accuracy, we can check to see if the model's predictions match the labels from the `test_labels` array. Use the [`evaluate`](https://www.tensorflow.org/api_docs/python/tf/keras/models/Sequentialevaluate) method to evaluate the model on the test dataset!
###Code
'''TODO: Use the evaluate method to test the model!'''
test_loss, test_acc = # TODO
print('Test accuracy:', test_acc)
###Output
_____no_output_____
###Markdown
You may observe that the accuracy on the test dataset is a little lower than the accuracy on the training dataset. This gap between training accuracy and test accuracy is an example of *overfitting*, when a machine learning model performs worse on new data than on its training data. What is the highest accuracy you can achieve with this first fully connected model? Since the handwritten digit classification task is pretty straightforward, you may be wondering how we can do better... ![Deeper...](https://i.kym-cdn.com/photos/images/newsfeed/000/534/153/f87.jpg) 1.3 Convolutional Neural Network (CNN) for handwritten digit classification. As we saw in lecture, convolutional neural networks (CNNs) are particularly well-suited for a variety of tasks in computer vision, and have achieved near-perfect accuracies on the MNIST dataset. We will now build a CNN composed of two convolutional layers and pooling layers, followed by two fully connected layers, and ultimately output a probability distribution over the 10 digit classes (0-9). The CNN we will be building is depicted below: ![alt_text](https://raw.githubusercontent.com/aamini/introtodeeplearning/master/lab2/img/convnet_fig.png "CNN Architecture for MNIST Classification") Define the CNN model. We'll use the same training and test datasets as before, and proceed similarly as our fully connected network to define and train our new CNN model. To do this we will explore two layers we have not encountered before: you can use [`keras.layers.Conv2D`](https://www.tensorflow.org/api_docs/python/tf/keras/layers/Conv2D) to define convolutional layers and [`keras.layers.MaxPool2D`](https://www.tensorflow.org/api_docs/python/tf/keras/layers/MaxPool2D) to define the pooling layers. Use the parameters shown in the network architecture above to define these layers and build the CNN model.
###Code
def build_cnn_model():
cnn_model = tf.keras.Sequential([
# TODO: Define the first convolutional layer
tf.keras.layers.Conv2D('''TODO'''),
# TODO: Define the first max pooling layer
tf.keras.layers.MaxPool2D('''TODO'''),
# TODO: Define the second convolutional layer
tf.keras.layers.Conv2D('''TODO'''),
# TODO: Define the second max pooling layer
tf.keras.layers.MaxPool2D('''TODO'''),
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(128, activation=tf.nn.relu),
# TODO: Define the last Dense layer to output the classification
# probabilities. Pay attention to the activation needed for a probability
# output
'''TODO: Dense layer to output classification probabilities'''
])
return cnn_model
cnn_model = build_cnn_model()
# Initialize the model by passing some data through
cnn_model.predict(train_images[[0]])
# Print the summary of the layers in the model.
print(cnn_model.summary())
###Output
_____no_output_____
###Markdown
Train and test the CNN model. Now, as before, we can define the loss function, optimizer, and metrics through the `compile` method. Compile the CNN model with an optimizer and learning rate of choice:
###Code
'''TODO: Define the compile operation with your optimizer and learning rate of choice'''
cnn_model.compile(optimizer='''TODO''', loss='''TODO''', metrics=['accuracy']) # TODO
###Output
_____no_output_____
###Markdown
As was the case with the fully connected model, we can train our CNN using the `fit` method via the Keras API.
###Code
'''TODO: Use model.fit to train the CNN model, with the same batch_size and number of epochs previously used.'''
cnn_model.fit('''TODO''')
###Output
_____no_output_____
###Markdown
Great! Now that we've trained the model, let's evaluate it on the test dataset using the [`evaluate`](https://www.tensorflow.org/api_docs/python/tf/keras/models/Sequentialevaluate) method:
###Code
'''TODO: Use the evaluate method to test the model!'''
test_loss, test_acc = # TODO
print('Test accuracy:', test_acc)
###Output
_____no_output_____
###Markdown
What is the highest accuracy you're able to achieve using the CNN model, and how does the accuracy of the CNN model compare to the accuracy of the simple fully connected network? What optimizers and learning rates seem to be optimal for training the CNN model? Make predictions with the CNN model. With the model trained, we can use it to make predictions about some images. The [`predict`](https://www.tensorflow.org/api_docs/python/tf/keras/models/Sequentialpredict) function call generates the output predictions given a set of input samples.
###Code
predictions = cnn_model.predict(test_images)
###Output
_____no_output_____
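###Markdown
Since the final layer is a softmax, every row of `predictions` should be a probability distribution over the 10 classes. A quick sanity check (a sketch, assuming the TODOs above were completed so the model actually ran):
###Code
print(predictions.shape)                          # one row of 10 probabilities per test image
print(np.allclose(predictions.sum(axis=1), 1.0))  # each row sums to 1
###Output
_____no_output_____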
###Markdown
With this function call, the model has predicted the label for each image in the testing set. Let's take a look at the prediction for the first image in the test dataset:
###Code
predictions[0]
###Output
_____no_output_____
###Markdown
As you can see, a prediction is an array of 10 numbers. Recall that the output of our model is a probability distribution over the 10 digit classes. Thus, these numbers describe the model's "confidence" that the image corresponds to each of the 10 different digits. Let's look at the digit that has the highest confidence for the first image in the test dataset:
###Code
'''TODO: identify the digit with the highest confidence prediction for the first
image in the test dataset. '''
prediction = # TODO
print(prediction)
###Output
_____no_output_____
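###Markdown
One possible way to complete the TODO above (a hedged sketch, not necessarily the intended solution): `np.argmax` returns the index of the largest entry in a row, which here is the most confident digit class.
###Code
# Hypothetical solution sketch; argmax picks the highest-probability class.
print(np.argmax(predictions[0]))
###Output
_____no_output_____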
###Markdown
So, the model is most confident that this image is a "???". We can check the test label (remember, this is the true identity of the digit) to see if this prediction is correct:
###Code
print("Label of this digit is:", test_labels[0])
plt.imshow(test_images[0,:,:,0], cmap=plt.cm.binary)
###Output
_____no_output_____
###Markdown
It is! Let's visualize the classification results on the MNIST dataset. We will plot images from the test dataset along with their predicted label, as well as a histogram that provides the prediction probabilities for each of the digits:
###Code
#@title Change the slider to look at the model's predictions! { run: "auto" }
image_index = 79 #@param {type:"slider", min:0, max:100, step:1}
plt.subplot(1,2,1)
mdl.lab2.plot_image_prediction(image_index, predictions, test_labels, test_images)
plt.subplot(1,2,2)
mdl.lab2.plot_value_prediction(image_index, predictions, test_labels)
###Output
_____no_output_____
###Markdown
We can also plot several images along with their predictions, where correct prediction labels are blue and incorrect prediction labels are grey. The number gives the percent confidence (out of 100) for the predicted label. Note the model can be very confident in an incorrect prediction!
###Code
# Plots the first X test images, their predicted label, and the true label
# Color correct predictions in blue, incorrect predictions in red
num_rows = 5
num_cols = 4
num_images = num_rows*num_cols
plt.figure(figsize=(2*2*num_cols, 2*num_rows))
for i in range(num_images):
plt.subplot(num_rows, 2*num_cols, 2*i+1)
mdl.lab2.plot_image_prediction(i, predictions, test_labels, test_images)
plt.subplot(num_rows, 2*num_cols, 2*i+2)
mdl.lab2.plot_value_prediction(i, predictions, test_labels)
###Output
_____no_output_____
###Markdown
1.4 Training the model 2.0. Earlier in the lab, we used the [`fit`](https://www.tensorflow.org/api_docs/python/tf/keras/models/Sequentialfit) function call to train the model. This function is quite high-level and intuitive, which is really useful for simpler models. As you may be able to tell, this function abstracts away many details in the training call, and we have less control over the training process; that extra control could be useful in other contexts. As an alternative to this, we can use the [`tf.GradientTape`](https://www.tensorflow.org/api_docs/python/tf/GradientTape) class to record differentiation operations during training, and then call the [`tf.GradientTape.gradient`](https://www.tensorflow.org/api_docs/python/tf/GradientTapegradient) function to actually compute the gradients. You may recall seeing this in Lab 1 Part 1, but let's take another look at this here. We'll use this framework to train our `cnn_model` using stochastic gradient descent.
###Code
# Rebuild the CNN model
cnn_model = build_cnn_model()
batch_size = 12
loss_history = mdl.util.LossHistory(smoothing_factor=0.95) # to record the evolution of the loss
plotter = mdl.util.PeriodicPlotter(sec=2, xlabel='Iterations', ylabel='Loss', scale='semilogy')
optimizer = tf.keras.optimizers.SGD(learning_rate=1e-2) # define our optimizer
if hasattr(tqdm, '_instances'): tqdm._instances.clear() # clear if it exists
for idx in tqdm(range(0, train_images.shape[0], batch_size)):
# First grab a batch of training data and convert the input images to tensors
(images, labels) = (train_images[idx:idx+batch_size], train_labels[idx:idx+batch_size])
images = tf.convert_to_tensor(images, dtype=tf.float32)
# GradientTape to record differentiation operations
with tf.GradientTape() as tape:
#'''TODO: feed the images into the model and obtain the predictions'''
logits = # TODO
#'''TODO: compute the categorical cross entropy loss
loss_value = tf.keras.backend.sparse_categorical_crossentropy() # TODO
loss_history.append(loss_value.numpy().mean()) # append the loss to the loss_history record
plotter.plot(loss_history.get())
# Backpropagation
'''TODO: Use the tape to compute the gradient against all parameters in the CNN model.
Use cnn_model.trainable_variables to access these parameters.'''
grads = # TODO
optimizer.apply_gradients(zip(grads, cnn_model.trainable_variables))
###Output
_____no_output_____
###Markdown
We can also plot several images along with their predictions, where correct prediction labels are blue and incorrect prediction labels are grey. The number gives the percent confidence (out of 100) for the predicted label. Note the model can be very confident in an incorrect prediction!
###Code
# Plots the first X test images, their predicted label, and the true label
# Color correct predictions in blue, incorrect predictions in red
num_rows = 5
num_cols = 4
num_images = num_rows*num_cols
plt.figure(figsize=(2*2*num_cols, 2*num_rows))
for i in range(num_images):
plt.subplot(num_rows, 2*num_cols, 2*i+1)
mdl.lab2.plot_image_prediction(i, predictions, test_labels, test_images)
plt.subplot(num_rows, 2*num_cols, 2*i+2)
mdl.lab2.plot_value_prediction(i, predictions, test_labels)
###Output
_____no_output_____
###Markdown
1.4 Training the model 2.0. Earlier in the lab, we used the [`fit`](https://www.tensorflow.org/api_docs/python/tf/keras/models/Sequentialfit) function call to train the model. This function is quite high-level and intuitive, which is really useful for simpler models. As you may be able to tell, this function abstracts away many details in the training call, and we have less control over the training process; that extra control could be useful in other contexts. As an alternative to this, we can use the [`tf.GradientTape`](https://www.tensorflow.org/api_docs/python/tf/GradientTape) class to record differentiation operations during training, and then call the [`tf.GradientTape.gradient`](https://www.tensorflow.org/api_docs/python/tf/GradientTapegradient) function to actually compute the gradients. You may recall seeing this in Lab 1 Part 1, but let's take another look at this here. We'll use this framework to train our `cnn_model` using stochastic gradient descent.
###Code
# Rebuild the CNN model
cnn_model = build_cnn_model()
batch_size = 12
loss_history = mdl.util.LossHistory(smoothing_factor=0.95) # to record the evolution of the loss
plotter = mdl.util.PeriodicPlotter(sec=2, xlabel='Iterations', ylabel='Loss', scale='semilogy')
optimizer = tf.keras.optimizers.SGD(learning_rate=1e-2) # define our optimizer
if hasattr(tqdm, '_instances'): tqdm._instances.clear() # clear if it exists
for idx in tqdm(range(0, train_images.shape[0], batch_size)):
# First grab a batch of training data and convert the input images to tensors
(images, labels) = (train_images[idx:idx+batch_size], train_labels[idx:idx+batch_size])
images = tf.convert_to_tensor(images, dtype=tf.float32)
# GradientTape to record differentiation operations
with tf.GradientTape() as tape:
#'''TODO: feed the images into the model and obtain the predictions'''
logits = cnn_model(images)
#'''TODO: compute the categorical cross entropy loss
loss_value = tf.keras.backend.sparse_categorical_crossentropy(labels, logits) # TODO
loss_history.append(loss_value.numpy().mean()) # append the loss to the loss_history record
plotter.plot(loss_history.get())
# Backpropagation
'''TODO: Use the tape to compute the gradient against all parameters in the CNN model.
Use cnn_model.trainable_variables to access these parameters.'''
grads = tape.gradient(loss_value, cnn_model.trainable_variables)
optimizer.apply_gradients(zip(grads, cnn_model.trainable_variables))
###Output
_____no_output_____
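###Markdown
After the custom training loop, we can estimate test accuracy directly from the model's predictions (a minimal sketch; `evaluate` would require a `compile` call first, so here accuracy is computed by hand with NumPy):
###Code
# Compare the most likely class per test image against the true labels.
preds = np.argmax(cnn_model.predict(test_images), axis=1)
print('Test accuracy:', np.mean(preds == test_labels))
###Output
_____no_output_____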
###Markdown
Visit MIT Deep Learning Run in Google Colab View Source on GitHub Copyright Information
###Code
# Copyright 2021 MIT 6.S191 Introduction to Deep Learning. All Rights Reserved.
#
# Licensed under the MIT License. You may not use this file except in compliance
# with the License. Use and/or modification of this code outside of 6.S191 must
# reference:
#
# © MIT 6.S191: Introduction to Deep Learning
# http://introtodeeplearning.com
#
###Output
_____no_output_____
###Markdown
Laboratory 2: Computer Vision. Part 1: MNIST Digit Classification. In the first portion of this lab, we will build and train a convolutional neural network (CNN) for classification of handwritten digits from the famous [MNIST](http://yann.lecun.com/exdb/mnist/) dataset. The MNIST dataset consists of 60,000 training images and 10,000 test images. Our classes are the digits 0-9. First, let's download the course repository, install dependencies, and import the relevant packages we'll need for this lab.
###Code
# Import Tensorflow 2.0
%tensorflow_version 2.x
import tensorflow as tf
!pip install mitdeeplearning
import mitdeeplearning as mdl
import matplotlib.pyplot as plt
import numpy as np
import random
from tqdm import tqdm
# Check that we are using a GPU, if not switch runtimes
# using Runtime > Change Runtime Type > GPU
assert len(tf.config.list_physical_devices('GPU')) > 0
###Output
_____no_output_____
###Markdown
1.1 MNIST dataset Let's download and load the dataset and display a few random samples from it:
###Code
mnist = tf.keras.datasets.mnist
(train_images, train_labels), (test_images, test_labels) = mnist.load_data()
train_images = (np.expand_dims(train_images, axis=-1)/255.).astype(np.float32)
train_labels = (train_labels).astype(np.int64)
test_images = (np.expand_dims(test_images, axis=-1)/255.).astype(np.float32)
test_labels = (test_labels).astype(np.int64)
###Output
_____no_output_____
###Markdown
Our training set is made up of 28x28 grayscale images of handwritten digits. Let's visualize what some of these images and their corresponding training labels look like.
###Code
plt.figure(figsize=(10,10))
random_inds = np.random.choice(60000,36)
for i in range(36):
plt.subplot(6,6,i+1)
plt.xticks([])
plt.yticks([])
plt.grid(False)
image_ind = random_inds[i]
plt.imshow(np.squeeze(train_images[image_ind]), cmap=plt.cm.binary)
plt.xlabel(train_labels[image_ind])
###Output
_____no_output_____
###Markdown
1.2 Neural Network for Handwritten Digit Classification. We'll first build a simple neural network consisting of two fully connected layers and apply this to the digit classification task. Our network will ultimately output a probability distribution over the 10 digit classes (0-9). This first architecture we will be building is depicted below: ![alt_text](https://raw.githubusercontent.com/aamini/introtodeeplearning/master/lab2/img/mnist_2layers_arch.png "CNN Architecture for MNIST Classification") Fully connected neural network architecture. To define the architecture of this first fully connected neural network, we'll once again use the Keras API and define the model using the [`Sequential`](https://www.tensorflow.org/api_docs/python/tf/keras/models/Sequential) class. Note how we first use a [`Flatten`](https://www.tensorflow.org/api_docs/python/tf/keras/layers/Flatten) layer, which flattens the input so that it can be fed into the model. In this next block, you'll define the fully connected layers of this simple network.
###Code
def build_fc_model():
fc_model = tf.keras.Sequential([
# First define a Flatten layer
tf.keras.layers.Flatten(),
# '''TODO: Define the activation function for the first fully connected (Dense) layer.'''
tf.keras.layers.Dense(128, activation= '''TODO'''),
# '''TODO: Define the second Dense layer to output the classification probabilities'''
'''TODO: Dense layer to output classification probabilities'''
])
return fc_model
model = build_fc_model()
###Output
_____no_output_____
###Markdown
As we progress through this next portion, you may find that you'll want to make changes to the architecture defined above. **Note that in order to update the model later on, you'll need to re-run the above cell to re-initialize the model.** Let's take a step back and think about the network we've just created. The first layer in this network, `tf.keras.layers.Flatten`, transforms the format of the images from a 2d-array (28 x 28 pixels) to a 1d-array of 28 * 28 = 784 pixels. You can think of this layer as unstacking rows of pixels in the image and lining them up. There are no learned parameters in this layer; it only reformats the data. After the pixels are flattened, the network consists of a sequence of two `tf.keras.layers.Dense` layers. These are fully-connected neural layers. The first `Dense` layer has 128 nodes (or neurons). The second (and last) layer (which you've defined!) should return an array of probability scores that sum to 1. Each node contains a score that indicates the probability that the current image belongs to one of the handwritten digit classes. That defines our fully connected model! Compile the model. Before training the model, we need to define a few more settings. These are added during the model's [`compile`](https://www.tensorflow.org/api_docs/python/tf/keras/models/Sequentialcompile) step: * *Loss function* — This defines how we measure how accurate the model is during training. As was covered in lecture, during training we want to minimize this function, which will "steer" the model in the right direction. * *Optimizer* — This defines how the model is updated based on the data it sees and its loss function. * *Metrics* — Here we can define metrics used to monitor the training and testing steps. In this example, we'll look at the *accuracy*, the fraction of the images that are correctly classified. We'll start out by using a stochastic gradient descent (SGD) optimizer initialized with a learning rate of 0.1. Since we are performing a categorical classification task, we'll want to use the [cross entropy loss](https://www.tensorflow.org/api_docs/python/tf/keras/metrics/sparse_categorical_crossentropy). You'll want to experiment with both the choice of optimizer and learning rate and evaluate how these affect the accuracy of the trained model.
###Code
'''TODO: Experiment with different optimizers and learning rates. How do these affect
the accuracy of the trained model? Which optimizers and/or learning rates yield
the best performance?'''
model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=1e-1),
loss='sparse_categorical_crossentropy',
metrics=['accuracy'])
###Output
_____no_output_____
###Markdown
Train the model. We're now ready to train our model, which will involve feeding the training data (`train_images` and `train_labels`) into the model, and then asking it to learn the associations between images and labels. We'll also need to define the batch size and the number of epochs, or iterations over the MNIST dataset, to use during training. In Lab 1, we saw how we can use `GradientTape` to optimize losses and train models with stochastic gradient descent. After defining the model settings in the `compile` step, we can also accomplish training by calling the [`fit`](https://www.tensorflow.org/api_docs/python/tf/keras/models/Sequentialfit) method on an instance of the `Model` class. We will use this to train our fully connected model.
###Code
# Define the batch size and the number of epochs to use during training
BATCH_SIZE = 64
EPOCHS = 5
model.fit(train_images, train_labels, batch_size=BATCH_SIZE, epochs=EPOCHS)
###Output
_____no_output_____
###Markdown
As the model trains, the loss and accuracy metrics are displayed. With five epochs and a learning rate of 0.1, this fully connected model should achieve an accuracy of approximately 0.97 (or 97%) on the training data. Evaluate accuracy on the test dataset. Now that we've trained the model, we can ask it to make predictions about a test set that it hasn't seen before. In this example, the `test_images` array comprises our test dataset. To evaluate accuracy, we can check to see if the model's predictions match the labels from the `test_labels` array. Use the [`evaluate`](https://www.tensorflow.org/api_docs/python/tf/keras/models/Sequentialevaluate) method to evaluate the model on the test dataset!
###Code
'''TODO: Use the evaluate method to test the model!'''
test_loss, test_acc = # TODO
print('Test accuracy:', test_acc)
###Output
_____no_output_____
###Markdown
You may observe that the accuracy on the test dataset is a little lower than the accuracy on the training dataset. This gap between training accuracy and test accuracy is an example of *overfitting*, when a machine learning model performs worse on new data than on its training data. What is the highest accuracy you can achieve with this first fully connected model? Since the handwritten digit classification task is pretty straightforward, you may be wondering how we can do better... ![Deeper...](https://i.kym-cdn.com/photos/images/newsfeed/000/534/153/f87.jpg) 1.3 Convolutional Neural Network (CNN) for handwritten digit classification. As we saw in lecture, convolutional neural networks (CNNs) are particularly well-suited for a variety of tasks in computer vision, and have achieved near-perfect accuracies on the MNIST dataset. We will now build a CNN composed of two convolutional layers and pooling layers, followed by two fully connected layers, and ultimately output a probability distribution over the 10 digit classes (0-9). The CNN we will be building is depicted below: ![alt_text](https://raw.githubusercontent.com/aamini/introtodeeplearning/master/lab2/img/convnet_fig.png "CNN Architecture for MNIST Classification") Define the CNN model. We'll use the same training and test datasets as before, and proceed similarly as our fully connected network to define and train our new CNN model. To do this we will explore two layers we have not encountered before: you can use [`keras.layers.Conv2D`](https://www.tensorflow.org/api_docs/python/tf/keras/layers/Conv2D) to define convolutional layers and [`keras.layers.MaxPool2D`](https://www.tensorflow.org/api_docs/python/tf/keras/layers/MaxPool2D) to define the pooling layers. Use the parameters shown in the network architecture above to define these layers and build the CNN model.
###Code
def build_cnn_model():
cnn_model = tf.keras.Sequential([
# TODO: Define the first convolutional layer
tf.keras.layers.Conv2D('''TODO'''),
# TODO: Define the first max pooling layer
tf.keras.layers.MaxPool2D('''TODO'''),
# TODO: Define the second convolutional layer
tf.keras.layers.Conv2D('''TODO'''),
# TODO: Define the second max pooling layer
tf.keras.layers.MaxPool2D('''TODO'''),
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(128, activation=tf.nn.relu),
# TODO: Define the last Dense layer to output the classification
# probabilities. Pay attention to the activation needed for a probability
# output
'''TODO: Dense layer to output classification probabilities'''
])
return cnn_model
cnn_model = build_cnn_model()
# Initialize the model by passing some data through
cnn_model.predict(train_images[[0]])
# Print the summary of the layers in the model.
print(cnn_model.summary())
###Output
_____no_output_____
###Markdown
Train and test the CNN model. Now, as before, we can define the loss function, optimizer, and metrics through the `compile` method. Compile the CNN model with an optimizer and learning rate of choice:
###Code
'''TODO: Define the compile operation with your optimizer and learning rate of choice'''
cnn_model.compile(optimizer='''TODO''', loss='''TODO''', metrics=['accuracy']) # TODO
###Output
_____no_output_____
###Markdown
As was the case with the fully connected model, we can train our CNN using the `fit` method via the Keras API.
###Code
'''TODO: Use model.fit to train the CNN model, with the same batch_size and number of epochs previously used.'''
cnn_model.fit('''TODO''')
###Output
_____no_output_____
###Markdown
Great! Now that we've trained the model, let's evaluate it on the test dataset using the [`evaluate`](https://www.tensorflow.org/api_docs/python/tf/keras/models/Sequentialevaluate) method:
###Code
'''TODO: Use the evaluate method to test the model!'''
test_loss, test_acc = # TODO
print('Test accuracy:', test_acc)
###Output
_____no_output_____
###Markdown
What is the highest accuracy you're able to achieve using the CNN model, and how does the accuracy of the CNN model compare to the accuracy of the simple fully connected network? What optimizers and learning rates seem to be optimal for training the CNN model? Make predictions with the CNN model. With the model trained, we can use it to make predictions about some images. The [`predict`](https://www.tensorflow.org/api_docs/python/tf/keras/models/Sequentialpredict) function call generates the output predictions given a set of input samples.
###Code
predictions = cnn_model.predict(test_images)
###Output
_____no_output_____
###Markdown
With this function call, the model has predicted the label for each image in the testing set. Let's take a look at the prediction for the first image in the test dataset:
###Code
predictions[0]
###Output
_____no_output_____
###Markdown
As you can see, a prediction is an array of 10 numbers. Recall that the output of our model is a probability distribution over the 10 digit classes. Thus, these numbers describe the model's "confidence" that the image corresponds to each of the 10 different digits. Let's look at the digit that has the highest confidence for the first image in the test dataset:
###Code
'''TODO: identify the digit with the highest confidence prediction for the first
image in the test dataset. '''
prediction = # TODO
print(prediction)
###Output
_____no_output_____
###Markdown
So, the model is most confident that this image is a "???". We can check the test label (remember, this is the true identity of the digit) to see if this prediction is correct:
###Code
print("Label of this digit is:", test_labels[0])
plt.imshow(test_images[0,:,:,0], cmap=plt.cm.binary)
###Output
_____no_output_____
###Markdown
It is! Let's visualize the classification results on the MNIST dataset. We will plot images from the test dataset along with their predicted label, as well as a histogram that provides the prediction probabilities for each of the digits:
###Code
#@title Change the slider to look at the model's predictions! { run: "auto" }
image_index = 79 #@param {type:"slider", min:0, max:100, step:1}
plt.subplot(1,2,1)
mdl.lab2.plot_image_prediction(image_index, predictions, test_labels, test_images)
plt.subplot(1,2,2)
mdl.lab2.plot_value_prediction(image_index, predictions, test_labels)
###Output
_____no_output_____
###Markdown
We can also plot several images along with their predictions, where correct prediction labels are blue and incorrect prediction labels are grey. The number gives the percent confidence (out of 100) for the predicted label. Note the model can be very confident in an incorrect prediction!
###Code
# Plots the first X test images, their predicted label, and the true label
# Color correct predictions in blue, incorrect predictions in red
num_rows = 5
num_cols = 4
num_images = num_rows*num_cols
plt.figure(figsize=(2*2*num_cols, 2*num_rows))
for i in range(num_images):
plt.subplot(num_rows, 2*num_cols, 2*i+1)
mdl.lab2.plot_image_prediction(i, predictions, test_labels, test_images)
plt.subplot(num_rows, 2*num_cols, 2*i+2)
mdl.lab2.plot_value_prediction(i, predictions, test_labels)
###Output
_____no_output_____
###Markdown
1.4 Training the model 2.0. Earlier in the lab, we used the [`fit`](https://www.tensorflow.org/api_docs/python/tf/keras/models/Sequentialfit) function call to train the model. This function is quite high-level and intuitive, which is really useful for simpler models. As you may be able to tell, this function abstracts away many details in the training call, and we have less control over the training process; that extra control could be useful in other contexts. As an alternative to this, we can use the [`tf.GradientTape`](https://www.tensorflow.org/api_docs/python/tf/GradientTape) class to record differentiation operations during training, and then call the [`tf.GradientTape.gradient`](https://www.tensorflow.org/api_docs/python/tf/GradientTapegradient) function to actually compute the gradients. You may recall seeing this in Lab 1 Part 1, but let's take another look at this here. We'll use this framework to train our `cnn_model` using stochastic gradient descent.
###Code
# Rebuild the CNN model
cnn_model = build_cnn_model()
batch_size = 12
loss_history = mdl.util.LossHistory(smoothing_factor=0.95) # to record the evolution of the loss
plotter = mdl.util.PeriodicPlotter(sec=2, xlabel='Iterations', ylabel='Loss', scale='semilogy')
optimizer = tf.keras.optimizers.SGD(learning_rate=1e-2) # define our optimizer
if hasattr(tqdm, '_instances'): tqdm._instances.clear() # clear if it exists
for idx in tqdm(range(0, train_images.shape[0], batch_size)):
# First grab a batch of training data and convert the input images to tensors
(images, labels) = (train_images[idx:idx+batch_size], train_labels[idx:idx+batch_size])
images = tf.convert_to_tensor(images, dtype=tf.float32)
# GradientTape to record differentiation operations
with tf.GradientTape() as tape:
#'''TODO: feed the images into the model and obtain the predictions'''
logits = # TODO
#'''TODO: compute the categorical cross entropy loss
loss_value = tf.keras.backend.sparse_categorical_crossentropy() # TODO
loss_history.append(loss_value.numpy().mean()) # append the loss to the loss_history record
plotter.plot(loss_history.get())
# Backpropagation
'''TODO: Use the tape to compute the gradient against all parameters in the CNN model.
Use cnn_model.trainable_variables to access these parameters.'''
grads = # TODO
optimizer.apply_gradients(zip(grads, cnn_model.trainable_variables))
###Output
_____no_output_____
###Markdown
Visit MIT Deep Learning Run in Google Colab View Source on GitHub Copyright Information
###Code
# Copyright 2021 MIT 6.S191 Introduction to Deep Learning. All Rights Reserved.
#
# Licensed under the MIT License. You may not use this file except in compliance
# with the License. Use and/or modification of this code outside of 6.S191 must
# reference:
#
# © MIT 6.S191: Introduction to Deep Learning
# http://introtodeeplearning.com
#
###Output
_____no_output_____
###Markdown
Laboratory 2: Computer Vision. Part 1: MNIST Digit Classification. In the first portion of this lab, we will build and train a convolutional neural network (CNN) for classification of handwritten digits from the famous [MNIST](http://yann.lecun.com/exdb/mnist/) dataset. The MNIST dataset consists of 60,000 training images and 10,000 test images. Our classes are the digits 0-9. First, let's download the course repository, install dependencies, and import the relevant packages we'll need for this lab.
###Code
# Import Tensorflow 2.0
%tensorflow_version 2.x
import tensorflow as tf
!pip install mitdeeplearning
import mitdeeplearning as mdl
import matplotlib.pyplot as plt
import numpy as np
import random
from tqdm import tqdm
# Check that we are using a GPU, if not switch runtimes
# using Runtime > Change Runtime Type > GPU
assert len(tf.config.list_physical_devices('GPU')) > 0
###Output
Collecting mitdeeplearning
[?25l Downloading https://files.pythonhosted.org/packages/9d/ad/650eb53c0d9d1213536fe94bc150f89b564ff5ee784bd662272584bb091b/mitdeeplearning-0.2.0.tar.gz (2.1MB)
[K |████████████████████████████████| 2.1MB 19.4MB/s
[?25hRequirement already satisfied: numpy in /usr/local/lib/python3.7/dist-packages (from mitdeeplearning) (1.19.5)
Requirement already satisfied: regex in /usr/local/lib/python3.7/dist-packages (from mitdeeplearning) (2019.12.20)
Requirement already satisfied: tqdm in /usr/local/lib/python3.7/dist-packages (from mitdeeplearning) (4.41.1)
Requirement already satisfied: gym in /usr/local/lib/python3.7/dist-packages (from mitdeeplearning) (0.17.3)
Requirement already satisfied: scipy in /usr/local/lib/python3.7/dist-packages (from gym->mitdeeplearning) (1.4.1)
Requirement already satisfied: cloudpickle<1.7.0,>=1.2.0 in /usr/local/lib/python3.7/dist-packages (from gym->mitdeeplearning) (1.3.0)
Requirement already satisfied: pyglet<=1.5.0,>=1.4.0 in /usr/local/lib/python3.7/dist-packages (from gym->mitdeeplearning) (1.5.0)
Requirement already satisfied: future in /usr/local/lib/python3.7/dist-packages (from pyglet<=1.5.0,>=1.4.0->gym->mitdeeplearning) (0.16.0)
Building wheels for collected packages: mitdeeplearning
Building wheel for mitdeeplearning (setup.py) ... [?25l[?25hdone
Created wheel for mitdeeplearning: filename=mitdeeplearning-0.2.0-cp37-none-any.whl size=2115442 sha256=c5e28807a40d465d00e86c5936c4201c3e50a22984dced2e5d3f6887366bf8ce
Stored in directory: /root/.cache/pip/wheels/af/dc/2a/5c3633135e7e4ef4fd31463cfa1942cb1bae7486ab94e7a2ad
Successfully built mitdeeplearning
Installing collected packages: mitdeeplearning
Successfully installed mitdeeplearning-0.2.0
###Markdown
1.1 MNIST dataset Let's download and load the dataset and display a few random samples from it:
###Code
mnist = tf.keras.datasets.mnist
(train_images, train_labels), (test_images, test_labels) = mnist.load_data()
train_images = (np.expand_dims(train_images, axis=-1)/255.).astype(np.float32)
train_labels = (train_labels).astype(np.int64)
test_images = (np.expand_dims(test_images, axis=-1)/255.).astype(np.float32)
test_labels = (test_labels).astype(np.int64)
###Output
Downloading data from https://storage.googleapis.com/tensorflow/tf-keras-datasets/mnist.npz
11493376/11490434 [==============================] - 0s 0us/step
###Markdown
Our training set is made up of 28x28 grayscale images of handwritten digits. Let's visualize what some of these images and their corresponding training labels look like.
###Code
plt.figure(figsize=(10,10))
random_inds = np.random.choice(60000,36)
for i in range(36):
plt.subplot(6,6,i+1)
plt.xticks([])
plt.yticks([])
plt.grid(False)
image_ind = random_inds[i]
plt.imshow(np.squeeze(train_images[image_ind]), cmap=plt.cm.binary)
plt.xlabel(train_labels[image_ind])
###Output
_____no_output_____
###Markdown
1.2 Neural Network for Handwritten Digit Classification. We'll first build a simple neural network consisting of two fully connected layers and apply this to the digit classification task. Our network will ultimately output a probability distribution over the 10 digit classes (0-9). This first architecture we will be building is depicted below: ![alt_text](https://raw.githubusercontent.com/aamini/introtodeeplearning/master/lab2/img/mnist_2layers_arch.png "CNN Architecture for MNIST Classification") Fully connected neural network architecture. To define the architecture of this first fully connected neural network, we'll once again use the Keras API and define the model using the [`Sequential`](https://www.tensorflow.org/api_docs/python/tf/keras/models/Sequential) class. Note how we first use a [`Flatten`](https://www.tensorflow.org/api_docs/python/tf/keras/layers/Flatten) layer, which flattens the input so that it can be fed into the model. In this next block, you'll define the fully connected layers of this simple network.
###Code
def build_fc_model():
fc_model = tf.keras.Sequential([
# First define a Flatten layer
tf.keras.layers.Flatten(),
# '''TODO: Define the activation function for the first fully connected (Dense) layer.'''
tf.keras.layers.Dense(128, activation= 'relu'),
# '''TODO: Define the second Dense layer to output the classification probabilities'''
#'''TODO: Dense layer to output classification probabilities'''
tf.keras.layers.Dense(10, activation='softmax')
])
return fc_model
model = build_fc_model()
###Output
_____no_output_____
###Markdown
As we progress through this next portion, you may find that you'll want to make changes to the architecture defined above. **Note that in order to update the model later on, you'll need to re-run the above cell to re-initialize the model.** Let's take a step back and think about the network we've just created. The first layer in this network, `tf.keras.layers.Flatten`, transforms the format of the images from a 2d-array (28 x 28 pixels) to a 1d-array of 28 * 28 = 784 pixels. You can think of this layer as unstacking rows of pixels in the image and lining them up. There are no learned parameters in this layer; it only reformats the data. After the pixels are flattened, the network consists of a sequence of two `tf.keras.layers.Dense` layers. These are fully-connected neural layers. The first `Dense` layer has 128 nodes (or neurons). The second (and last) layer (which you've defined!) should return an array of probability scores that sum to 1. Each node contains a score that indicates the probability that the current image belongs to one of the handwritten digit classes. That defines our fully connected model! Compile the model. Before training the model, we need to define a few more settings. These are added during the model's [`compile`](https://www.tensorflow.org/api_docs/python/tf/keras/models/Sequentialcompile) step: * *Loss function* — This defines how we measure how accurate the model is during training. As was covered in lecture, during training we want to minimize this function, which will "steer" the model in the right direction. * *Optimizer* — This defines how the model is updated based on the data it sees and its loss function. * *Metrics* — Here we can define metrics used to monitor the training and testing steps. In this example, we'll look at the *accuracy*, the fraction of the images that are correctly classified. We'll start out by using a stochastic gradient descent (SGD) optimizer initialized with a learning rate of 0.1. Since we are performing a categorical classification task, we'll want to use the [cross entropy loss](https://www.tensorflow.org/api_docs/python/tf/keras/metrics/sparse_categorical_crossentropy). You'll want to experiment with both the choice of optimizer and learning rate and evaluate how these affect the accuracy of the trained model.
###Code
'''TODO: Experiment with different optimizers and learning rates. How do these affect
the accuracy of the trained model? Which optimizers and/or learning rates yield
the best performance?'''
model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=1e-1),
loss='sparse_categorical_crossentropy',
metrics=['accuracy'])
###Output
_____no_output_____
###Markdown
Train the model. We're now ready to train our model, which will involve feeding the training data (`train_images` and `train_labels`) into the model, and then asking it to learn the associations between images and labels. We'll also need to define the batch size and the number of epochs, or iterations over the MNIST dataset, to use during training. In Lab 1, we saw how we can use `GradientTape` to optimize losses and train models with stochastic gradient descent. After defining the model settings in the `compile` step, we can also accomplish training by calling the [`fit`](https://www.tensorflow.org/api_docs/python/tf/keras/models/Sequentialfit) method on an instance of the `Model` class. We will use this to train our fully connected model.
###Code
# Define the batch size and the number of epochs to use during training
BATCH_SIZE = 64
EPOCHS = 5
model.fit(train_images, train_labels, batch_size=BATCH_SIZE, epochs=EPOCHS)
###Output
Epoch 1/5
938/938 [==============================] - 2s 2ms/step - loss: 0.0837 - accuracy: 0.9766
Epoch 2/5
938/938 [==============================] - 2s 2ms/step - loss: 0.0746 - accuracy: 0.9791
Epoch 3/5
938/938 [==============================] - 2s 2ms/step - loss: 0.0673 - accuracy: 0.9813
Epoch 4/5
938/938 [==============================] - 2s 2ms/step - loss: 0.0608 - accuracy: 0.9835
Epoch 5/5
938/938 [==============================] - 2s 2ms/step - loss: 0.0522 - accuracy: 0.9852
###Markdown
As the model trains, the loss and accuracy metrics are displayed. With five epochs and a learning rate of 0.1, this fully connected model should achieve an accuracy of approximately 0.97 (or 97%) on the training data. Evaluate accuracy on the test dataset. Now that we've trained the model, we can ask it to make predictions about a test set that it hasn't seen before. In this example, the `test_images` array comprises our test dataset. To evaluate accuracy, we can check to see if the model's predictions match the labels from the `test_labels` array. Use the [`evaluate`](https://www.tensorflow.org/api_docs/python/tf/keras/models/Sequentialevaluate) method to evaluate the model on the test dataset!
###Code
'''TODO: Use the evaluate method to test the model!'''
test_loss, test_acc = model.evaluate(test_images, test_labels)
print('Test accuracy:', test_acc)
###Output
313/313 [==============================] - 1s 2ms/step - loss: 0.0782 - accuracy: 0.9771
Test accuracy: 0.9771000146865845
###Markdown
You may observe that the accuracy on the test dataset is a little lower than the accuracy on the training dataset. This gap between training accuracy and test accuracy is an example of *overfitting*, when a machine learning model performs worse on new data than on its training data. What is the highest accuracy you can achieve with this first fully connected model? Since the handwritten digit classification task is pretty straightforward, you may be wondering how we can do better... ![Deeper...](https://i.kym-cdn.com/photos/images/newsfeed/000/534/153/f87.jpg) 1.3 Convolutional Neural Network (CNN) for handwritten digit classification. As we saw in lecture, convolutional neural networks (CNNs) are particularly well-suited for a variety of tasks in computer vision, and have achieved near-perfect accuracies on the MNIST dataset. We will now build a CNN composed of two convolutional layers and pooling layers, followed by two fully connected layers, and ultimately output a probability distribution over the 10 digit classes (0-9). The CNN we will be building is depicted below: ![alt_text](https://raw.githubusercontent.com/aamini/introtodeeplearning/master/lab2/img/convnet_fig.png "CNN Architecture for MNIST Classification") Define the CNN model. We'll use the same training and test datasets as before, and proceed similarly as our fully connected network to define and train our new CNN model. To do this we will explore two layers we have not encountered before: you can use [`keras.layers.Conv2D`](https://www.tensorflow.org/api_docs/python/tf/keras/layers/Conv2D) to define convolutional layers and [`keras.layers.MaxPool2D`](https://www.tensorflow.org/api_docs/python/tf/keras/layers/MaxPool2D) to define the pooling layers. Use the parameters shown in the network architecture above to define these layers and build the CNN model.
###Code
def build_cnn_model():
cnn_model = tf.keras.Sequential([
# TODO: Define the first convolutional layer
tf.keras.layers.Conv2D(24, (3,3), activation='relu'),
# TODO: Define the first max pooling layer
tf.keras.layers.MaxPool2D(),
# TODO: Define the second convolutional layer
tf.keras.layers.Conv2D(24, (3,3), activation='relu'),
# TODO: Define the second max pooling layer
tf.keras.layers.MaxPool2D(),
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(128, activation=tf.nn.relu),
# TODO: Define the last Dense layer to output the classification
# probabilities. Pay attention to the activation needed for a probability
# output
#'''TODO: Dense layer to output classification probabilities'''
tf.keras.layers.Dense(10, activation='softmax')
])
return cnn_model
cnn_model = build_cnn_model()
# Initialize the model by passing some data through
cnn_model.predict(train_images[[0]])
# Print the summary of the layers in the model.
print(cnn_model.summary())
###Output
Model: "sequential_1"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
conv2d (Conv2D) (None, 26, 26, 24) 240
_________________________________________________________________
max_pooling2d (MaxPooling2D) (None, 13, 13, 24) 0
_________________________________________________________________
conv2d_1 (Conv2D) (None, 11, 11, 24) 5208
_________________________________________________________________
max_pooling2d_1 (MaxPooling2 (None, 5, 5, 24) 0
_________________________________________________________________
flatten_1 (Flatten) (None, 600) 0
_________________________________________________________________
dense_2 (Dense) (None, 128) 76928
_________________________________________________________________
dense_3 (Dense) (None, 10) 1290
=================================================================
Total params: 83,666
Trainable params: 83,666
Non-trainable params: 0
_________________________________________________________________
None
###Markdown
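Before compiling, it is worth confirming that the parameter counts in the summary above follow from the layer shapes. A quick arithmetic check (a sketch based only on the printed summary):
###Code
# Conv2D params = kernel_h * kernel_w * in_channels * filters + filters (biases)
print(3 * 3 * 1 * 24 + 24)     # conv2d: 240
print(3 * 3 * 24 * 24 + 24)    # conv2d_1: 5208
# Dense params = inputs * units + units (biases)
print(5 * 5 * 24 * 128 + 128)  # dense_2: 76928 (5x5x24 feature map flattened to 600)
print(128 * 10 + 10)           # dense_3: 1290
###Output
_____no_output_____
###Markdown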
Train and test the CNN modelNow, as before, we can define the loss function, optimizer, and metrics through the `compile` method. Compile the CNN model with an optimizer and learning rate of choice:
###Code
'''TODO: Define the compile operation with your optimizer and learning rate of choice'''
cnn_model.compile(tf.keras.optimizers.Adam(learning_rate=1e-3), loss='sparse_categorical_crossentropy', metrics=['accuracy']) # TODO
###Output
_____no_output_____
###Markdown
As was the case with the fully connected model, we can train our CNN using the `fit` method via the Keras API.
###Code
'''TODO: Use model.fit to train the CNN model, with the same batch_size and number of epochs previously used.'''
cnn_model.fit(train_images, train_labels, batch_size=BATCH_SIZE, epochs=EPOCHS)
###Output
Epoch 1/5
938/938 [==============================] - 3s 3ms/step - loss: 0.0386 - accuracy: 0.9883
Epoch 2/5
938/938 [==============================] - 3s 3ms/step - loss: 0.0217 - accuracy: 0.9934
Epoch 3/5
938/938 [==============================] - 3s 3ms/step - loss: 0.0126 - accuracy: 0.9961
Epoch 4/5
938/938 [==============================] - 3s 3ms/step - loss: 0.0104 - accuracy: 0.9970
Epoch 5/5
938/938 [==============================] - 3s 3ms/step - loss: 0.0079 - accuracy: 0.9975
###Markdown
Great! Now that we've trained the model, let's evaluate it on the test dataset using the [`evaluate`](https://www.tensorflow.org/api_docs/python/tf/keras/models/Sequentialevaluate) method:
###Code
'''TODO: Use the evaluate method to test the model!'''
test_loss, test_acc = cnn_model.evaluate(test_images, test_labels) # TODO
print('Test accuracy:', test_acc)
###Output
313/313 [==============================] - 1s 2ms/step - loss: 0.0782 - accuracy: 0.9771
Test accuracy: 0.9771000146865845
###Markdown
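Before pushing the accuracy higher, it helps to have a systematic way to compare training settings. Below is a minimal sketch of a sweep over optimizers and learning rates; the candidate values are illustrative assumptions rather than prescribed settings, and the loop reuses `build_cnn_model` and the data defined above.
###Code
# Minimal hyperparameter sweep sketch (candidate optimizers and rates are assumptions)
for opt_cls in [tf.keras.optimizers.SGD, tf.keras.optimizers.Adam]:
    for lr in [1e-1, 1e-2, 1e-3]:
        m = build_cnn_model()
        m.compile(optimizer=opt_cls(learning_rate=lr),
                  loss='sparse_categorical_crossentropy',
                  metrics=['accuracy'])
        m.fit(train_images, train_labels, batch_size=BATCH_SIZE,
              epochs=EPOCHS, verbose=0)
        _, acc = m.evaluate(test_images, test_labels, verbose=0)
        print(opt_cls.__name__, 'lr =', lr, 'test accuracy:', acc)
###Output
_____no_output_____
###Markdown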
What is the highest accuracy you're able to achieve using the CNN model, and how does the accuracy of the CNN model compare to the accuracy of the simple fully connected network? What optimizers and learning rates seem to be optimal for training the CNN model? Make predictions with the CNN modelWith the model trained, we can use it to make predictions about some images. The [`predict`](https://www.tensorflow.org/api_docs/python/tf/keras/models/Sequentialpredict) function call generates the output predictions given a set of input samples.
###Code
predictions = cnn_model.predict(test_images)
###Output
_____no_output_____
###Markdown
With this function call, the model has predicted the label for each image in the testing set. Let's take a look at the prediction for the first image in the test dataset:
###Code
predictions[0]
###Output
_____no_output_____
###Markdown
As you can see, a prediction is an array of 10 numbers. Recall that the output of our model is a probability distribution over the 10 digit classes. Thus, these numbers describe the model's "confidence" that the image corresponds to each of the 10 different digits. Let's look at the digit that has the highest confidence for the first image in the test dataset:
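Since the final layer is a softmax, each prediction vector should sum to (numerically) 1. A quick sanity check, as a sketch using the `predictions` computed above:
###Code
# Softmax outputs should sum to ~1; also peek at the three most likely digits
print(np.sum(predictions[0]))
print(np.argsort(predictions[0])[::-1][:3])  # top-3 digit classes by confidence
###Output
_____no_output_____
###Markdown
Now let's extract the single most confident digit: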
###Code
'''TODO: identify the digit with the highest confidence prediction for the first
image in the test dataset. '''
prediction = np.argmax(predictions[0]) # TODO
print(prediction)
###Output
7
###Markdown
So, the model is most confident that this image is a "???". We can check the test label (remember, this is the true identity of the digit) to see if this prediction is correct:
###Code
print("Label of this digit is:", test_labels[0])
plt.imshow(test_images[0,:,:,0], cmap=plt.cm.binary)
###Output
Label of this digit is: 7
###Markdown
It is! Let's visualize the classification results on the MNIST dataset. We will plot images from the test dataset along with their predicted label, as well as a histogram that provides the prediction probabilities for each of the digits:
###Code
#@title Change the slider to look at the model's predictions! { run: "auto" }
image_index = 33 #@param {type:"slider", min:0, max:100, step:1}
plt.subplot(1,2,1)
mdl.lab2.plot_image_prediction(image_index, predictions, test_labels, test_images)
plt.subplot(1,2,2)
mdl.lab2.plot_value_prediction(image_index, predictions, test_labels)
###Output
_____no_output_____
###Markdown
We can also plot several images along with their predictions, where correct prediction labels are blue and incorrect prediction labels are red. The number gives the percent confidence (out of 100) for the predicted label. Note the model can be very confident in an incorrect prediction!
###Code
# Plots the first X test images, their predicted label, and the true label
# Color correct predictions in blue, incorrect predictions in red
num_rows = 5
num_cols = 4
num_images = num_rows*num_cols
plt.figure(figsize=(2*2*num_cols, 2*num_rows))
for i in range(num_images):
plt.subplot(num_rows, 2*num_cols, 2*i+1)
mdl.lab2.plot_image_prediction(i, predictions, test_labels, test_images)
plt.subplot(num_rows, 2*num_cols, 2*i+2)
mdl.lab2.plot_value_prediction(i, predictions, test_labels)
###Output
_____no_output_____
###Markdown
1.4 Training the model 2.0Earlier in the lab, we used the [`fit`](https://www.tensorflow.org/api_docs/python/tf/keras/models/Sequentialfit) function call to train the model. This function is quite high-level and intuitive, which is really useful for simpler models. As you may be able to tell, this function abstracts away many details in the training call, and we have less control over the training process, even though that finer control can be useful in other contexts. As an alternative to this, we can use the [`tf.GradientTape`](https://www.tensorflow.org/api_docs/python/tf/GradientTape) class to record differentiation operations during training, and then call the [`tf.GradientTape.gradient`](https://www.tensorflow.org/api_docs/python/tf/GradientTapegradient) function to actually compute the gradients. You may recall seeing this in Lab 1 Part 1, but let's take another look at this here.We'll use this framework to train our `cnn_model` using stochastic gradient descent.
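As a warm-up, here is a minimal sketch of `tf.GradientTape` on a scalar function, independent of the model: for y = x^2 at x = 3, the gradient dy/dx should be 2x = 6.
###Code
# Minimal GradientTape sketch: differentiate y = x^2 at x = 3
x = tf.Variable(3.0)
with tf.GradientTape() as tape:
    y = x * x
print(tape.gradient(y, x).numpy())  # expected: 6.0
###Output
_____no_output_____
###Markdown
Now we apply the same mechanism to train the CNN: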
###Code
# Rebuild the CNN model
cnn_model = build_cnn_model()
batch_size = 12
loss_history = mdl.util.LossHistory(smoothing_factor=0.95) # to record the evolution of the loss
plotter = mdl.util.PeriodicPlotter(sec=2, xlabel='Iterations', ylabel='Loss', scale='semilogy')
optimizer = tf.keras.optimizers.SGD(learning_rate=1e-2) # define our optimizer
if hasattr(tqdm, '_instances'): tqdm._instances.clear() # clear if it exists
for idx in tqdm(range(0, train_images.shape[0], batch_size)):
# First grab a batch of training data and convert the input images to tensors
(images, labels) = (train_images[idx:idx+batch_size], train_labels[idx:idx+batch_size])
images = tf.convert_to_tensor(images, dtype=tf.float32)
# GradientTape to record differentiation operations
with tf.GradientTape() as tape:
#'''TODO: feed the images into the model and obtain the predictions'''
logits = cnn_model(images) # TODO
#'''TODO: compute the categorical cross entropy loss
loss_value = tf.keras.backend.sparse_categorical_crossentropy(labels, logits) # TODO
loss_history.append(loss_value.numpy().mean()) # append the loss to the loss_history record
plotter.plot(loss_history.get())
# Backpropagation
'''TODO: Use the tape to compute the gradient against all parameters in the CNN model.
Use cnn_model.trainable_variables to access these parameters.'''
grads = tape.gradient(loss_value, cnn_model.trainable_variables) # TODO
optimizer.apply_gradients(zip(grads, cnn_model.trainable_variables))
###Output
_____no_output_____
###Markdown
Copyright Information
###Code
# Copyright 2020 MIT 6.S191 Introduction to Deep Learning. All Rights Reserved.
#
# Licensed under the MIT License. You may not use this file except in compliance
# with the License. Use and/or modification of this code outside of 6.S191 must
# reference:
#
# © MIT 6.S191: Introduction to Deep Learning
# http://introtodeeplearning.com
#
###Output
_____no_output_____
###Markdown
Laboratory 2: Computer Vision Part 1: MNIST Digit ClassificationIn the first portion of this lab, we will build and train a convolutional neural network (CNN) for classification of handwritten digits from the famous [MNIST](http://yann.lecun.com/exdb/mnist/) dataset. The MNIST dataset consists of 60,000 training images and 10,000 test images. Our classes are the digits 0-9.First, let's download the course repository, install dependencies, and import the relevant packages we'll need for this lab.
###Code
# Import Tensorflow 2.0
# %tensorflow_version 2.x
import tensorflow as tf
# !pip install mitdeeplearning
import mitdeeplearning as mdl
import matplotlib.pyplot as plt
import numpy as np
import random
from tqdm import tqdm
# Check that we are using a GPU, if not switch runtimes
# using Runtime > Change Runtime Type > GPU
assert len(tf.config.list_physical_devices('GPU')) > 0
###Output
_____no_output_____
###Markdown
1.1 MNIST dataset Let's download and load the dataset and display a few random samples from it:
###Code
mnist = tf.keras.datasets.mnist
(train_images, train_labels), (test_images, test_labels) = mnist.load_data()
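# Normalize pixel values from [0, 255] to [0, 1] and add a trailing channel
# dimension so each image has shape (28, 28, 1), as expected by Conv2D layers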
train_images = (np.expand_dims(train_images, axis=-1)/255.).astype(np.float32)
train_labels = (train_labels).astype(np.int64)
test_images = (np.expand_dims(test_images, axis=-1)/255.).astype(np.float32)
test_labels = (test_labels).astype(np.int64)
###Output
_____no_output_____
###Markdown
Our training set is made up of 28x28 grayscale images of handwritten digits. Let's visualize what some of these images and their corresponding training labels look like.
###Code
plt.figure(figsize=(10,10))
random_inds = np.random.choice(60000,36)
for i in range(36):
plt.subplot(6,6,i+1)
plt.xticks([])
plt.yticks([])
plt.grid(False)
image_ind = random_inds[i]
plt.imshow(np.squeeze(train_images[image_ind]), cmap=plt.cm.binary)
plt.xlabel(train_labels[image_ind])
###Output
_____no_output_____
###Markdown
1.2 Neural Network for Handwritten Digit ClassificationWe'll first build a simple neural network consisting of two fully connected layers and apply this to the digit classification task. Our network will ultimately output a probability distribution over the 10 digit classes (0-9). This first architecture we will be building is depicted below:![alt_text](https://raw.githubusercontent.com/aamini/introtodeeplearning/master/lab2/img/mnist_2layers_arch.png "CNN Architecture for MNIST Classification") Fully connected neural network architectureTo define the architecture of this first fully connected neural network, we'll once again use the Keras API and define the model using the [`Sequential`](https://www.tensorflow.org/api_docs/python/tf/keras/models/Sequential) class. Note how we first use a [`Flatten`](https://www.tensorflow.org/api_docs/python/tf/keras/layers/Flatten) layer, which flattens the input so that it can be fed into the model. In this next block, you'll define the fully connected layers of this simple network.
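To see concretely what `Flatten` does, here is a quick shape check (a sketch using the training data loaded above):
###Code
# Flatten turns each (28, 28, 1) image into a single vector of 784 features
flat = tf.keras.layers.Flatten()
print(train_images[:1].shape)        # (1, 28, 28, 1)
print(flat(train_images[:1]).shape)  # (1, 784)
###Output
_____no_output_____
###Markdown
With the input shape in mind, let's define the model: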
###Code
def build_fc_model():
fc_model = tf.keras.Sequential([
# First define a Flatten layer
tf.keras.layers.Flatten(),
# '''TODO: Define the activation function for the first fully connected (Dense) layer.'''
tf.keras.layers.Dense(128, activation='relu'),
# '''TODO: Define the second Dense layer to output the classification probabilities'''
tf.keras.layers.Dense(10, activation='softmax')
])
return fc_model
model = build_fc_model()
###Output
_____no_output_____
###Markdown
As we progress through this next portion, you may find that you'll want to make changes to the architecture defined above. **Note that in order to update the model later on, you'll need to re-run the above cell to re-initialize the model.** Let's take a step back and think about the network we've just created. The first layer in this network, `tf.keras.layers.Flatten`, transforms the format of the images from a 2d-array (28 x 28 pixels) to a 1d-array of 28 * 28 = 784 pixels. You can think of this layer as unstacking rows of pixels in the image and lining them up. There are no learned parameters in this layer; it only reformats the data.After the pixels are flattened, the network consists of a sequence of two `tf.keras.layers.Dense` layers. These are fully-connected neural layers. The first `Dense` layer has 128 nodes (or neurons). The second (and last) layer (which you've defined!) should return an array of probability scores that sum to 1. Each node contains a score that indicates the probability that the current image belongs to one of the handwritten digit classes.That defines our fully connected model! Compile the modelBefore training the model, we need to define a few more settings. These are added during the model's [`compile`](https://www.tensorflow.org/api_docs/python/tf/keras/models/Sequentialcompile) step:* *Loss function* — This defines how we measure how accurate the model is during training. As was covered in lecture, during training we want to minimize this function, which will "steer" the model in the right direction.* *Optimizer* — This defines how the model is updated based on the data it sees and its loss function.* *Metrics* — Here we can define metrics used to monitor the training and testing steps. In this example, we'll look at the *accuracy*, the fraction of the images that are correctly classified.We'll start out by using a stochastic gradient descent (SGD) optimizer initialized with a learning rate of 0.1. Since we are performing a categorical classification task, we'll want to use the [cross entropy loss](https://www.tensorflow.org/api_docs/python/tf/keras/metrics/sparse_categorical_crossentropy).You'll want to experiment with both the choice of optimizer and learning rate and evaluate how these affect the accuracy of the trained model.
###Code
'''TODO: Experiment with different optimizers and learning rates. How do these affect
the accuracy of the trained model? Which optimizers and/or learning rates yield
the best performance?'''
model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=1e-1),
loss='sparse_categorical_crossentropy',
metrics=['accuracy'])
###Output
_____no_output_____
###Markdown
Train the modelWe're now ready to train our model, which will involve feeding the training data (`train_images` and `train_labels`) into the model, and then asking it to learn the associations between images and labels. We'll also need to define the batch size and the number of epochs, or iterations over the MNIST dataset, to use during training. In Lab 1, we saw how we can use `GradientTape` to optimize losses and train models with stochastic gradient descent. After defining the model settings in the `compile` step, we can also accomplish training by calling the [`fit`](https://www.tensorflow.org/api_docs/python/tf/keras/models/Sequentialfit) method on an instance of the `Model` class. We will use this to train our fully connected model.
###Code
# Define the batch size and the number of epochs to use during training
BATCH_SIZE = 64
EPOCHS = 5
model.fit(train_images, train_labels, batch_size=BATCH_SIZE, epochs=EPOCHS)
###Output
Train on 60000 samples
Epoch 1/5
60000/60000 [==============================] - 3s 49us/sample - loss: 0.3699 - accuracy: 0.8979
Epoch 2/5
60000/60000 [==============================] - 2s 40us/sample - loss: 0.1958 - accuracy: 0.9436
Epoch 3/5
60000/60000 [==============================] - 2s 40us/sample - loss: 0.1451 - accuracy: 0.9590
Epoch 4/5
###Markdown
As the model trains, the loss and accuracy metrics are displayed. With five epochs and a learning rate of 0.1, this fully connected model should achieve an accuracy of approximately 0.97 (or 97%) on the training data. Evaluate accuracy on the test datasetNow that we've trained the model, we can ask it to make predictions about a test set that it hasn't seen before. In this example, the `test_images` array comprises our test dataset. To evaluate accuracy, we can check to see if the model's predictions match the labels from the `test_labels` array. Use the [`evaluate`](https://www.tensorflow.org/api_docs/python/tf/keras/models/Sequentialevaluate) method to evaluate the model on the test dataset!
###Code
test_loss, test_acc = model.evaluate(x=test_images, y=test_labels, verbose=2)
print('Test accuracy:', test_acc)
###Output
10000/10000 - 1s - loss: 0.1047 - accuracy: 0.9698
Test accuracy: 0.9698
###Markdown
You may observe that the accuracy on the test dataset is a little lower than the accuracy on the training dataset. This gap between training accuracy and test accuracy is an example of *overfitting*, when a machine learning model performs worse on new data than on its training data. What is the highest accuracy you can achieve with this first fully connected model? Since the handwritten digit classification task is pretty straightforward, you may be wondering how we can do better...![Deeper...](https://i.kym-cdn.com/photos/images/newsfeed/000/534/153/f87.jpg) 1.3 Convolutional Neural Network (CNN) for handwritten digit classification As we saw in lecture, convolutional neural networks (CNNs) are particularly well-suited for a variety of tasks in computer vision, and have achieved near-perfect accuracies on the MNIST dataset. We will now build a CNN composed of two convolutional layers and pooling layers, followed by two fully connected layers, and ultimately output a probability distribution over the 10 digit classes (0-9). The CNN we will be building is depicted below:![alt_text](https://raw.githubusercontent.com/aamini/introtodeeplearning/master/lab2/img/convnet_fig.png "CNN Architecture for MNIST Classification") Define the CNN modelWe'll use the same training and test datasets as before, and proceed similarly to our fully connected network to define and train our new CNN model. To do this, we will explore two layers we have not encountered before: you can use [`keras.layers.Conv2D`](https://www.tensorflow.org/api_docs/python/tf/keras/layers/Conv2D) to define convolutional layers and [`keras.layers.MaxPool2D`](https://www.tensorflow.org/api_docs/python/tf/keras/layers/MaxPool2D) to define the pooling layers. Use the parameters shown in the network architecture above to define these layers and build the CNN model.
###Code
def build_cnn_model():
cnn_model = tf.keras.Sequential([
# TODO: Define the first convolutional layer
tf.keras.layers.Conv2D(filters=24, kernel_size=(3, 3), strides=(1, 1), activation='relu'),
# TODO: Define the first max pooling layer
tf.keras.layers.MaxPool2D(pool_size=(2, 2)),
# TODO: Define the second convolutional layer
tf.keras.layers.Conv2D(filters=36, kernel_size=(3, 3), strides=(1, 1), activation='relu'),
# TODO: Define the second max pooling layer
tf.keras.layers.MaxPool2D(pool_size=(2, 2)),
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(128, activation=tf.nn.relu),
# TODO: Define the last Dense layer to output the classification
# probabilities. Pay attention to the activation needed for a probability
# output
tf.keras.layers.Dense(10, activation='softmax')
])
return cnn_model
cnn_model = build_cnn_model()
# Initialize the model by passing some data through
cnn_model.predict(train_images[[0]])
# Print the summary of the layers in the model.
print(cnn_model.summary())
###Output
Model: "sequential_1"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
conv2d_2 (Conv2D) multiple 240
_________________________________________________________________
max_pooling2d_2 (MaxPooling2 multiple 0
_________________________________________________________________
conv2d_3 (Conv2D) multiple 7812
_________________________________________________________________
max_pooling2d_3 (MaxPooling2 multiple 0
_________________________________________________________________
flatten_2 (Flatten) multiple 0
_________________________________________________________________
dense_3 (Dense) multiple 115328
_________________________________________________________________
dense_4 (Dense) multiple 1290
=================================================================
Total params: 124,670
Trainable params: 124,670
Non-trainable params: 0
_________________________________________________________________
None
###Markdown
Train and test the CNN modelNow, as before, we can define the loss function, optimizer, and metrics through the `compile` method. Compile the CNN model with an optimizer and learning rate of choice:
###Code
'''TODO: Define the compile operation with your optimizer and learning rate of choice'''
opt = tf.keras.optimizers.Adam(learning_rate=0.005)
cnn_model.compile(optimizer=opt,
loss='sparse_categorical_crossentropy',
metrics=['accuracy']) # TODO
###Output
_____no_output_____
###Markdown
As was the case with the fully connected model, we can train our CNN using the `fit` method via the Keras API.
###Code
'''TODO: Use model.fit to train the CNN model, with the same batch_size and number of epochs previously used.'''
BATCH_SIZE = 64
EPOCHS = 5
cnn_model.fit(train_images, train_labels, batch_size=BATCH_SIZE, epochs=EPOCHS)
###Output
Train on 60000 samples
Epoch 1/5
60000/60000 [==============================] - 4s 68us/sample - loss: 0.0507 - accuracy: 0.9924
Epoch 2/5
60000/60000 [==============================] - 4s 65us/sample - loss: 0.0871 - accuracy: 0.9909
Epoch 3/5
###Markdown
Great! Now that we've trained the model, let's evaluate it on the test dataset using the [`evaluate`](https://www.tensorflow.org/api_docs/python/tf/keras/models/Sequentialevaluate) method:
###Code
'''TODO: Use the evaluate method to test the model!'''
test_loss, test_acc = cnn_model.evaluate(x=test_images, y=test_labels, verbose=2)
print('Test accuracy:', test_acc)
###Output
10000/10000 - 1s - loss: 0.3903 - accuracy: 0.9825
Test accuracy: 0.9825
###Markdown
What is the highest accuracy you're able to achieve using the CNN model, and how does the accuracy of the CNN model compare to the accuracy of the simple fully connected network? What optimizers and learning rates seem to be optimal for training the CNN model? Make predictions with the CNN modelWith the model trained, we can use it to make predictions about some images. The [`predict`](https://www.tensorflow.org/api_docs/python/tf/keras/models/Sequentialpredict) function call generates the output predictions given a set of input samples.
###Code
predictions = cnn_model.predict(test_images)
###Output
_____no_output_____
###Markdown
With this function call, the model has predicted the label for each image in the testing set. Let's take a look at the prediction for the first image in the test dataset:
###Code
predictions[0]
###Output
_____no_output_____
###Markdown
As you can see, a prediction is an array of 10 numbers. Recall that the output of our model is a probability distribution over the 10 digit classes. Thus, these numbers describe the model's "confidence" that the image corresponds to each of the 10 different digits. Let's look at the digit that has the highest confidence for the first image in the test dataset:
###Code
'''TODO: identify the digit with the highest confidence prediction for the first
image in the test dataset. '''
prediction = np.argmax(predictions, axis=1)
print(prediction)
###Output
[7 2 1 ... 4 5 6]
###Markdown
So, the model is most confident that this image is a "???". We can check the test label (remember, this is the true identity of the digit) to see if this prediction is correct:
###Code
print("Label of this digit is:", test_labels[0])
plt.imshow(test_images[0,:,:,0], cmap=plt.cm.binary)
###Output
Label of this digit is: 7
###Markdown
It is! Let's visualize the classification results on the MNIST dataset. We will plot images from the test dataset along with their predicted label, as well as a histogram that provides the prediction probabilities for each of the digits:
###Code
#@title Change the slider to look at the model's predictions! { run: "auto" }
image_index = 79 #@param {type:"slider", min:0, max:100, step:1}
plt.subplot(1,2,1)
mdl.lab2.plot_image_prediction(image_index, predictions, test_labels, test_images)
plt.subplot(1,2,2)
mdl.lab2.plot_value_prediction(image_index, predictions, test_labels)
###Output
_____no_output_____
###Markdown
We can also plot several images along with their predictions, where correct prediction labels are blue and incorrect prediction labels are red. The number gives the percent confidence (out of 100) for the predicted label. Note the model can be very confident in an incorrect prediction!
###Code
# Plots the first X test images, their predicted label, and the true label
# Color correct predictions in blue, incorrect predictions in red
num_rows = 5
num_cols = 4
num_images = num_rows*num_cols
plt.figure(figsize=(2*2*num_cols, 2*num_rows))
for i in range(num_images):
plt.subplot(num_rows, 2*num_cols, 2*i+1)
mdl.lab2.plot_image_prediction(i, predictions, test_labels, test_images)
plt.subplot(num_rows, 2*num_cols, 2*i+2)
mdl.lab2.plot_value_prediction(i, predictions, test_labels)
###Output
_____no_output_____
###Markdown
1.4 Training the model 2.0Earlier in the lab, we used the [`fit`](https://www.tensorflow.org/api_docs/python/tf/keras/models/Sequentialfit) function call to train the model. This function is quite high-level and intuitive, which is really useful for simpler models. As you may be able to tell, this function abstracts away many details in the training call, and we have less control over the training process, even though that finer control can be useful in other contexts. As an alternative to this, we can use the [`tf.GradientTape`](https://www.tensorflow.org/api_docs/python/tf/GradientTape) class to record differentiation operations during training, and then call the [`tf.GradientTape.gradient`](https://www.tensorflow.org/api_docs/python/tf/GradientTapegradient) function to actually compute the gradients. You may recall seeing this in Lab 1 Part 1, but let's take another look at this here.We'll use this framework to train our `cnn_model` using stochastic gradient descent.
###Code
# Rebuild the CNN model
cnn_model = build_cnn_model()
batch_size = 12
loss_history = mdl.util.LossHistory(smoothing_factor=0.95) # to record the evolution of the loss
plotter = mdl.util.PeriodicPlotter(sec=2, xlabel='Iterations', ylabel='Loss', scale='semilogy')
optimizer = tf.keras.optimizers.SGD(learning_rate=1e-2) # define our optimizer
if hasattr(tqdm, '_instances'): tqdm._instances.clear() # clear if it exists
for idx in tqdm(range(0, train_images.shape[0], batch_size)):
# First grab a batch of training data and convert the input images to tensors
(images, labels) = (train_images[idx:idx+batch_size], train_labels[idx:idx+batch_size])
images = tf.convert_to_tensor(images, dtype=tf.float32)
# GradientTape to record differentiation operations
with tf.GradientTape() as tape:
#'''TODO: feed the images into the model and obtain the predictions'''
logits = cnn_model(images)
#'''TODO: compute the categorical cross entropy loss
loss_value = tf.keras.backend.sparse_categorical_crossentropy(labels, logits) # TODO
loss_history.append(loss_value.numpy().mean()) # append the loss to the loss_history record
plotter.plot(loss_history.get())
# Backpropagation
'''TODO: Use the tape to compute the gradient against all parameters in the CNN model.
Use cnn_model.trainable_variables to access these parameters.'''
grads = tape.gradient(loss_value, cnn_model.trainable_variables)
optimizer.apply_gradients(zip(grads, cnn_model.trainable_variables))
###Output
_____no_output_____
###Markdown
Copyright Information
###Code
# Copyright 2020 MIT 6.S191 Introduction to Deep Learning. All Rights Reserved.
#
# Licensed under the MIT License. You may not use this file except in compliance
# with the License. Use and/or modification of this code outside of 6.S191 must
# reference:
#
# © MIT 6.S191: Introduction to Deep Learning
# http://introtodeeplearning.com
#
###Output
_____no_output_____
###Markdown
Laboratory 2: Computer Vision Part 1: MNIST Digit ClassificationIn the first portion of this lab, we will build and train a convolutional neural network (CNN) for classification of handwritten digits from the famous [MNIST](http://yann.lecun.com/exdb/mnist/) dataset. The MNIST dataset consists of 60,000 training images and 10,000 test images. Our classes are the digits 0-9.First, let's download the course repository, install dependencies, and import the relevant packages we'll need for this lab.
###Code
# Import Tensorflow 2.0
# %tensorflow_version 2.x
import tensorflow as tf
# !pip install mitdeeplearning
import mitdeeplearning as mdl
import matplotlib.pyplot as plt
import numpy as np
import random
from tqdm import tqdm
# Check that we are using a GPU, if not switch runtimes
# using Runtime > Change Runtime Type > GPU
assert len(tf.config.list_physical_devices("GPU")) > 0
gpus = tf.config.experimental.list_physical_devices("GPU")
if gpus:
try:
# Currently, memory growth needs to be the same across GPUs
for gpu in gpus:
tf.config.experimental.set_memory_growth(gpu, True)
logical_gpus = tf.config.experimental.list_logical_devices("GPU")
print(len(gpus), "Physical GPUs,", len(logical_gpus), "Logical GPUs")
except RuntimeError as e:
# Memory growth must be set before GPUs have been initialized
print(e)
###Output
1 Physical GPUs, 1 Logical GPUs
###Markdown
1.1 MNIST dataset Let's download and load the dataset and display a few random samples from it:
###Code
mnist = tf.keras.datasets.mnist
(train_images, train_labels), (test_images, test_labels) = mnist.load_data()
train_images = (np.expand_dims(train_images, axis=-1) / 255.0).astype(np.float32)
train_labels = (train_labels).astype(np.int64)
test_images = (np.expand_dims(test_images, axis=-1) / 255.0).astype(np.float32)
test_labels = (test_labels).astype(np.int64)
###Output
_____no_output_____
###Markdown
Our training set is made up of 28x28 grayscale images of handwritten digits. Let's visualize what some of these images and their corresponding training labels look like.
###Code
plt.figure(figsize=(10, 10))
random_inds = np.random.choice(60000, 36)
for i in range(36):
plt.subplot(6, 6, i + 1)
plt.xticks([])
plt.yticks([])
plt.grid(False)
image_ind = random_inds[i]
plt.imshow(np.squeeze(train_images[image_ind]), cmap=plt.cm.binary)
plt.xlabel(train_labels[image_ind])
###Output
_____no_output_____
###Markdown
1.2 Neural Network for Handwritten Digit ClassificationWe'll first build a simple neural network consisting of two fully connected layers and apply this to the digit classification task. Our network will ultimately output a probability distribution over the 10 digit classes (0-9). This first architecture we will be building is depicted below:![alt_text](https://raw.githubusercontent.com/aamini/introtodeeplearning/master/lab2/img/mnist_2layers_arch.png "CNN Architecture for MNIST Classification") Fully connected neural network architectureTo define the architecture of this first fully connected neural network, we'll once again use the Keras API and define the model using the [`Sequential`](https://www.tensorflow.org/api_docs/python/tf/keras/models/Sequential) class. Note how we first use a [`Flatten`](https://www.tensorflow.org/api_docs/python/tf/keras/layers/Flatten) layer, which flattens the input so that it can be fed into the model. In this next block, you'll define the fully connected layers of this simple network.
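Because each `Dense` layer is fully connected, its parameter count is simply inputs * units plus one bias per unit. A quick sketch of the arithmetic for the architecture above:
###Code
# Dense params = inputs * units + units (biases)
print(784 * 128 + 128)  # first Dense layer on the flattened 28*28 input: 100480
print(128 * 10 + 10)    # output Dense layer over the 10 digit classes: 1290
###Output
_____no_output_____
###Markdown
Now let's define the model: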
###Code
def build_fc_model():
fc_model = tf.keras.Sequential(
[
# First define a Flatten layer
tf.keras.layers.Flatten(),
# '''TODO: Define the activation function for the first fully connected (Dense) layer.'''
tf.keras.layers.Dense(128, activation="relu"),
# '''TODO: Define the second Dense layer to output the classification probabilities'''
tf.keras.layers.Dense(10, activation="softmax"),
]
)
return fc_model
model = build_fc_model()
###Output
_____no_output_____
###Markdown
As we progress through this next portion, you may find that you'll want to make changes to the architecture defined above. **Note that in order to update the model later on, you'll need to re-run the above cell to re-initialize the model.** Let's take a step back and think about the network we've just created. The first layer in this network, `tf.keras.layers.Flatten`, transforms the format of the images from a 2d-array (28 x 28 pixels) to a 1d-array of 28 * 28 = 784 pixels. You can think of this layer as unstacking rows of pixels in the image and lining them up. There are no learned parameters in this layer; it only reformats the data.After the pixels are flattened, the network consists of a sequence of two `tf.keras.layers.Dense` layers. These are fully-connected neural layers. The first `Dense` layer has 128 nodes (or neurons). The second (and last) layer (which you've defined!) should return an array of probability scores that sum to 1. Each node contains a score that indicates the probability that the current image belongs to one of the handwritten digit classes.That defines our fully connected model! Compile the modelBefore training the model, we need to define a few more settings. These are added during the model's [`compile`](https://www.tensorflow.org/api_docs/python/tf/keras/models/Sequentialcompile) step:* *Loss function* — This defines how we measure how accurate the model is during training. As was covered in lecture, during training we want to minimize this function, which will "steer" the model in the right direction.* *Optimizer* — This defines how the model is updated based on the data it sees and its loss function.* *Metrics* — Here we can define metrics used to monitor the training and testing steps. In this example, we'll look at the *accuracy*, the fraction of the images that are correctly classified.We'll start out by using a stochastic gradient descent (SGD) optimizer initialized with a learning rate of 0.1. Since we are performing a categorical classification task, we'll want to use the [cross entropy loss](https://www.tensorflow.org/api_docs/python/tf/keras/metrics/sparse_categorical_crossentropy).You'll want to experiment with both the choice of optimizer and learning rate and evaluate how these affect the accuracy of the trained model.
###Code
"""TODO: Experiment with different optimizers and learning rates. How do these affect
the accuracy of the trained model? Which optimizers and/or learning rates yield
the best performance?"""
model.compile(
optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3),
loss="sparse_categorical_crossentropy",
metrics=["accuracy"],
)
###Output
_____no_output_____
###Markdown
Train the modelWe're now ready to train our model, which will involve feeding the training data (`train_images` and `train_labels`) into the model, and then asking it to learn the associations between images and labels. We'll also need to define the batch size and the number of epochs, or iterations over the MNIST dataset, to use during training. In Lab 1, we saw how we can use `GradientTape` to optimize losses and train models with stochastic gradient descent. After defining the model settings in the `compile` step, we can also accomplish training by calling the [`fit`](https://www.tensorflow.org/api_docs/python/tf/keras/models/Sequentialfit) method on an instance of the `Model` class. We will use this to train our fully connected model.
###Code
train_labels[:10]
# Notice the train labels are raw integer class indices; in this case use sparse
# categorical cross-entropy (use plain categorical cross-entropy when labels are one-hot)
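# A quick sketch of the difference (using the tf.keras.losses functional API):
# both calls below evaluate to -log(0.8) ~ 0.223; they only differ in the
# label format they expect (integer index vs. one-hot vector)
probs = tf.constant([[0.1, 0.1, 0.8]])
print(tf.keras.losses.sparse_categorical_crossentropy([2], probs).numpy())
print(tf.keras.losses.categorical_crossentropy([[0.0, 0.0, 1.0]], probs).numpy())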
# Define the batch size and the number of epochs to use during training
BATCH_SIZE = 64
EPOCHS = 5
model.fit(train_images, train_labels, batch_size=BATCH_SIZE, epochs=EPOCHS)
###Output
Epoch 1/5
938/938 [==============================] - 1s 1ms/step - loss: 0.2958 - accuracy: 0.9164
Epoch 2/5
938/938 [==============================] - 1s 1ms/step - loss: 0.1356 - accuracy: 0.9603
Epoch 3/5
938/938 [==============================] - 1s 1ms/step - loss: 0.0945 - accuracy: 0.9722
Epoch 4/5
938/938 [==============================] - 1s 1ms/step - loss: 0.0716 - accuracy: 0.9790
Epoch 5/5
938/938 [==============================] - 1s 1ms/step - loss: 0.0567 - accuracy: 0.9832
###Markdown
As the model trains, the loss and accuracy metrics are displayed. With five epochs and the Adam optimizer at a learning rate of 0.001, this fully connected model should achieve an accuracy of approximately 0.98 (or 98%) on the training data. Evaluate accuracy on the test datasetNow that we've trained the model, we can ask it to make predictions about a test set that it hasn't seen before. In this example, the `test_images` array comprises our test dataset. To evaluate accuracy, we can check to see if the model's predictions match the labels from the `test_labels` array. Use the [`evaluate`](https://www.tensorflow.org/api_docs/python/tf/keras/models/Sequentialevaluate) method to evaluate the model on the test dataset!
###Code
"""TODO: Use the evaluate method to test the model!"""
test_loss, test_acc = model.evaluate(test_images, test_labels)
print("Test accuracy:", test_acc)
###Output
WARNING:tensorflow:Callbacks method `on_test_batch_end` is slow compared to the batch time (batch time: 0.0000s vs `on_test_batch_end` time: 0.0010s). Check your callbacks.
313/313 [==============================] - 0s 961us/step - loss: 0.0808 - accuracy: 0.9759
Test accuracy: 0.9758999943733215
###Markdown
You may observe that the accuracy on the test dataset is a little lower than the accuracy on the training dataset. This gap between training accuracy and test accuracy is an example of *overfitting*, when a machine learning model performs worse on new data than on its training data. What is the highest accuracy you can achieve with this first fully connected model? Since the handwritten digit classification task is pretty straightforward, you may be wondering how we can do better...![Deeper...](https://i.kym-cdn.com/photos/images/newsfeed/000/534/153/f87.jpg) 1.3 Convolutional Neural Network (CNN) for handwritten digit classification As we saw in lecture, convolutional neural networks (CNNs) are particularly well-suited for a variety of tasks in computer vision, and have achieved near-perfect accuracies on the MNIST dataset. We will now build a CNN composed of two convolutional layers and pooling layers, followed by two fully connected layers, and ultimately output a probability distribution over the 10 digit classes (0-9). The CNN we will be building is depicted below:![alt_text](https://raw.githubusercontent.com/aamini/introtodeeplearning/master/lab2/img/convnet_fig.png "CNN Architecture for MNIST Classification") Define the CNN modelWe'll use the same training and test datasets as before, and proceed similarly to our fully connected network to define and train our new CNN model. To do this, we will explore two layers we have not encountered before: you can use [`keras.layers.Conv2D`](https://www.tensorflow.org/api_docs/python/tf/keras/layers/Conv2D) to define convolutional layers and [`keras.layers.MaxPool2D`](https://www.tensorflow.org/api_docs/python/tf/keras/layers/MaxPool2D) to define the pooling layers. Use the parameters shown in the network architecture above to define these layers and build the CNN model.
###Code
def build_cnn_model():
cnn_model = tf.keras.Sequential(
[
# TODO: Define the first convolutional layer
tf.keras.layers.Conv2D(filters=24, kernel_size=(3, 3), activation="relu"),
# TODO: Define the first max pooling layer
tf.keras.layers.MaxPool2D(pool_size=(2, 2)),
# TODO: Define the second convolutional layer
tf.keras.layers.Conv2D(filters=36, kernel_size=(3, 3), activation="relu"),
# TODO: Define the second max pooling layer
tf.keras.layers.MaxPool2D(pool_size=(2, 2)),
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(128, activation=tf.nn.relu),
# TODO: Define the last Dense layer to output the classification
# probabilities. Pay attention to the activation needed for a probability
# output
tf.keras.layers.Dense(10, activation="softmax"),
]
)
return cnn_model
cnn_model = build_cnn_model()
# Initialize the model by passing some data through
cnn_model.predict(train_images[[0]])
# Print the summary of the layers in the model.
print(cnn_model.summary())
###Output
Model: "sequential_1"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
conv2d (Conv2D) (None, 26, 26, 24) 240
_________________________________________________________________
max_pooling2d (MaxPooling2D) (None, 13, 13, 24) 0
_________________________________________________________________
conv2d_1 (Conv2D) (None, 11, 11, 36) 7812
_________________________________________________________________
max_pooling2d_1 (MaxPooling2 (None, 5, 5, 36) 0
_________________________________________________________________
flatten_1 (Flatten) (None, 900) 0
_________________________________________________________________
dense_2 (Dense) (None, 128) 115328
_________________________________________________________________
dense_3 (Dense) (None, 10) 1290
=================================================================
Total params: 124,670
Trainable params: 124,670
Non-trainable params: 0
_________________________________________________________________
None
###Markdown
Train and test the CNN modelNow, as before, we can define the loss function, optimizer, and metrics through the `compile` method. Compile the CNN model with an optimizer and learning rate of choice:
###Code
"""TODO: Define the compile operation with your optimizer and learning rate of choice"""
cnn_model.compile(
optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"]
) # TODO
###Output
_____no_output_____
###Markdown
As was the case with the fully connected model, we can train our CNN using the `fit` method via the Keras API.
###Code
"""TODO: Use model.fit to train the CNN model, with the same batch_size and number of epochs previously used."""
cnn_model.fit(train_images, train_labels, batch_size=BATCH_SIZE, epochs=EPOCHS)
###Output
Epoch 1/5
938/938 [==============================] - 2s 2ms/step - loss: 0.1927 - accuracy: 0.9416
Epoch 2/5
938/938 [==============================] - 1s 2ms/step - loss: 0.0574 - accuracy: 0.9816
Epoch 3/5
938/938 [==============================] - 1s 2ms/step - loss: 0.0380 - accuracy: 0.9883
Epoch 4/5
938/938 [==============================] - 1s 2ms/step - loss: 0.0288 - accuracy: 0.9906
Epoch 5/5
938/938 [==============================] - 1s 2ms/step - loss: 0.0220 - accuracy: 0.9931
###Markdown
Great! Now that we've trained the model, let's evaluate it on the test dataset using the [`evaluate`](https://www.tensorflow.org/api_docs/python/tf/keras/models/Sequential#evaluate) method:
###Code
"""TODO: Use the evaluate method to test the model!"""
test_loss, test_acc = cnn_model.evaluate(test_images, test_labels)
print("Test accuracy:", test_acc)
###Output
313/313 [==============================] - 0s 1ms/step - loss: 0.0399 - accuracy: 0.9868
Test accuracy: 0.9868000149726868
###Markdown
What is the highest accuracy you're able to achieve using the CNN model, and how does the accuracy of the CNN model compare to the accuracy of the simple fully connected network? What optimizers and learning rates seem to be optimal for training the CNN model? Make predictions with the CNN modelWith the model trained, we can use it to make predictions about some images. The [`predict`](https://www.tensorflow.org/api_docs/python/tf/keras/models/Sequential#predict) function call generates the output predictions given a set of input samples.
###Code
predictions = cnn_model.predict(test_images)
###Output
_____no_output_____
###Markdown
With this function call, the model has predicted the label for each image in the testing set. Let's take a look at the prediction for the first image in the test dataset:
###Code
predictions[0]
###Output
_____no_output_____
###Markdown
As you can see, a prediction is an array of 10 numbers. Recall that the output of our model is a probability distribution over the 10 digit classes. Thus, these numbers describe the model's "confidence" that the image corresponds to each of the 10 different digits. Let's look at the digit that has the highest confidence for the first image in the test dataset:
###Code
"""TODO: identify the digit with the highest confidence prediction for the first
image in the test dataset. """
prediction = tf.math.argmax(predictions, axis=1)  # predicted digit for every test image
print(prediction)  # the first entry corresponds to the first test image
###Output
tf.Tensor([7 2 1 ... 4 5 6], shape=(10000,), dtype=int64)
###Markdown
So, the model is most confident that this image is a "???". We can check the test label (remember, this is the true identity of the digit) to see if this prediction is correct:
###Code
print("Label of this digit is:", test_labels[0])
plt.imshow(test_images[0, :, :, 0], cmap=plt.cm.binary)
###Output
Label of this digit is: 7
###Markdown
It is! Let's visualize the classification results on the MNIST dataset. We will plot images from the test dataset along with their predicted label, as well as a histogram that provides the prediction probabilities for each of the digits:
###Code
# @title Change the slider to look at the model's predictions! { run: "auto" }
image_index = 79 # @param {type:"slider", min:0, max:100, step:1}
plt.subplot(1, 2, 1)
mdl.lab2.plot_image_prediction(image_index, predictions, test_labels, test_images)
plt.subplot(1, 2, 2)
mdl.lab2.plot_value_prediction(image_index, predictions, test_labels)
###Output
_____no_output_____
###Markdown
We can also plot several images along with their predictions, where correct prediction labels are blue and incorrect prediction labels are red. The number gives the percent confidence (out of 100) for the predicted label. Note the model can be very confident in an incorrect prediction!
###Code
# Plots the first X test images, their predicted label, and the true label
# Color correct predictions in blue, incorrect predictions in red
num_rows = 5
num_cols = 4
num_images = num_rows * num_cols
plt.figure(figsize=(2 * 2 * num_cols, 2 * num_rows))
for i in range(num_images):
plt.subplot(num_rows, 2 * num_cols, 2 * i + 1)
mdl.lab2.plot_image_prediction(i, predictions, test_labels, test_images)
plt.subplot(num_rows, 2 * num_cols, 2 * i + 2)
mdl.lab2.plot_value_prediction(i, predictions, test_labels)
###Output
_____no_output_____
###Markdown
1.4 Training the model 2.0Earlier in the lab, we used the [`fit`](https://www.tensorflow.org/api_docs/python/tf/keras/models/Sequential#fit) function call to train the model. This function is quite high-level and intuitive, which is really useful for simpler models. As you may be able to tell, this function abstracts away many details of the training call, so we have less control over how the model is trained; that finer control can be useful in other contexts. As an alternative, we can use the [`tf.GradientTape`](https://www.tensorflow.org/api_docs/python/tf/GradientTape) class to record differentiation operations during training, and then call the [`tf.GradientTape.gradient`](https://www.tensorflow.org/api_docs/python/tf/GradientTape#gradient) function to actually compute the gradients. You may recall seeing this in Lab 1 Part 1, but let's take another look at it here.We'll use this framework to train our `cnn_model` with mini-batch updates (the cell below uses the Adam optimizer).
###Code
# Rebuild the CNN model
cnn_model = build_cnn_model()
batch_size = 12
loss_history = mdl.util.LossHistory(
smoothing_factor=0.95
) # to record the evolution of the loss
plotter = mdl.util.PeriodicPlotter(
sec=2, xlabel="Iterations", ylabel="Loss", scale="semilogy"
)
optimizer = tf.keras.optimizers.Adam(learning_rate=1e-3) # define our optimizer
if hasattr(tqdm, "_instances"):
tqdm._instances.clear() # clear if it exists
for idx in tqdm(range(0, train_images.shape[0], batch_size)):
# First grab a batch of training data and convert the input images to tensors
(images, labels) = (
train_images[idx : idx + batch_size],
train_labels[idx : idx + batch_size],
)
images = tf.convert_to_tensor(images, dtype=tf.float32)
# GradientTape to record differentiation operations
with tf.GradientTape() as tape:
#'''TODO: feed the images into the model and obtain the predictions'''
        logits = cnn_model(images)  # TODO
        # Note: the model ends in a softmax, so its outputs are already a
        # probability distribution; from_logits=True should only be used when
        # the final layer has a linear activation.
        #'''TODO: compute the categorical cross entropy loss'''
        loss_value = tf.keras.backend.sparse_categorical_crossentropy(labels, logits)
loss_history.append(
loss_value.numpy().mean()
) # append the loss to the loss_history record
plotter.plot(loss_history.get())
# Backpropagation
"""TODO: Use the tape to compute the gradient against all parameters in the CNN model.
Use cnn_model.trainable_variables to access these parameters."""
grads = tape.gradient(loss_value, cnn_model.trainable_variables) # TODO
optimizer.apply_gradients(zip(grads, cnn_model.trainable_variables))
###Output
_____no_output_____
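###Markdown
Since the custom training loop above never calls `evaluate`, we can check the trained model with a short manual accuracy computation. This is a minimal sketch, assuming `test_images` and `test_labels` are still in memory from earlier in the lab:
###Code
# Predict class probabilities for the test set, take the argmax over classes,
# and compare the predicted digits against the ground-truth labels.
test_predictions = cnn_model.predict(test_images)
predicted_digits = tf.math.argmax(test_predictions, axis=1)
manual_test_acc = tf.reduce_mean(
    tf.cast(tf.equal(predicted_digits, test_labels), tf.float32)
)
print("Test accuracy:", manual_test_acc.numpy())
###Output
_____no_output_____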
###Markdown
Visit MIT Deep Learning Run in Google Colab View Source on GitHub Copyright Information
###Code
# Copyright 2020 MIT 6.S191 Introduction to Deep Learning. All Rights Reserved.
#
# Licensed under the MIT License. You may not use this file except in compliance
# with the License. Use and/or modification of this code outside of 6.S191 must
# reference:
#
# © MIT 6.S191: Introduction to Deep Learning
# http://introtodeeplearning.com
#
###Output
_____no_output_____
###Markdown
Laboratory 2: Computer Vision Part 1: MNIST Digit ClassificationIn the first portion of this lab, we will build and train a convolutional neural network (CNN) for classification of handwritten digits from the famous [MNIST](http://yann.lecun.com/exdb/mnist/) dataset. The MNIST dataset consists of 60,000 training images and 10,000 test images. Our classes are the digits 0-9.First, let's download the course repository, install dependencies, and import the relevant packages we'll need for this lab.
###Code
# Import Tensorflow 2.0
%tensorflow_version 2.x
import tensorflow as tf
!pip install mitdeeplearning
import mitdeeplearning as mdl
import matplotlib.pyplot as plt
import numpy as np
import random
from tqdm import tqdm
# Check that we are using a GPU, if not switch runtimes
# using Runtime > Change Runtime Type > GPU
assert len(tf.config.list_physical_devices('GPU')) > 0
###Output
_____no_output_____
###Markdown
1.1 MNIST dataset Let's download and load the dataset and display a few random samples from it:
###Code
mnist = tf.keras.datasets.mnist
(train_images, train_labels), (test_images, test_labels) = mnist.load_data()
train_images = (np.expand_dims(train_images, axis=-1)/255.).astype(np.float32)
train_labels = (train_labels).astype(np.int64)
test_images = (np.expand_dims(test_images, axis=-1)/255.).astype(np.float32)
test_labels = (test_labels).astype(np.int64)
###Output
_____no_output_____
###Markdown
Our training set is made up of 28x28 grayscale images of handwritten digits. Let's visualize what some of these images and their corresponding training labels look like.
###Code
plt.figure(figsize=(10,10))
random_inds = np.random.choice(60000,36)
for i in range(36):
plt.subplot(6,6,i+1)
plt.xticks([])
plt.yticks([])
plt.grid(False)
image_ind = random_inds[i]
plt.imshow(np.squeeze(train_images[image_ind]), cmap=plt.cm.binary)
plt.xlabel(train_labels[image_ind])
###Output
_____no_output_____
###Markdown
1.2 Neural Network for Handwritten Digit ClassificationWe'll first build a simple neural network consisting of two fully connected layers and apply this to the digit classification task. Our network will ultimately output a probability distribution over the 10 digit classes (0-9). This first architecture we will be building is depicted below:![alt_text](https://raw.githubusercontent.com/aamini/introtodeeplearning/master/lab2/img/mnist_2layers_arch.png "CNN Architecture for MNIST Classification") Fully connected neural network architectureTo define the architecture of this first fully connected neural network, we'll once again use the Keras API and define the model using the [`Sequential`](https://www.tensorflow.org/api_docs/python/tf/keras/models/Sequential) class. Note how we first use a [`Flatten`](https://www.tensorflow.org/api_docs/python/tf/keras/layers/Flatten) layer, which flattens the input so that it can be fed into the model. In this next block, you'll define the fully connected layers of this simple network.
###Code
def build_fc_model():
fc_model = tf.keras.Sequential([
# First define a Flatten layer
tf.keras.layers.Flatten(),
# '''TODO: Define the activation function for the first fully connected (Dense) layer.'''
tf.keras.layers.Dense(128, activation= '''TODO'''),
# '''TODO: Define the second Dense layer to output the classification probabilities'''
'''TODO: Dense layer to output classification probabilities'''
])
return fc_model
model = build_fc_model()
###Output
_____no_output_____
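###Markdown
If you get stuck, the cell below shows one possible completion of the TODOs above (a sketch, not the only valid answer): a ReLU activation for the hidden layer, and a softmax output so the 10 values form a probability distribution.
###Code
# One possible completion -- try filling in the TODOs yourself first!
def build_fc_model():
    fc_model = tf.keras.Sequential([
        tf.keras.layers.Flatten(),
        # ReLU activation for the first fully connected layer
        tf.keras.layers.Dense(128, activation=tf.nn.relu),
        # Softmax output over the 10 digit classes
        tf.keras.layers.Dense(10, activation=tf.nn.softmax)
    ])
    return fc_model
model = build_fc_model()
###Output
_____no_output_____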
###Markdown
As we progress through this next portion, you may find that you'll want to make changes to the architecture defined above. **Note that in order to update the model later on, you'll need to re-run the above cell to re-initialize the model.** Let's take a step back and think about the network we've just created. The first layer in this network, `tf.keras.layers.Flatten`, transforms the format of the images from a 2d-array (28 x 28 pixels), to a 1d-array of 28 * 28 = 784 pixels. You can think of this layer as unstacking rows of pixels in the image and lining them up. There are no learned parameters in this layer; it only reformats the data.After the pixels are flattened, the network consists of a sequence of two `tf.keras.layers.Dense` layers. These are fully-connected neural layers. The first `Dense` layer has 128 nodes (or neurons). The second (and last) layer (which you've defined!) should return an array of probability scores that sum to 1. Each node contains a score that indicates the probability that the current image belongs to one of the handwritten digit classes.That defines our fully connected model! Compile the modelBefore training the model, we need to define a few more settings. These are added during the model's [`compile`](https://www.tensorflow.org/api_docs/python/tf/keras/models/Sequential#compile) step:* *Loss function* — This defines how we measure how accurate the model is during training. As was covered in lecture, during training we want to minimize this function, which will "steer" the model in the right direction.* *Optimizer* — This defines how the model is updated based on the data it sees and its loss function.* *Metrics* — Here we can define metrics used to monitor the training and testing steps. In this example, we'll look at the *accuracy*, the fraction of the images that are correctly classified.We'll start out by using a stochastic gradient descent (SGD) optimizer initialized with a learning rate of 0.1. Since we are performing a categorical classification task, we'll want to use the [cross entropy loss](https://www.tensorflow.org/api_docs/python/tf/keras/metrics/sparse_categorical_crossentropy).You'll want to experiment with both the choice of optimizer and learning rate and evaluate how these affect the accuracy of the trained model.
###Code
'''TODO: Experiment with different optimizers and learning rates. How do these affect
the accuracy of the trained model? Which optimizers and/or learning rates yield
the best performance?'''
model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=1e-1),
loss='sparse_categorical_crossentropy',
metrics=['accuracy'])
###Output
_____no_output_____
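###Markdown
As one starting point for the experimentation the TODO asks for, you could swap in a different optimizer, for example Adam with a smaller learning rate (a sketch; the best settings are for you to find):
###Code
# Example alternative compile step -- uncomment to try Adam instead of SGD
# model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3),
#               loss='sparse_categorical_crossentropy',
#               metrics=['accuracy'])
###Output
_____no_output_____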
###Markdown
Train the modelWe're now ready to train our model, which will involve feeding the training data (`train_images` and `train_labels`) into the model, and then asking it to learn the associations between images and labels. We'll also need to define the batch size and the number of epochs, or iterations over the MNIST dataset, to use during training. In Lab 1, we saw how we can use `GradientTape` to optimize losses and train models with stochastic gradient descent. After defining the model settings in the `compile` step, we can also accomplish training by calling the [`fit`](https://www.tensorflow.org/api_docs/python/tf/keras/models/Sequential#fit) method on an instance of the `Model` class. We will use this to train our fully connected model.
###Code
# Define the batch size and the number of epochs to use during training
BATCH_SIZE = 64
EPOCHS = 5
model.fit(train_images, train_labels, batch_size=BATCH_SIZE, epochs=EPOCHS)
###Output
_____no_output_____
###Markdown
As the model trains, the loss and accuracy metrics are displayed. With five epochs and a learning rate of 0.1, this fully connected model should achieve an accuracy of approximately 0.97 (or 97%) on the training data. Evaluate accuracy on the test datasetNow that we've trained the model, we can ask it to make predictions about a test set that it hasn't seen before. In this example, the `test_images` array comprises our test dataset. To evaluate accuracy, we can check to see if the model's predictions match the labels from the `test_labels` array. Use the [`evaluate`](https://www.tensorflow.org/api_docs/python/tf/keras/models/Sequential#evaluate) method to evaluate the model on the test dataset!
###Code
'''TODO: Use the evaluate method to test the model!'''
test_loss, test_acc = # TODO
print('Test accuracy:', test_acc)
###Output
_____no_output_____
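###Markdown
One possible completion of the evaluation cell above (a sketch): `evaluate` returns the loss followed by the metrics defined at compile time.
###Code
# One possible completion: evaluate on the held-out test set
test_loss, test_acc = model.evaluate(test_images, test_labels)
print('Test accuracy:', test_acc)
###Output
_____no_output_____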
###Markdown
You may observe that the accuracy on the test dataset is a little lower than the accuracy on the training dataset. This gap between training accuracy and test accuracy is an example of *overfitting*, when a machine learning model performs worse on new data than on its training data. What is the highest accuracy you can achieve with this first fully connected model? Since the handwritten digit classification task is pretty straightforward, you may be wondering how we can do better...![Deeper...](https://i.kym-cdn.com/photos/images/newsfeed/000/534/153/f87.jpg) 1.3 Convolutional Neural Network (CNN) for handwritten digit classification As we saw in lecture, convolutional neural networks (CNNs) are particularly well-suited for a variety of tasks in computer vision, and have achieved near-perfect accuracies on the MNIST dataset. We will now build a CNN composed of two convolutional layers and pooling layers, followed by two fully connected layers, and ultimately output a probability distribution over the 10 digit classes (0-9). The CNN we will be building is depicted below:![alt_text](https://raw.githubusercontent.com/aamini/introtodeeplearning/master/lab2/img/convnet_fig.png "CNN Architecture for MNIST Classification") Define the CNN modelWe'll use the same training and test datasets as before, and proceed similarly to our fully connected network to define and train our new CNN model. To do this, we will explore two layers we have not encountered before: you can use [`keras.layers.Conv2D`](https://www.tensorflow.org/api_docs/python/tf/keras/layers/Conv2D) to define convolutional layers and [`keras.layers.MaxPool2D`](https://www.tensorflow.org/api_docs/python/tf/keras/layers/MaxPool2D) to define the pooling layers. Use the parameters shown in the network architecture above to define these layers and build the CNN model.
###Code
def build_cnn_model():
cnn_model = tf.keras.Sequential([
# TODO: Define the first convolutional layer
tf.keras.layers.Conv2D('''TODO'''),
# TODO: Define the first max pooling layer
tf.keras.layers.MaxPool2D('''TODO'''),
# TODO: Define the second convolutional layer
tf.keras.layers.Conv2D('''TODO'''),
# TODO: Define the second max pooling layer
tf.keras.layers.MaxPool2D('''TODO'''),
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(128, activation=tf.nn.relu),
# TODO: Define the last Dense layer to output the classification
        # probabilities. Pay attention to the activation needed for a probability
        # output
'''TODO: Dense layer to output classification probabilities'''
])
return cnn_model
cnn_model = build_cnn_model()
# Initialize the model by passing some data through
cnn_model.predict(train_images[[0]])
# Print the summary of the layers in the model.
print(cnn_model.summary())
###Output
_____no_output_____
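###Markdown
One possible completion of the CNN TODOs, using the filter counts and kernel sizes from the architecture figure above (24 and 36 filters with 3x3 kernels, 2x2 max pooling) and a softmax output (a sketch -- try it yourself first):
###Code
# One possible completion of the CNN architecture
def build_cnn_model():
    cnn_model = tf.keras.Sequential([
        # First convolutional layer: 24 filters, 3x3 kernel
        tf.keras.layers.Conv2D(filters=24, kernel_size=(3, 3), activation=tf.nn.relu),
        tf.keras.layers.MaxPool2D(pool_size=(2, 2)),
        # Second convolutional layer: 36 filters, 3x3 kernel
        tf.keras.layers.Conv2D(filters=36, kernel_size=(3, 3), activation=tf.nn.relu),
        tf.keras.layers.MaxPool2D(pool_size=(2, 2)),
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(128, activation=tf.nn.relu),
        # Softmax so the outputs form a probability distribution
        tf.keras.layers.Dense(10, activation=tf.nn.softmax),
    ])
    return cnn_model
cnn_model = build_cnn_model()
###Output
_____no_output_____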
###Markdown
Train and test the CNN modelNow, as before, we can define the loss function, optimizer, and metrics through the `compile` method. Compile the CNN model with an optimizer and learning rate of choice:
###Code
'''TODO: Define the compile operation with your optimizer and learning rate of choice'''
cnn_model.compile(optimizer='''TODO''', loss='''TODO''', metrics=['accuracy']) # TODO
###Output
_____no_output_____
###Markdown
As was the case with the fully connected model, we can train our CNN using the `fit` method via the Keras API.
###Code
'''TODO: Use model.fit to train the CNN model, with the same batch_size and number of epochs previously used.'''
cnn_model.fit('''TODO''')
###Output
_____no_output_____
###Markdown
Great! Now that we've trained the model, let's evaluate it on the test dataset using the [`evaluate`](https://www.tensorflow.org/api_docs/python/tf/keras/models/Sequential#evaluate) method:
###Code
'''TODO: Use the evaluate method to test the model!'''
test_loss, test_acc = # TODO
print('Test accuracy:', test_acc)
###Output
_____no_output_____
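###Markdown
For reference, here is one possible combination for the three TODO cells above (a sketch -- other optimizers and learning rates may well work better):
###Code
# One possible completion: compile, train, and evaluate the CNN
cnn_model.compile(optimizer='adam',
                  loss='sparse_categorical_crossentropy',
                  metrics=['accuracy'])
cnn_model.fit(train_images, train_labels, batch_size=BATCH_SIZE, epochs=EPOCHS)
test_loss, test_acc = cnn_model.evaluate(test_images, test_labels)
print('Test accuracy:', test_acc)
###Output
_____no_output_____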
###Markdown
What is the highest accuracy you're able to achieve using the CNN model, and how does the accuracy of the CNN model compare to the accuracy of the simple fully connected network? What optimizers and learning rates seem to be optimal for training the CNN model? Make predictions with the CNN modelWith the model trained, we can use it to make predictions about some images. The [`predict`](https://www.tensorflow.org/api_docs/python/tf/keras/models/Sequential#predict) function call generates the output predictions given a set of input samples.
###Code
predictions = cnn_model.predict(test_images)
###Output
_____no_output_____
###Markdown
With this function call, the model has predicted the label for each image in the testing set. Let's take a look at the prediction for the first image in the test dataset:
###Code
predictions[0]
###Output
_____no_output_____
###Markdown
As you can see, a prediction is an array of 10 numbers. Recall that the output of our model is a probability distribution over the 10 digit classes. Thus, these numbers describe the model's "confidence" that the image corresponds to each of the 10 different digits. Let's look at the digit that has the highest confidence for the first image in the test dataset:
###Code
'''TODO: identify the digit with the highest confidence prediction for the first
image in the test dataset. '''
prediction = # TODO
print(prediction)
###Output
_____no_output_____
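###Markdown
One possible completion (a sketch): take the argmax over the 10 class probabilities of the first test image.
###Code
# The index of the largest probability is the predicted digit
prediction = tf.math.argmax(predictions[0])
print(prediction)
###Output
_____no_output_____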
###Markdown
So, the model is most confident that this image is a "???". We can check the test label (remember, this is the true identity of the digit) to see if this prediction is correct:
###Code
print("Label of this digit is:", test_labels[0])
plt.imshow(test_images[0,:,:,0], cmap=plt.cm.binary)
###Output
_____no_output_____
###Markdown
It is! Let's visualize the classification results on the MNIST dataset. We will plot images from the test dataset along with their predicted label, as well as a histogram that provides the prediction probabilities for each of the digits:
###Code
#@title Change the slider to look at the model's predictions! { run: "auto" }
image_index = 79 #@param {type:"slider", min:0, max:100, step:1}
plt.subplot(1,2,1)
mdl.lab2.plot_image_prediction(image_index, predictions, test_labels, test_images)
plt.subplot(1,2,2)
mdl.lab2.plot_value_prediction(image_index, predictions, test_labels)
###Output
_____no_output_____
###Markdown
We can also plot several images along with their predictions, where correct prediction labels are blue and incorrect prediction labels are red. The number gives the percent confidence (out of 100) for the predicted label. Note the model can be very confident in an incorrect prediction!
###Code
# Plots the first X test images, their predicted label, and the true label
# Color correct predictions in blue, incorrect predictions in red
num_rows = 5
num_cols = 4
num_images = num_rows*num_cols
plt.figure(figsize=(2*2*num_cols, 2*num_rows))
for i in range(num_images):
plt.subplot(num_rows, 2*num_cols, 2*i+1)
mdl.lab2.plot_image_prediction(i, predictions, test_labels, test_images)
plt.subplot(num_rows, 2*num_cols, 2*i+2)
mdl.lab2.plot_value_prediction(i, predictions, test_labels)
###Output
_____no_output_____
###Markdown
1.4 Training the model 2.0Earlier in the lab, we used the [`fit`](https://www.tensorflow.org/api_docs/python/tf/keras/models/Sequential#fit) function call to train the model. This function is quite high-level and intuitive, which is really useful for simpler models. As you may be able to tell, this function abstracts away many details of the training call, so we have less control over how the model is trained; that finer control can be useful in other contexts. As an alternative, we can use the [`tf.GradientTape`](https://www.tensorflow.org/api_docs/python/tf/GradientTape) class to record differentiation operations during training, and then call the [`tf.GradientTape.gradient`](https://www.tensorflow.org/api_docs/python/tf/GradientTape#gradient) function to actually compute the gradients. You may recall seeing this in Lab 1 Part 1, but let's take another look at it here.We'll use this framework to train our `cnn_model` using stochastic gradient descent.
###Code
# Rebuild the CNN model
cnn_model = build_cnn_model()
batch_size = 12
loss_history = mdl.util.LossHistory(smoothing_factor=0.95) # to record the evolution of the loss
plotter = mdl.util.PeriodicPlotter(sec=2, xlabel='Iterations', ylabel='Loss', scale='semilogy')
optimizer = tf.keras.optimizers.SGD(learning_rate=1e-2) # define our optimizer
if hasattr(tqdm, '_instances'): tqdm._instances.clear() # clear if it exists
for idx in tqdm(range(0, train_images.shape[0], batch_size)):
# First grab a batch of training data and convert the input images to tensors
(images, labels) = (train_images[idx:idx+batch_size], train_labels[idx:idx+batch_size])
images = tf.convert_to_tensor(images, dtype=tf.float32)
# GradientTape to record differentiation operations
with tf.GradientTape() as tape:
#'''TODO: feed the images into the model and obtain the predictions'''
logits = # TODO
#'''TODO: compute the categorical cross entropy loss
loss_value = tf.keras.backend.sparse_categorical_crossentropy() # TODO
loss_history.append(loss_value.numpy().mean()) # append the loss to the loss_history record
plotter.plot(loss_history.get())
# Backpropagation
'''TODO: Use the tape to compute the gradient against all parameters in the CNN model.
Use cnn_model.trainable_variables to access these parameters.'''
grads = # TODO
optimizer.apply_gradients(zip(grads, cnn_model.trainable_variables))
###Output
_____no_output_____
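###Markdown
For reference, one possible completion of the training loop above (a sketch, reusing `batch_size`, `loss_history`, `plotter`, and `optimizer` from the cell above, and assuming the TODOs in `build_cnn_model` have been completed): a forward pass through the model, the sparse categorical cross entropy loss, and gradients of that loss taken against the model's trainable variables.
###Code
# One possible completion of the custom training loop
cnn_model = build_cnn_model()
for idx in tqdm(range(0, train_images.shape[0], batch_size)):
    images = tf.convert_to_tensor(train_images[idx:idx+batch_size], dtype=tf.float32)
    labels = train_labels[idx:idx+batch_size]
    with tf.GradientTape() as tape:
        # Forward pass: the softmax output is a probability distribution
        logits = cnn_model(images)
        # Sparse categorical cross entropy between true labels and predictions
        loss_value = tf.keras.backend.sparse_categorical_crossentropy(labels, logits)
    loss_history.append(loss_value.numpy().mean())
    plotter.plot(loss_history.get())
    # Gradients of the loss w.r.t. every trainable parameter, then one optimizer step
    grads = tape.gradient(loss_value, cnn_model.trainable_variables)
    optimizer.apply_gradients(zip(grads, cnn_model.trainable_variables))
###Output
_____no_output_____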
###Markdown
Visit MIT Deep Learning Run in Google Colab View Source on GitHub Copyright Information
###Code
# Copyright 2021 MIT 6.S191 Introduction to Deep Learning. All Rights Reserved.
#
# Licensed under the MIT License. You may not use this file except in compliance
# with the License. Use and/or modification of this code outside of 6.S191 must
# reference:
#
# © MIT 6.S191: Introduction to Deep Learning
# http://introtodeeplearning.com
#
###Output
_____no_output_____
###Markdown
Laboratory 2: Computer Vision Part 1: MNIST Digit ClassificationIn the first portion of this lab, we will build and train a convolutional neural network (CNN) for classification of handwritten digits from the famous [MNIST](http://yann.lecun.com/exdb/mnist/) dataset. The MNIST dataset consists of 60,000 training images and 10,000 test images. Our classes are the digits 0-9.First, let's download the course repository, install dependencies, and import the relevant packages we'll need for this lab.
###Code
# Import Tensorflow 2.0
%tensorflow_version 2.x
import tensorflow as tf
!pip install mitdeeplearning
import mitdeeplearning as mdl
import matplotlib.pyplot as plt
import numpy as np
import random
from tqdm import tqdm
# Check that we are using a GPU, if not switch runtimes
# using Runtime > Change Runtime Type > GPU
assert len(tf.config.list_physical_devices('GPU')) > 0
print("GitHub Copy")
###Output
Collecting mitdeeplearning
  Downloading https://files.pythonhosted.org/packages/9d/ad/650eb53c0d9d1213536fe94bc150f89b564ff5ee784bd662272584bb091b/mitdeeplearning-0.2.0.tar.gz (2.1MB)
     |████████████████████████████████| 2.1MB 21.9MB/s
[?25hRequirement already satisfied: numpy in /usr/local/lib/python3.6/dist-packages (from mitdeeplearning) (1.19.5)
Requirement already satisfied: regex in /usr/local/lib/python3.6/dist-packages (from mitdeeplearning) (2019.12.20)
Requirement already satisfied: tqdm in /usr/local/lib/python3.6/dist-packages (from mitdeeplearning) (4.41.1)
Requirement already satisfied: gym in /usr/local/lib/python3.6/dist-packages (from mitdeeplearning) (0.17.3)
Requirement already satisfied: pyglet<=1.5.0,>=1.4.0 in /usr/local/lib/python3.6/dist-packages (from gym->mitdeeplearning) (1.5.0)
Requirement already satisfied: scipy in /usr/local/lib/python3.6/dist-packages (from gym->mitdeeplearning) (1.4.1)
Requirement already satisfied: cloudpickle<1.7.0,>=1.2.0 in /usr/local/lib/python3.6/dist-packages (from gym->mitdeeplearning) (1.3.0)
Requirement already satisfied: future in /usr/local/lib/python3.6/dist-packages (from pyglet<=1.5.0,>=1.4.0->gym->mitdeeplearning) (0.16.0)
Building wheels for collected packages: mitdeeplearning
  Building wheel for mitdeeplearning (setup.py) ... done
Created wheel for mitdeeplearning: filename=mitdeeplearning-0.2.0-cp36-none-any.whl size=2115443 sha256=df0a542026961a51ea928616268be92b310063185e5dd57b3841607c422ae946
Stored in directory: /root/.cache/pip/wheels/af/dc/2a/5c3633135e7e4ef4fd31463cfa1942cb1bae7486ab94e7a2ad
Successfully built mitdeeplearning
Installing collected packages: mitdeeplearning
Successfully installed mitdeeplearning-0.2.0
###Markdown
1.1 MNIST dataset Let's download and load the dataset and display a few random samples from it:
###Code
mnist = tf.keras.datasets.mnist
(train_images, train_labels), (test_images, test_labels) = mnist.load_data()
train_images = (np.expand_dims(train_images, axis=-1)/255.).astype(np.float32)
train_labels = (train_labels).astype(np.int64)
test_images = (np.expand_dims(test_images, axis=-1)/255.).astype(np.float32)
test_labels = (test_labels).astype(np.int64)
###Output
Downloading data from https://storage.googleapis.com/tensorflow/tf-keras-datasets/mnist.npz
11493376/11490434 [==============================] - 0s 0us/step
###Markdown
Our training set is made up of 28x28 grayscale images of handwritten digits. Let's visualize what some of these images and their corresponding training labels look like.
###Code
plt.figure(figsize=(10,10))
random_inds = np.random.choice(60000,36)
for i in range(36):
plt.subplot(6,6,i+1)
plt.xticks([])
plt.yticks([])
plt.grid(False)
image_ind = random_inds[i]
plt.imshow(np.squeeze(train_images[image_ind]), cmap=plt.cm.binary)
plt.xlabel(train_labels[image_ind])
###Output
_____no_output_____
###Markdown
1.2 Neural Network for Handwritten Digit ClassificationWe'll first build a simple neural network consisting of two fully connected layers and apply this to the digit classification task. Our network will ultimately output a probability distribution over the 10 digit classes (0-9). This first architecture we will be building is depicted below:![alt_text](https://raw.githubusercontent.com/aamini/introtodeeplearning/master/lab2/img/mnist_2layers_arch.png "CNN Architecture for MNIST Classification") Fully connected neural network architectureTo define the architecture of this first fully connected neural network, we'll once again use the Keras API and define the model using the [`Sequential`](https://www.tensorflow.org/api_docs/python/tf/keras/models/Sequential) class. Note how we first use a [`Flatten`](https://www.tensorflow.org/api_docs/python/tf/keras/layers/Flatten) layer, which flattens the input so that it can be fed into the model. In this next block, you'll define the fully connected layers of this simple network.
###Code
def build_fc_model():
fc_model = tf.keras.Sequential([
# First define a Flatten layer
tf.keras.layers.Flatten(),
# '''TODO: Define the activation function for the first fully connected (Dense) layer.'''
        tf.keras.layers.Dense(128, activation=tf.nn.relu),
# '''TODO: Define the second Dense layer to output the classification probabilities'''
tf.keras.layers.Dense(10, activation=tf.nn.softmax)
])
return fc_model
model = build_fc_model()
###Output
_____no_output_____
###Markdown
As we progress through this next portion, you may find that you'll want to make changes to the architecture defined above. **Note that in order to update the model later on, you'll need to re-run the above cell to re-initialize the model.** Let's take a step back and think about the network we've just created. The first layer in this network, `tf.keras.layers.Flatten`, transforms the format of the images from a 2d-array (28 x 28 pixels), to a 1d-array of 28 * 28 = 784 pixels. You can think of this layer as unstacking rows of pixels in the image and lining them up. There are no learned parameters in this layer; it only reformats the data.After the pixels are flattened, the network consists of a sequence of two `tf.keras.layers.Dense` layers. These are fully-connected neural layers. The first `Dense` layer has 128 nodes (or neurons). The second (and last) layer (which you've defined!) should return an array of probability scores that sum to 1. Each node contains a score that indicates the probability that the current image belongs to one of the handwritten digit classes.That defines our fully connected model! Compile the modelBefore training the model, we need to define a few more settings. These are added during the model's [`compile`](https://www.tensorflow.org/api_docs/python/tf/keras/models/Sequential#compile) step:* *Loss function* — This defines how we measure how accurate the model is during training. As was covered in lecture, during training we want to minimize this function, which will "steer" the model in the right direction.* *Optimizer* — This defines how the model is updated based on the data it sees and its loss function.* *Metrics* — Here we can define metrics used to monitor the training and testing steps. In this example, we'll look at the *accuracy*, the fraction of the images that are correctly classified.We'll start out by using a stochastic gradient descent (SGD) optimizer initialized with a learning rate of 0.1. Since we are performing a categorical classification task, we'll want to use the [cross entropy loss](https://www.tensorflow.org/api_docs/python/tf/keras/metrics/sparse_categorical_crossentropy).You'll want to experiment with both the choice of optimizer and learning rate and evaluate how these affect the accuracy of the trained model.
###Code
'''TODO: Experiment with different optimizers and learning rates. How do these affect
the accuracy of the trained model? Which optimizers and/or learning rates yield
the best performance?'''
model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=1e-1),
loss='sparse_categorical_crossentropy',
metrics=['accuracy'])
###Output
_____no_output_____
###Markdown
Train the modelWe're now ready to train our model, which will involve feeding the training data (`train_images` and `train_labels`) into the model, and then asking it to learn the associations between images and labels. We'll also need to define the batch size and the number of epochs, or iterations over the MNIST dataset, to use during training. In Lab 1, we saw how we can use `GradientTape` to optimize losses and train models with stochastic gradient descent. After defining the model settings in the `compile` step, we can also accomplish training by calling the [`fit`](https://www.tensorflow.org/api_docs/python/tf/keras/models/Sequential#fit) method on an instance of the `Model` class. We will use this to train our fully connected model.
###Code
# Define the batch size and the number of epochs to use during training
BATCH_SIZE = 64
EPOCHS = 5
model.fit(train_images, train_labels, batch_size=BATCH_SIZE, epochs=EPOCHS)
###Output
Epoch 1/5
938/938 [==============================] - 3s 2ms/step - loss: 0.5783 - accuracy: 0.8405
Epoch 2/5
938/938 [==============================] - 2s 2ms/step - loss: 0.2139 - accuracy: 0.9388
Epoch 3/5
938/938 [==============================] - 2s 2ms/step - loss: 0.1569 - accuracy: 0.9559
Epoch 4/5
938/938 [==============================] - 2s 2ms/step - loss: 0.1281 - accuracy: 0.9641
Epoch 5/5
938/938 [==============================] - 2s 2ms/step - loss: 0.1069 - accuracy: 0.9696
###Markdown
As the model trains, the loss and accuracy metrics are displayed. With five epochs and a learning rate of 0.1, this fully connected model should achieve an accuracy of approximately 0.97 (or 97%) on the training data. Evaluate accuracy on the test datasetNow that we've trained the model, we can ask it to make predictions about a test set that it hasn't seen before. In this example, the `test_images` array comprises our test dataset. To evaluate accuracy, we can check to see if the model's predictions match the labels from the `test_labels` array. Use the [`evaluate`](https://www.tensorflow.org/api_docs/python/tf/keras/models/Sequential#evaluate) method to evaluate the model on the test dataset!
###Code
'''TODO: Use the evaluate method to test the model!'''
test_loss, test_acc = model.evaluate(test_images, test_labels)
print('Test accuracy:', test_acc)
###Output
313/313 [==============================] - 1s 2ms/step - loss: 0.1043 - accuracy: 0.9685
Test accuracy: 0.968500018119812
###Markdown
You may observe that the accuracy on the test dataset is a little lower than the accuracy on the training dataset. This gap between training accuracy and test accuracy is an example of *overfitting*, when a machine learning model performs worse on new data than on its training data. What is the highest accuracy you can achieve with this first fully connected model? Since the handwritten digit classification task is pretty straightforward, you may be wondering how we can do better...![Deeper...](https://i.kym-cdn.com/photos/images/newsfeed/000/534/153/f87.jpg) 1.3 Convolutional Neural Network (CNN) for handwritten digit classification As we saw in lecture, convolutional neural networks (CNNs) are particularly well-suited for a variety of tasks in computer vision, and have achieved near-perfect accuracies on the MNIST dataset. We will now build a CNN composed of two convolutional layers and pooling layers, followed by two fully connected layers, and ultimately output a probability distribution over the 10 digit classes (0-9). The CNN we will be building is depicted below:![alt_text](https://raw.githubusercontent.com/aamini/introtodeeplearning/master/lab2/img/convnet_fig.png "CNN Architecture for MNIST Classification") Define the CNN modelWe'll use the same training and test datasets as before, and proceed similarly to our fully connected network to define and train our new CNN model. To do this, we will explore two layers we have not encountered before: you can use [`keras.layers.Conv2D`](https://www.tensorflow.org/api_docs/python/tf/keras/layers/Conv2D) to define convolutional layers and [`keras.layers.MaxPool2D`](https://www.tensorflow.org/api_docs/python/tf/keras/layers/MaxPool2D) to define the pooling layers. Use the parameters shown in the network architecture above to define these layers and build the CNN model.
###Code
def build_cnn_model():
cnn_model = tf.keras.Sequential([
# TODO: Define the first convolutional layer
tf.keras.layers.Conv2D(24, 3, activation=tf.nn.relu),
# TODO: Define the first max pooling layer
tf.keras.layers.MaxPool2D(pool_size=(2,2)),
# TODO: Define the second convolutional layer
tf.keras.layers.Conv2D(36, 3, activation=tf.nn.relu),
# TODO: Define the second max pooling layer
tf.keras.layers.MaxPool2D(pool_size=(2,2)),
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(128, activation=tf.nn.relu),
# TODO: Define the last Dense layer to output the classification
        # probabilities. Pay attention to the activation needed for a probability
        # output
tf.keras.layers.Dense(10, activation=tf.nn.softmax),
])
return cnn_model
cnn_model = build_cnn_model()
# Initialize the model by passing some data through
cnn_model.predict(train_images[[0]])
# Print the summary of the layers in the model.
print(cnn_model.summary())
###Output
Model: "sequential_3"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
conv2d_2 (Conv2D) (None, 26, 26, 24) 240
_________________________________________________________________
max_pooling2d_2 (MaxPooling2 (None, 13, 13, 24) 0
_________________________________________________________________
conv2d_3 (Conv2D) (None, 11, 11, 36) 7812
_________________________________________________________________
max_pooling2d_3 (MaxPooling2 (None, 5, 5, 36) 0
_________________________________________________________________
flatten_3 (Flatten) (None, 900) 0
_________________________________________________________________
dense_6 (Dense) (None, 128) 115328
_________________________________________________________________
dense_7 (Dense) (None, 10) 1290
=================================================================
Total params: 124,670
Trainable params: 124,670
Non-trainable params: 0
_________________________________________________________________
None
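###Markdown
As a quick cross-check (an aside, not part of the lab), the parameter counts in the summary above follow from `kernel_h * kernel_w * in_channels * filters + filters` for convolutions and `inputs * units + units` for dense layers:
###Code
# Reproducing the "Param #" column of the summary by hand.
print(3*3*1*24 + 24) # conv2d_2: 240
print(3*3*24*36 + 36) # conv2d_3: 7812
print(5*5*36*128 + 128) # dense_6: 115328 (flattened 5x5x36 = 900 inputs)
print(128*10 + 10) # dense_7: 1290
###Output
_____no_output_____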
###Markdown
Train and test the CNN modelNow, as before, we can define the loss function, optimizer, and metrics through the `compile` method. Compile the CNN model with an optimizer and learning rate of choice:
###Code
'''TODO: Define the compile operation with your optimizer and learning rate of choice'''
cnn_model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=1e-2), loss=tf.keras.losses.sparse_categorical_crossentropy, metrics=['accuracy']) # TODO
###Output
_____no_output_____
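###Markdown
If you want to experiment beyond SGD, an adaptive optimizer is one option; the sketch below (an alternative left commented out so it does not override the settings behind the recorded run that follows) swaps in Adam with its common default learning rate of 1e-3:
###Code
# A hedged alternative compile call (an assumption/experiment, not the
# lab's chosen settings); uncomment to try it.
# cnn_model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3),
#                   loss=tf.keras.losses.sparse_categorical_crossentropy,
#                   metrics=['accuracy'])
###Output
_____no_output_____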
###Markdown
As was the case with the fully connected model, we can train our CNN using the `fit` method via the Keras API.
###Code
'''TODO: Use model.fit to train the CNN model, with the same batch_size and number of epochs previously used.'''
cnn_model.fit(train_images, train_labels,batch_size=BATCH_SIZE, epochs=EPOCHS)
###Output
Epoch 1/5
938/938 [==============================] - 3s 3ms/step - loss: 1.5165 - accuracy: 0.5091
Epoch 2/5
938/938 [==============================] - 2s 2ms/step - loss: 0.2605 - accuracy: 0.9214
Epoch 3/5
938/938 [==============================] - 2s 2ms/step - loss: 0.1737 - accuracy: 0.9486
Epoch 4/5
938/938 [==============================] - 2s 2ms/step - loss: 0.1276 - accuracy: 0.9611
Epoch 5/5
938/938 [==============================] - 2s 2ms/step - loss: 0.1066 - accuracy: 0.9669
###Markdown
Great! Now that we've trained the model, let's evaluate it on the test dataset using the [`evaluate`](https://www.tensorflow.org/api_docs/python/tf/keras/models/Sequential#evaluate) method:
###Code
'''TODO: Use the evaluate method to test the model!'''
test_loss, test_acc = cnn_model.evaluate(test_images, test_labels)
print('Test accuracy:', test_acc)
###Output
1875/1875 [==============================] - 3s 2ms/step - loss: 0.0939 - accuracy: 0.9717
Test accuracy: 0.971666693687439
###Markdown
What is the highest accuracy you're able to achieve using the CNN model, and how does the accuracy of the CNN model compare to the accuracy of the simple fully connected network? What optimizers and learning rates seem to be optimal for training the CNN model? Make predictions with the CNN modelWith the model trained, we can use it to make predictions about some images. The [`predict`](https://www.tensorflow.org/api_docs/python/tf/keras/models/Sequential#predict) function call generates the output predictions given a set of input samples.
###Code
predictions = cnn_model.predict(test_images)
###Output
_____no_output_____
###Markdown
With this function call, the model has predicted the label for each image in the testing set. Let's take a look at the prediction for the first image in the test dataset:
###Code
predictions[0]
###Output
_____no_output_____
###Markdown
As you can see, a prediction is an array of 10 numbers. Recall that the output of our model is a probability distribution over the 10 digit classes. Thus, these numbers describe the model's "confidence" that the image corresponds to each of the 10 different digits. Let's look at the digit that has the highest confidence for the first image in the test dataset:
###Code
'''TODO: identify the digit with the highest confidence prediction for the first
image in the test dataset. '''
prediction = tf.math.argmax(predictions[0])
print(prediction)
###Output
tf.Tensor(7, shape=(), dtype=int64)
###Markdown
So, the model is most confident that this image is a "???". We can check the test label (remember, this is the true identity of the digit) to see if this prediction is correct:
###Code
print("Label of this digit is:", test_labels[0])
plt.imshow(test_images[0,:,:,0], cmap=plt.cm.binary)
###Output
Label of this digit is: 7
###Markdown
It is! Let's visualize the classification results on the MNIST dataset. We will plot images from the test dataset along with their predicted label, as well as a histogram that provides the prediction probabilities for each of the digits:
###Code
#@title Change the slider to look at the model's predictions! { run: "auto" }
image_index = 89 #@param {type:"slider", min:0, max:100, step:1}
plt.subplot(1,2,1)
mdl.lab2.plot_image_prediction(image_index, predictions, test_labels, test_images)
plt.subplot(1,2,2)
mdl.lab2.plot_value_prediction(image_index, predictions, test_labels)
###Output
_____no_output_____
###Markdown
We can also plot several images along with their predictions, where correct prediction labels are blue and incorrect prediction labels are red. The number gives the percent confidence (out of 100) for the predicted label. Note the model can be very confident in an incorrect prediction!
###Code
# Plots the first X test images, their predicted label, and the true label
# Color correct predictions in blue, incorrect predictions in red
num_rows = 5
num_cols = 4
num_images = num_rows*num_cols
plt.figure(figsize=(2*2*num_cols, 2*num_rows))
for i in range(num_images):
plt.subplot(num_rows, 2*num_cols, 2*i+1)
mdl.lab2.plot_image_prediction(i, predictions, test_labels, test_images)
plt.subplot(num_rows, 2*num_cols, 2*i+2)
mdl.lab2.plot_value_prediction(i, predictions, test_labels)
###Output
_____no_output_____
###Markdown
1.4 Training the model 2.0Earlier in the lab, we used the [`fit`](https://www.tensorflow.org/api_docs/python/tf/keras/models/Sequential#fit) function call to train the model. This function is quite high-level and intuitive, which is really useful for simpler models. As you may be able to tell, this function abstracts away many details of the training call, and gives us less control over how the model is trained; that finer control can be useful in other contexts. As an alternative to this, we can use the [`tf.GradientTape`](https://www.tensorflow.org/api_docs/python/tf/GradientTape) class to record differentiation operations during training, and then call the [`tf.GradientTape.gradient`](https://www.tensorflow.org/api_docs/python/tf/GradientTape#gradient) function to actually compute the gradients. You may recall seeing this in Lab 1 Part 1, but let's take another look at this here.We'll use this framework to train our `cnn_model` using stochastic gradient descent.
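Before the full loop, here is a minimal standalone sketch (an aside, not the lab's code) of the tape-and-gradient pattern on a scalar function:
###Code
# Record y = x^2 under a GradientTape and recover dy/dx = 2x.
x = tf.Variable(3.0)
with tf.GradientTape() as tape:
    y = x * x
print(tape.gradient(y, x)) # tf.Tensor(6.0, shape=(), dtype=float32)
###Output
_____no_output_____
###Markdown
The training loop below applies exactly this pattern, with the loss in place of `y` and the CNN's weights in place of `x`.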
###Code
# Rebuild the CNN model
cnn_model = build_cnn_model()
batch_size = 12
loss_history = mdl.util.LossHistory(smoothing_factor=0.95) # to record the evolution of the loss
plotter = mdl.util.PeriodicPlotter(sec=2, xlabel='Iterations', ylabel='Loss', scale='semilogy')
optimizer = tf.keras.optimizers.SGD(learning_rate=1e-2) # define our optimizer
if hasattr(tqdm, '_instances'): tqdm._instances.clear() # clear if it exists
for idx in tqdm(range(0, train_images.shape[0], batch_size)):
# First grab a batch of training data and convert the input images to tensors
(images, labels) = (train_images[idx:idx+batch_size], train_labels[idx:idx+batch_size])
images = tf.convert_to_tensor(images, dtype=tf.float32)
# GradientTape to record differentiation operations
with tf.GradientTape() as tape:
#'''TODO: feed the images into the model and obtain the predictions'''
        logits = cnn_model(images) # TODO
        #'''TODO: compute the categorical cross entropy loss
        loss_value = tf.keras.backend.sparse_categorical_crossentropy(labels, logits) # TODO
loss_history.append(loss_value.numpy().mean()) # append the loss to the loss_history record
plotter.plot(loss_history.get())
# Backpropagation
'''TODO: Use the tape to compute the gradient against all parameters in the CNN model.
Use cnn_model.trainable_variables to access these parameters.'''
    grads = tape.gradient(loss_value, cnn_model.trainable_variables) # TODO
optimizer.apply_gradients(zip(grads, cnn_model.trainable_variables))
###Output
_____no_output_____
###Markdown
Copyright Information
###Code
# Copyright 2021 MIT 6.S191 Introduction to Deep Learning. All Rights Reserved.
#
# Licensed under the MIT License. You may not use this file except in compliance
# with the License. Use and/or modification of this code outside of 6.S191 must
# reference:
#
# © MIT 6.S191: Introduction to Deep Learning
# http://introtodeeplearning.com
#
###Output
_____no_output_____
###Markdown
Laboratory 2: Computer Vision Part 1: MNIST Digit ClassificationIn the first portion of this lab, we will build and train a convolutional neural network (CNN) for classification of handwritten digits from the famous [MNIST](http://yann.lecun.com/exdb/mnist/) dataset. The MNIST dataset consists of 60,000 training images and 10,000 test images. Our classes are the digits 0-9.First, let's download the course repository, install dependencies, and import the relevant packages we'll need for this lab.
###Code
# Import Tensorflow 2.0
%tensorflow_version 2.x
import tensorflow as tf
!pip install mitdeeplearning
import mitdeeplearning as mdl
import matplotlib.pyplot as plt
import numpy as np
import random
from tqdm import tqdm
# Check that we are using a GPU, if not switch runtimes
# using Runtime > Change Runtime Type > GPU
assert len(tf.config.list_physical_devices('GPU')) > 0
###Output
Collecting mitdeeplearning
[?25l Downloading https://files.pythonhosted.org/packages/9d/ad/650eb53c0d9d1213536fe94bc150f89b564ff5ee784bd662272584bb091b/mitdeeplearning-0.2.0.tar.gz (2.1MB)
[K |████████████████████████████████| 2.1MB 5.4MB/s
[?25hRequirement already satisfied: numpy in /usr/local/lib/python3.7/dist-packages (from mitdeeplearning) (1.19.5)
Requirement already satisfied: regex in /usr/local/lib/python3.7/dist-packages (from mitdeeplearning) (2019.12.20)
Requirement already satisfied: tqdm in /usr/local/lib/python3.7/dist-packages (from mitdeeplearning) (4.41.1)
Requirement already satisfied: gym in /usr/local/lib/python3.7/dist-packages (from mitdeeplearning) (0.17.3)
Requirement already satisfied: pyglet<=1.5.0,>=1.4.0 in /usr/local/lib/python3.7/dist-packages (from gym->mitdeeplearning) (1.5.0)
Requirement already satisfied: scipy in /usr/local/lib/python3.7/dist-packages (from gym->mitdeeplearning) (1.4.1)
Requirement already satisfied: cloudpickle<1.7.0,>=1.2.0 in /usr/local/lib/python3.7/dist-packages (from gym->mitdeeplearning) (1.3.0)
Requirement already satisfied: future in /usr/local/lib/python3.7/dist-packages (from pyglet<=1.5.0,>=1.4.0->gym->mitdeeplearning) (0.16.0)
Building wheels for collected packages: mitdeeplearning
Building wheel for mitdeeplearning (setup.py) ... [?25l[?25hdone
Created wheel for mitdeeplearning: filename=mitdeeplearning-0.2.0-cp37-none-any.whl size=2115442 sha256=ba6653417f01557b3ff95f63fe0fb7b6fa4356112a869406e7fce52b2259df38
Stored in directory: /root/.cache/pip/wheels/af/dc/2a/5c3633135e7e4ef4fd31463cfa1942cb1bae7486ab94e7a2ad
Successfully built mitdeeplearning
Installing collected packages: mitdeeplearning
Successfully installed mitdeeplearning-0.2.0
###Markdown
1.1 MNIST dataset Let's download and load the dataset and display a few random samples from it:
###Code
mnist = tf.keras.datasets.mnist
(train_images, train_labels), (test_images, test_labels) = mnist.load_data()
train_images = (np.expand_dims(train_images, axis=-1)/255.).astype(np.float32)
train_labels = (train_labels).astype(np.int64)
test_images = (np.expand_dims(test_images, axis=-1)/255.).astype(np.float32)
test_labels = (test_labels).astype(np.int64)
###Output
Downloading data from https://storage.googleapis.com/tensorflow/tf-keras-datasets/mnist.npz
11493376/11490434 [==============================] - 0s 0us/step
###Markdown
Our training set is made up of 28x28 grayscale images of handwritten digits. Let's visualize what some of these images and their corresponding training labels look like.
###Code
plt.figure(figsize=(10,10))
random_inds = np.random.choice(60000,36)
for i in range(36):
plt.subplot(6,6,i+1)
plt.xticks([])
plt.yticks([])
plt.grid(False)
image_ind = random_inds[i]
plt.imshow(np.squeeze(train_images[image_ind]), cmap=plt.cm.binary)
plt.xlabel(train_labels[image_ind])
###Output
_____no_output_____
###Markdown
1.2 Neural Network for Handwritten Digit ClassificationWe'll first build a simple neural network consisting of two fully connected layers and apply this to the digit classification task. Our network will ultimately output a probability distribution over the 10 digit classes (0-9). This first architecture we will be building is depicted below:![alt_text](https://raw.githubusercontent.com/aamini/introtodeeplearning/master/lab2/img/mnist_2layers_arch.png "CNN Architecture for MNIST Classification") Fully connected neural network architectureTo define the architecture of this first fully connected neural network, we'll once again use the Keras API and define the model using the [`Sequential`](https://www.tensorflow.org/api_docs/python/tf/keras/models/Sequential) class. Note how we first use a [`Flatten`](https://www.tensorflow.org/api_docs/python/tf/keras/layers/Flatten) layer, which flattens the input so that it can be fed into the model. In this next block, you'll define the fully connected layers of this simple network.
###Code
def build_fc_model():
fc_model = tf.keras.Sequential([
# First define a Flatten layer
tf.keras.layers.Flatten(),
# '''TODO: Define the activation function for the first fully connected (Dense) layer.'''
tf.keras.layers.Dense(128, activation= 'relu'),
# '''TODO: Define the second Dense layer to output the classification probabilities'''
tf.keras.layers.Dense(10,activation='softmax')
])
return fc_model
model = build_fc_model()
###Output
_____no_output_____
###Markdown
As we progress through this next portion, you may find that you'll want to make changes to the architecture defined above. **Note that in order to update the model later on, you'll need to re-run the above cell to re-initialize the model.** Let's take a step back and think about the network we've just created. The first layer in this network, `tf.keras.layers.Flatten`, transforms the format of the images from a 2d-array (28 x 28 pixels), to a 1d-array of 28 * 28 = 784 pixels. You can think of this layer as unstacking rows of pixels in the image and lining them up. There are no learned parameters in this layer; it only reformats the data.After the pixels are flattened, the network consists of a sequence of two `tf.keras.layers.Dense` layers. These are fully-connected neural layers. The first `Dense` layer has 128 nodes (or neurons). The second (and last) layer (which you've defined!) should return an array of probability scores that sum to 1. Each node contains a score that indicates the probability that the current image belongs to one of the handwritten digit classes.That defines our fully connected model! Compile the modelBefore training the model, we need to define a few more settings. These are added during the model's [`compile`](https://www.tensorflow.org/api_docs/python/tf/keras/models/Sequential#compile) step:* *Loss function* — This defines how we measure how accurate the model is during training. As was covered in lecture, during training we want to minimize this function, which will "steer" the model in the right direction.* *Optimizer* — This defines how the model is updated based on the data it sees and its loss function.* *Metrics* — Here we can define metrics used to monitor the training and testing steps. In this example, we'll look at the *accuracy*, the fraction of the images that are correctly classified.We'll start out by using a stochastic gradient descent (SGD) optimizer initialized with a learning rate of 0.1. Since we are performing a categorical classification task, we'll want to use the [cross entropy loss](https://www.tensorflow.org/api_docs/python/tf/keras/metrics/sparse_categorical_crossentropy).You'll want to experiment with both the choice of optimizer and learning rate and evaluate how these affect the accuracy of the trained model.
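To make the loss concrete, here is a toy check (an aside, not part of the lab) of what sparse categorical cross entropy computes, namely the negative log-probability assigned to the true class:
###Code
# For a single example with true class 0 and predicted probabilities
# [0.7, 0.2, 0.1], the loss is -log(0.7), roughly 0.357.
toy_probs = np.array([[0.7, 0.2, 0.1]], dtype=np.float32)
toy_label = np.array([0])
print(tf.keras.losses.sparse_categorical_crossentropy(toy_label, toy_probs).numpy())
print(-np.log(0.7))
###Output
_____no_output_____
###Markdown
With the loss, optimizer, and metric chosen, compile the model in the next cell.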
###Code
'''TODO: Experiment with different optimizers and learning rates. How do these affect
the accuracy of the trained model? Which optimizers and/or learning rates yield
the best performance?'''
model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=1e-1),
loss='sparse_categorical_crossentropy',
metrics=['accuracy'])
###Output
_____no_output_____
###Markdown
Train the modelWe're now ready to train our model, which will involve feeding the training data (`train_images` and `train_labels`) into the model, and then asking it to learn the associations between images and labels. We'll also need to define the batch size and the number of epochs, or iterations over the MNIST dataset, to use during training. In Lab 1, we saw how we can use `GradientTape` to optimize losses and train models with stochastic gradient descent. After defining the model settings in the `compile` step, we can also accomplish training by calling the [`fit`](https://www.tensorflow.org/api_docs/python/tf/keras/models/Sequential#fit) method on an instance of the `Model` class. We will use this to train our fully connected model.
###Code
# Define the batch size and the number of epochs to use during training
BATCH_SIZE = 64
EPOCHS = 5
model.fit(train_images, train_labels, batch_size=BATCH_SIZE, epochs=EPOCHS)
###Output
Epoch 1/5
938/938 [==============================] - 5s 2ms/step - loss: 0.6001 - accuracy: 0.8318
Epoch 2/5
938/938 [==============================] - 2s 2ms/step - loss: 0.2083 - accuracy: 0.9411
Epoch 3/5
938/938 [==============================] - 2s 2ms/step - loss: 0.1554 - accuracy: 0.9559
Epoch 4/5
938/938 [==============================] - 2s 2ms/step - loss: 0.1233 - accuracy: 0.9655
Epoch 5/5
938/938 [==============================] - 2s 2ms/step - loss: 0.1009 - accuracy: 0.9714
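###Markdown
A side note on the progress bars above: the "938/938" is the number of batches per epoch, which you can reproduce directly (a small aside):
###Code
import math
# 60,000 training images split into batches of BATCH_SIZE, rounding up.
print(math.ceil(60000 / BATCH_SIZE)) # 938
###Output
_____no_output_____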
###Markdown
As the model trains, the loss and accuracy metrics are displayed. With five epochs and a learning rate of 0.1, this fully connected model should achieve an accuracy of approximately 0.97 (or 97%) on the training data. Evaluate accuracy on the test datasetNow that we've trained the model, we can ask it to make predictions about a test set that it hasn't seen before. In this example, the `test_images` array comprises our test dataset. To evaluate accuracy, we can check to see if the model's predictions match the labels from the `test_labels` array. Use the [`evaluate`](https://www.tensorflow.org/api_docs/python/tf/keras/models/Sequential#evaluate) method to evaluate the model on the test dataset!
###Code
'''TODO: Use the evaluate method to test the model!'''
test_loss, test_acc = model.evaluate(test_images,test_labels)
print('Test accuracy:', test_acc)
###Output
313/313 [==============================] - 1s 2ms/step - loss: 0.1015 - accuracy: 0.9704
Test accuracy: 0.9703999757766724
###Markdown
You may observe that the accuracy on the test dataset is a little lower than the accuracy on the training dataset. This gap between training accuracy and test accuracy is an example of *overfitting*, when a machine learning model performs worse on new data than on its training data. What is the highest accuracy you can achieve with this first fully connected model? Since the handwritten digit classification task is pretty straightforward, you may be wondering how we can do better...![Deeper...](https://i.kym-cdn.com/photos/images/newsfeed/000/534/153/f87.jpg) 1.3 Convolutional Neural Network (CNN) for handwritten digit classification As we saw in lecture, convolutional neural networks (CNNs) are particularly well-suited for a variety of tasks in computer vision, and have achieved near-perfect accuracies on the MNIST dataset. We will now build a CNN composed of two convolutional layers and pooling layers, followed by two fully connected layers, and ultimately output a probability distribution over the 10 digit classes (0-9). The CNN we will be building is depicted below:![alt_text](https://raw.githubusercontent.com/aamini/introtodeeplearning/master/lab2/img/convnet_fig.png "CNN Architecture for MNIST Classification") Define the CNN modelWe'll use the same training and test datasets as before, and proceed similarly to our fully connected network to define and train our new CNN model. To do this we will explore two layers we have not encountered before: you can use [`keras.layers.Conv2D` ](https://www.tensorflow.org/api_docs/python/tf/keras/layers/Conv2D) to define convolutional layers and [`keras.layers.MaxPool2D`](https://www.tensorflow.org/api_docs/python/tf/keras/layers/MaxPool2D) to define the pooling layers. Use the parameters shown in the network architecture above to define these layers and build the CNN model.
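Before moving on to the CNN, one common way to shrink the train/test gap noted above is dropout; the sketch below (a suggestion beyond the lab's prescribed architecture, not part of it) adds a `Dropout` layer to the fully connected model:
###Code
# A hedged variant of the fully connected model with 20% dropout on the
# hidden activations; dropout is active only during training.
def build_fc_model_with_dropout():
    return tf.keras.Sequential([
        tf.keras.layers.Flatten(),
        tf.keras.layers.Dense(128, activation='relu'),
        tf.keras.layers.Dropout(0.2),
        tf.keras.layers.Dense(10, activation='softmax'),
    ])
###Output
_____no_output_____
###Markdown
The next cell returns to the lab's CNN architecture.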
###Code
def build_cnn_model():
cnn_model = tf.keras.Sequential([
# TODO: Define the first convolutional layer
      tf.keras.layers.Conv2D(24, 3, activation=tf.nn.relu),
# TODO: Define the first max pooling layer
tf.keras.layers.MaxPool2D(pool_size=(2,2)),
# TODO: Define the second convolutional layer
      tf.keras.layers.Conv2D(24, 3, activation=tf.nn.relu),
# TODO: Define the second max pooling layer
tf.keras.layers.MaxPool2D(pool_size= (2,2)),
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(128, activation=tf.nn.relu),
# TODO: Define the last Dense layer to output the classification
      # probabilities. Pay attention to the activation needed for a
      # probability output.
tf.keras.layers.Dense(10, activation=tf.nn.softmax)
])
return cnn_model
cnn_model = build_cnn_model()
# Initialize the model by passing some data through
cnn_model.predict(train_images[[0]])
# Print the summary of the layers in the model.
print(cnn_model.summary())
###Output
Model: "sequential_1"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
conv2d (Conv2D) (None, 26, 26, 24) 240
_________________________________________________________________
max_pooling2d (MaxPooling2D) (None, 13, 13, 24) 0
_________________________________________________________________
conv2d_1 (Conv2D) (None, 11, 11, 24) 5208
_________________________________________________________________
max_pooling2d_1 (MaxPooling2 (None, 5, 5, 24) 0
_________________________________________________________________
flatten_1 (Flatten) (None, 600) 0
_________________________________________________________________
dense_2 (Dense) (None, 128) 76928
_________________________________________________________________
dense_3 (Dense) (None, 10) 1290
=================================================================
Total params: 83,666
Trainable params: 83,666
Non-trainable params: 0
_________________________________________________________________
None
###Markdown
Train and test the CNN modelNow, as before, we can define the loss function, optimizer, and metrics through the `compile` method. Compile the CNN model with an optimizer and learning rate of choice:
###Code
'''TODO: Define the compile operation with your optimizer and learning rate of choice'''
cnn_model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=1e-1), loss='sparse_categorical_crossentropy', metrics=['accuracy']) # TODO
###Output
_____no_output_____
###Markdown
As was the case with the fully connected model, we can train our CNN using the `fit` method via the Keras API.
###Code
'''TODO: Use model.fit to train the CNN model, with the same batch_size and number of epochs previously used.'''
cnn_model.fit(train_images, train_labels, batch_size=BATCH_SIZE, epochs=EPOCHS)
###Output
Epoch 1/5
938/938 [==============================] - 5s 5ms/step - loss: 0.5107 - accuracy: 0.8403
Epoch 2/5
938/938 [==============================] - 4s 5ms/step - loss: 0.0696 - accuracy: 0.9777
Epoch 3/5
938/938 [==============================] - 4s 5ms/step - loss: 0.0462 - accuracy: 0.9859
Epoch 4/5
938/938 [==============================] - 4s 5ms/step - loss: 0.0354 - accuracy: 0.9888
Epoch 5/5
938/938 [==============================] - 4s 4ms/step - loss: 0.0287 - accuracy: 0.9911
###Markdown
Great! Now that we've trained the model, let's evaluate it on the test dataset using the [`evaluate`](https://www.tensorflow.org/api_docs/python/tf/keras/models/Sequential#evaluate) method:
###Code
'''TODO: Use the evaluate method to test the model!'''
test_loss, test_acc = cnn_model.evaluate(test_images, test_labels)
print('Test accuracy:', test_acc)
###Output
313/313 [==============================] - 1s 2ms/step - loss: 0.1015 - accuracy: 0.9704
Test accuracy: 0.9703999757766724
###Markdown
What is the highest accuracy you're able to achieve using the CNN model, and how does the accuracy of the CNN model compare to the accuracy of the simple fully connected network? What optimizers and learning rates seem to be optimal for training the CNN model? Make predictions with the CNN modelWith the model trained, we can use it to make predictions about some images. The [`predict`](https://www.tensorflow.org/api_docs/python/tf/keras/models/Sequential#predict) function call generates the output predictions given a set of input samples.
###Code
predictions = cnn_model.predict(test_images)
###Output
_____no_output_____
###Markdown
With this function call, the model has predicted the label for each image in the testing set. Let's take a look at the prediction for the first image in the test dataset:
###Code
predictions[0]
###Output
_____no_output_____
###Markdown
As you can see, a prediction is an array of 10 numbers. Recall that the output of our model is a probability distribution over the 10 digit classes. Thus, these numbers describe the model's "confidence" that the image corresponds to each of the 10 different digits. Let's look at the digit that has the highest confidence for the first image in the test dataset:
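As a related aside (assuming `predictions` and `test_labels` from the cells above), taking the argmax over every test image lets you recompute the test accuracy by hand:
###Code
# Predicted class per test image, compared against the true labels.
predicted_labels = np.argmax(predictions, axis=1)
print('Manual test accuracy:', (predicted_labels == test_labels).mean())
###Output
_____no_output_____
###Markdown
The cell below applies the same idea to just the first test image.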
###Code
'''TODO: identify the digit with the highest confidence prediction for the first
image in the test dataset. '''
prediction = np.argmax(predictions[0], axis = 0)
print(prediction)
###Output
7
###Markdown
So, the model is most confident that this image is a "???". We can check the test label (remember, this is the true identity of the digit) to see if this prediction is correct:
###Code
print("Label of this digit is:", test_labels[0])
plt.imshow(test_images[0,:,:,0], cmap=plt.cm.binary)
###Output
Label of this digit is: 7
###Markdown
It is! Let's visualize the classification results on the MNIST dataset. We will plot images from the test dataset along with their predicted label, as well as a histogram that provides the prediction probabilities for each of the digits:
###Code
#@title Change the slider to look at the model's predictions! { run: "auto" }
image_index = 63 #@param {type:"slider", min:0, max:100, step:1}
plt.subplot(1,2,1)
mdl.lab2.plot_image_prediction(image_index, predictions, test_labels, test_images)
plt.subplot(1,2,2)
mdl.lab2.plot_value_prediction(image_index, predictions, test_labels)
###Output
_____no_output_____
###Markdown
We can also plot several images along with their predictions, where correct prediction labels are blue and incorrect prediction labels are red. The number gives the percent confidence (out of 100) for the predicted label. Note the model can be very confident in an incorrect prediction!
###Code
# Plots the first X test images, their predicted label, and the true label
# Color correct predictions in blue, incorrect predictions in red
num_rows = 5
num_cols = 4
num_images = num_rows*num_cols
plt.figure(figsize=(2*2*num_cols, 2*num_rows))
for i in range(num_images):
plt.subplot(num_rows, 2*num_cols, 2*i+1)
mdl.lab2.plot_image_prediction(i, predictions, test_labels, test_images)
plt.subplot(num_rows, 2*num_cols, 2*i+2)
mdl.lab2.plot_value_prediction(i, predictions, test_labels)
###Output
_____no_output_____
###Markdown
1.4 Training the model 2.0Earlier in the lab, we used the [`fit`](https://www.tensorflow.org/api_docs/python/tf/keras/models/Sequential#fit) function call to train the model. This function is quite high-level and intuitive, which is really useful for simpler models. As you may be able to tell, this function abstracts away many details of the training call, and gives us less control over how the model is trained; that finer control can be useful in other contexts. As an alternative to this, we can use the [`tf.GradientTape`](https://www.tensorflow.org/api_docs/python/tf/GradientTape) class to record differentiation operations during training, and then call the [`tf.GradientTape.gradient`](https://www.tensorflow.org/api_docs/python/tf/GradientTape#gradient) function to actually compute the gradients. You may recall seeing this in Lab 1 Part 1, but let's take another look at this here.We'll use this framework to train our `cnn_model` using stochastic gradient descent.
###Code
# Rebuild the CNN model
cnn_model = build_cnn_model()
batch_size = 12
loss_history = mdl.util.LossHistory(smoothing_factor=0.95) # to record the evolution of the loss
plotter = mdl.util.PeriodicPlotter(sec=2, xlabel='Iterations', ylabel='Loss', scale='semilogy')
optimizer = tf.keras.optimizers.SGD(learning_rate=1e-2) # define our optimizer
if hasattr(tqdm, '_instances'): tqdm._instances.clear() # clear if it exists
for idx in tqdm(range(0, train_images.shape[0], batch_size)):
# First grab a batch of training data and convert the input images to tensors
(images, labels) = (train_images[idx:idx+batch_size], train_labels[idx:idx+batch_size])
images = tf.convert_to_tensor(images, dtype=tf.float32)
# GradientTape to record differentiation operations
with tf.GradientTape() as tape:
#'''TODO: feed the images into the model and obtain the predictions'''
logits = cnn_model(images)
#'''TODO: compute the categorical cross entropy loss
loss_value = tf.keras.backend.sparse_categorical_crossentropy(labels, logits) # TODO
loss_history.append(loss_value.numpy().mean()) # append the loss to the loss_history record
plotter.plot(loss_history.get())
# Backpropagation
'''TODO: Use the tape to compute the gradient against all parameters in the CNN model.
Use cnn_model.trainable_variables to access these parameters.'''
grads = tape.gradient(loss_value, cnn_model.trainable_variables)
optimizer.apply_gradients(zip(grads, cnn_model.trainable_variables))
###Output
_____no_output_____
###Markdown
Copyright Information
###Code
# Copyright 2022 MIT 6.S191 Introduction to Deep Learning. All Rights Reserved.
#
# Licensed under the MIT License. You may not use this file except in compliance
# with the License. Use and/or modification of this code outside of 6.S191 must
# reference:
#
# © MIT 6.S191: Introduction to Deep Learning
# http://introtodeeplearning.com
#
###Output
_____no_output_____
###Markdown
Laboratory 2: Computer Vision Part 1: MNIST Digit ClassificationIn the first portion of this lab, we will build and train a convolutional neural network (CNN) for classification of handwritten digits from the famous [MNIST](http://yann.lecun.com/exdb/mnist/) dataset. The MNIST dataset consists of 60,000 training images and 10,000 test images. Our classes are the digits 0-9.First, let's download the course repository, install dependencies, and import the relevant packages we'll need for this lab.
###Code
# Import Tensorflow 2.0
%tensorflow_version 2.x
import tensorflow as tf
!pip install mitdeeplearning
import mitdeeplearning as mdl
import matplotlib.pyplot as plt
import numpy as np
import random
from tqdm import tqdm
# Check that we are using a GPU, if not switch runtimes
# using Runtime > Change Runtime Type > GPU
assert len(tf.config.list_physical_devices('GPU')) > 0
###Output
_____no_output_____
###Markdown
1.1 MNIST dataset Let's download and load the dataset and display a few random samples from it:
###Code
mnist = tf.keras.datasets.mnist
(train_images, train_labels), (test_images, test_labels) = mnist.load_data()
train_images = (np.expand_dims(train_images, axis=-1)/255.).astype(np.float32)
train_labels = (train_labels).astype(np.int64)
test_images = (np.expand_dims(test_images, axis=-1)/255.).astype(np.float32)
test_labels = (test_labels).astype(np.int64)
###Output
_____no_output_____
###Markdown
Our training set is made up of 28x28 grayscale images of handwritten digits. Let's visualize what some of these images and their corresponding training labels look like.
###Code
plt.figure(figsize=(10,10))
random_inds = np.random.choice(60000,36)
for i in range(36):
plt.subplot(6,6,i+1)
plt.xticks([])
plt.yticks([])
plt.grid(False)
image_ind = random_inds[i]
plt.imshow(np.squeeze(train_images[image_ind]), cmap=plt.cm.binary)
plt.xlabel(train_labels[image_ind])
###Output
_____no_output_____
###Markdown
1.2 Neural Network for Handwritten Digit ClassificationWe'll first build a simple neural network consisting of two fully connected layers and apply this to the digit classification task. Our network will ultimately output a probability distribution over the 10 digit classes (0-9). This first architecture we will be building is depicted below:![alt_text](https://raw.githubusercontent.com/aamini/introtodeeplearning/master/lab2/img/mnist_2layers_arch.png "CNN Architecture for MNIST Classification") Fully connected neural network architectureTo define the architecture of this first fully connected neural network, we'll once again use the Keras API and define the model using the [`Sequential`](https://www.tensorflow.org/api_docs/python/tf/keras/models/Sequential) class. Note how we first use a [`Flatten`](https://www.tensorflow.org/api_docs/python/tf/keras/layers/Flatten) layer, which flattens the input so that it can be fed into the model. In this next block, you'll define the fully connected layers of this simple network.
###Code
def build_fc_model():
fc_model = tf.keras.Sequential([
# First define a Flatten layer
tf.keras.layers.Flatten(),
# '''TODO: Define the activation function for the first fully connected (Dense) layer.'''
      tf.keras.layers.Dense(128, activation='relu'),
      # '''TODO: Define the second Dense layer to output the classification probabilities'''
      tf.keras.layers.Dense(10, activation='softmax')
])
return fc_model
model = build_fc_model()
###Output
_____no_output_____
###Markdown
As we progress through this next portion, you may find that you'll want to make changes to the architecture defined above. **Note that in order to update the model later on, you'll need to re-run the above cell to re-initialize the model.** Let's take a step back and think about the network we've just created. The first layer in this network, `tf.keras.layers.Flatten`, transforms the format of the images from a 2d-array (28 x 28 pixels), to a 1d-array of 28 * 28 = 784 pixels. You can think of this layer as unstacking rows of pixels in the image and lining them up. There are no learned parameters in this layer; it only reformats the data.After the pixels are flattened, the network consists of a sequence of two `tf.keras.layers.Dense` layers. These are fully-connected neural layers. The first `Dense` layer has 128 nodes (or neurons). The second (and last) layer (which you've defined!) should return an array of probability scores that sum to 1. Each node contains a score that indicates the probability that the current image belongs to one of the handwritten digit classes.That defines our fully connected model! Compile the modelBefore training the model, we need to define a few more settings. These are added during the model's [`compile`](https://www.tensorflow.org/api_docs/python/tf/keras/models/Sequential#compile) step:* *Loss function* — This defines how we measure how accurate the model is during training. As was covered in lecture, during training we want to minimize this function, which will "steer" the model in the right direction.* *Optimizer* — This defines how the model is updated based on the data it sees and its loss function.* *Metrics* — Here we can define metrics used to monitor the training and testing steps. In this example, we'll look at the *accuracy*, the fraction of the images that are correctly classified.We'll start out by using a stochastic gradient descent (SGD) optimizer initialized with a learning rate of 0.1. Since we are performing a categorical classification task, we'll want to use the [cross entropy loss](https://www.tensorflow.org/api_docs/python/tf/keras/metrics/sparse_categorical_crossentropy).You'll want to experiment with both the choice of optimizer and learning rate and evaluate how these affect the accuracy of the trained model.
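A quick illustration (an aside, not part of the lab) of what `Flatten` does to a batch of images:
###Code
# One dummy 28x28x1 image becomes a length-784 vector; Flatten has no
# learned parameters, it only reshapes.
dummy = tf.zeros([1, 28, 28, 1])
print(tf.keras.layers.Flatten()(dummy).shape) # (1, 784)
###Output
_____no_output_____
###Markdown
Now compile the model in the next cell.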
###Code
'''TODO: Experiment with different optimizers and learning rates. How do these affect
the accuracy of the trained model? Which optimizers and/or learning rates yield
the best performance?'''
model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=1e-1),
loss='sparse_categorical_crossentropy',
metrics=['accuracy'])
###Output
_____no_output_____
###Markdown
Train the modelWe're now ready to train our model, which will involve feeding the training data (`train_images` and `train_labels`) into the model, and then asking it to learn the associations between images and labels. We'll also need to define the batch size and the number of epochs, or iterations over the MNIST dataset, to use during training. In Lab 1, we saw how we can use `GradientTape` to optimize losses and train models with stochastic gradient descent. After defining the model settings in the `compile` step, we can also accomplish training by calling the [`fit`](https://www.tensorflow.org/api_docs/python/tf/keras/models/Sequential#fit) method on an instance of the `Model` class. We will use this to train our fully connected model.
###Code
# Define the batch size and the number of epochs to use during training
BATCH_SIZE = 64
EPOCHS = 5
model.fit(train_images, train_labels, batch_size=BATCH_SIZE, epochs=EPOCHS)
###Output
_____no_output_____
###Markdown
As the model trains, the loss and accuracy metrics are displayed. With five epochs and a learning rate of 0.1, this fully connected model should achieve an accuracy of approximately 0.97 (or 97%) on the training data. Evaluate accuracy on the test datasetNow that we've trained the model, we can ask it to make predictions about a test set that it hasn't seen before. In this example, the `test_images` array comprises our test dataset. To evaluate accuracy, we can check to see if the model's predictions match the labels from the `test_labels` array. Use the [`evaluate`](https://www.tensorflow.org/api_docs/python/tf/keras/models/Sequential#evaluate) method to evaluate the model on the test dataset!
###Code
'''TODO: Use the evaluate method to test the model!'''
test_loss, test_acc = model.evaluate(test_images, test_labels) # TODO
print('Test accuracy:', test_acc)
###Output
_____no_output_____
###Markdown
You may observe that the accuracy on the test dataset is a little lower than the accuracy on the training dataset. This gap between training accuracy and test accuracy is an example of *overfitting*, when a machine learning model performs worse on new data than on its training data. What is the highest accuracy you can achieve with this first fully connected model? Since the handwritten digit classification task is pretty straightforward, you may be wondering how we can do better...![Deeper...](https://i.kym-cdn.com/photos/images/newsfeed/000/534/153/f87.jpg) 1.3 Convolutional Neural Network (CNN) for handwritten digit classification As we saw in lecture, convolutional neural networks (CNNs) are particularly well-suited for a variety of tasks in computer vision, and have achieved near-perfect accuracies on the MNIST dataset. We will now build a CNN composed of two convolutional layers and pooling layers, followed by two fully connected layers, and ultimately output a probability distribution over the 10 digit classes (0-9). The CNN we will be building is depicted below:![alt_text](https://raw.githubusercontent.com/aamini/introtodeeplearning/master/lab2/img/convnet_fig.png "CNN Architecture for MNIST Classification") Define the CNN modelWe'll use the same training and test datasets as before, and proceed similarly to our fully connected network to define and train our new CNN model. To do this we will explore two layers we have not encountered before: you can use [`keras.layers.Conv2D` ](https://www.tensorflow.org/api_docs/python/tf/keras/layers/Conv2D) to define convolutional layers and [`keras.layers.MaxPool2D`](https://www.tensorflow.org/api_docs/python/tf/keras/layers/MaxPool2D) to define the pooling layers. Use the parameters shown in the network architecture above to define these layers and build the CNN model.
###Code
def build_cnn_model():
cnn_model = tf.keras.Sequential([
# TODO: Define the first convolutional layer
      tf.keras.layers.Conv2D(24, 3, activation=tf.nn.relu),
      # TODO: Define the first max pooling layer
      tf.keras.layers.MaxPool2D(pool_size=(2,2)),
      # TODO: Define the second convolutional layer
      tf.keras.layers.Conv2D(36, 3, activation=tf.nn.relu),
      # TODO: Define the second max pooling layer
      tf.keras.layers.MaxPool2D(pool_size=(2,2)),
      tf.keras.layers.Flatten(),
      tf.keras.layers.Dense(128, activation=tf.nn.relu),
      # TODO: Define the last Dense layer to output the classification
      # probabilities. Pay attention to the activation needed for a
      # probability output.
      tf.keras.layers.Dense(10, activation=tf.nn.softmax)
])
return cnn_model
cnn_model = build_cnn_model()
# Initialize the model by passing some data through
cnn_model.predict(train_images[[0]])
# Print the summary of the layers in the model.
print(cnn_model.summary())
###Output
_____no_output_____
###Markdown
Train and test the CNN modelNow, as before, we can define the loss function, optimizer, and metrics through the `compile` method. Compile the CNN model with an optimizer and learning rate of choice:
###Code
'''TODO: Define the compile operation with your optimizer and learning rate of choice'''
cnn_model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=1e-1), loss='sparse_categorical_crossentropy', metrics=['accuracy']) # TODO
###Output
_____no_output_____
###Markdown
As was the case with the fully connected model, we can train our CNN using the `fit` method via the Keras API.
###Code
'''TODO: Use model.fit to train the CNN model, with the same batch_size and number of epochs previously used.'''
cnn_model.fit(train_images, train_labels, batch_size=BATCH_SIZE, epochs=EPOCHS) # TODO
###Output
_____no_output_____
###Markdown
Great! Now that we've trained the model, let's evaluate it on the test dataset using the [`evaluate`](https://www.tensorflow.org/api_docs/python/tf/keras/models/Sequential#evaluate) method:
###Code
'''TODO: Use the evaluate method to test the model!'''
test_loss, test_acc = cnn_model.evaluate(test_images, test_labels) # TODO
print('Test accuracy:', test_acc)
###Output
_____no_output_____
###Markdown
What is the highest accuracy you're able to achieve using the CNN model, and how does the accuracy of the CNN model compare to the accuracy of the simple fully connected network? What optimizers and learning rates seem to be optimal for training the CNN model? Make predictions with the CNN modelWith the model trained, we can use it to make predictions about some images. The [`predict`](https://www.tensorflow.org/api_docs/python/tf/keras/models/Sequential#predict) function call generates the output predictions given a set of input samples.
###Code
predictions = cnn_model.predict(test_images)
###Output
_____no_output_____
###Markdown
With this function call, the model has predicted the label for each image in the testing set. Let's take a look at the prediction for the first image in the test dataset:
###Code
predictions[0]
###Output
_____no_output_____
###Markdown
As you can see, a prediction is an array of 10 numbers. Recall that the output of our model is a probability distribution over the 10 digit classes. Thus, these numbers describe the model's "confidence" that the image corresponds to each of the 10 different digits. Let's look at the digit that has the highest confidence for the first image in the test dataset:
###Code
'''TODO: identify the digit with the highest confidence prediction for the first
image in the test dataset. '''
prediction = np.argmax(predictions[0]) # TODO
print(prediction)
###Output
_____no_output_____
###Markdown
So, the model is most confident that this image is a "???". We can check the test label (remember, this is the true identity of the digit) to see if this prediction is correct:
###Code
print("Label of this digit is:", test_labels[0])
plt.imshow(test_images[0,:,:,0], cmap=plt.cm.binary)
###Output
_____no_output_____
###Markdown
It is! Let's visualize the classification results on the MNIST dataset. We will plot images from the test dataset along with their predicted label, as well as a histogram that provides the prediction probabilities for each of the digits:
###Code
#@title Change the slider to look at the model's predictions! { run: "auto" }
image_index = 79 #@param {type:"slider", min:0, max:100, step:1}
plt.subplot(1,2,1)
mdl.lab2.plot_image_prediction(image_index, predictions, test_labels, test_images)
plt.subplot(1,2,2)
mdl.lab2.plot_value_prediction(image_index, predictions, test_labels)
###Output
_____no_output_____
###Markdown
We can also plot several images along with their predictions, where correct prediction labels are blue and incorrect prediction labels are red. The number gives the percent confidence (out of 100) for the predicted label. Note the model can be very confident in an incorrect prediction!
###Code
# Plots the first X test images, their predicted label, and the true label
# Color correct predictions in blue, incorrect predictions in red
num_rows = 5
num_cols = 4
num_images = num_rows*num_cols
plt.figure(figsize=(2*2*num_cols, 2*num_rows))
for i in range(num_images):
plt.subplot(num_rows, 2*num_cols, 2*i+1)
mdl.lab2.plot_image_prediction(i, predictions, test_labels, test_images)
plt.subplot(num_rows, 2*num_cols, 2*i+2)
mdl.lab2.plot_value_prediction(i, predictions, test_labels)
###Output
_____no_output_____
###Markdown
1.4 Training the model 2.0Earlier in the lab, we used the [`fit`](https://www.tensorflow.org/api_docs/python/tf/keras/models/Sequential#fit) function call to train the model. This function is quite high-level and intuitive, which is really useful for simpler models. As you may be able to tell, this function abstracts away many details of the training call, and gives us less control over how the model is trained; that finer control can be useful in other contexts. As an alternative to this, we can use the [`tf.GradientTape`](https://www.tensorflow.org/api_docs/python/tf/GradientTape) class to record differentiation operations during training, and then call the [`tf.GradientTape.gradient`](https://www.tensorflow.org/api_docs/python/tf/GradientTape#gradient) function to actually compute the gradients. You may recall seeing this in Lab 1 Part 1, but let's take another look at this here.We'll use this framework to train our `cnn_model` using stochastic gradient descent.
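As a complementary sketch (a hedged refactor, not the lab's code), the body of the loop below can be packaged into a single reusable training-step function:
###Code
# One SGD update: forward pass under the tape, loss, gradients, apply.
def train_step(model, optimizer, images, labels):
    with tf.GradientTape() as tape:
        logits = model(images)
        loss_value = tf.keras.backend.sparse_categorical_crossentropy(labels, logits)
    grads = tape.gradient(loss_value, model.trainable_variables)
    optimizer.apply_gradients(zip(grads, model.trainable_variables))
    return loss_value
###Output
_____no_output_____
###Markdown
The loop in the next cell spells out the same steps inline.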
###Code
# Rebuild the CNN model
cnn_model = build_cnn_model()
batch_size = 12
loss_history = mdl.util.LossHistory(smoothing_factor=0.95) # to record the evolution of the loss
plotter = mdl.util.PeriodicPlotter(sec=2, xlabel='Iterations', ylabel='Loss', scale='semilogy')
optimizer = tf.keras.optimizers.SGD(learning_rate=1e-2) # define our optimizer
if hasattr(tqdm, '_instances'): tqdm._instances.clear() # clear if it exists
for idx in tqdm(range(0, train_images.shape[0], batch_size)):
# First grab a batch of training data and convert the input images to tensors
(images, labels) = (train_images[idx:idx+batch_size], train_labels[idx:idx+batch_size])
images = tf.convert_to_tensor(images, dtype=tf.float32)
# GradientTape to record differentiation operations
with tf.GradientTape() as tape:
#'''TODO: feed the images into the model and obtain the predictions'''
        logits = cnn_model(images) # TODO
        #'''TODO: compute the categorical cross entropy loss
        loss_value = tf.keras.backend.sparse_categorical_crossentropy(labels, logits) # TODO
loss_history.append(loss_value.numpy().mean()) # append the loss to the loss_history record
plotter.plot(loss_history.get())
# Backpropagation
'''TODO: Use the tape to compute the gradient against all parameters in the CNN model.
Use cnn_model.trainable_variables to access these parameters.'''
    grads = tape.gradient(loss_value, cnn_model.trainable_variables) # TODO
optimizer.apply_gradients(zip(grads, cnn_model.trainable_variables))
###Output
_____no_output_____
###Markdown
Copyright Information
###Code
# Copyright 2021 MIT 6.S191 Introduction to Deep Learning. All Rights Reserved.
#
# Licensed under the MIT License. You may not use this file except in compliance
# with the License. Use and/or modification of this code outside of 6.S191 must
# reference:
#
# © MIT 6.S191: Introduction to Deep Learning
# http://introtodeeplearning.com
#
###Output
_____no_output_____
###Markdown
Laboratory 2: Computer Vision Part 1: MNIST Digit ClassificationIn the first portion of this lab, we will build and train a convolutional neural network (CNN) for classification of handwritten digits from the famous [MNIST](http://yann.lecun.com/exdb/mnist/) dataset. The MNIST dataset consists of 60,000 training images and 10,000 test images. Our classes are the digits 0-9.First, let's download the course repository, install dependencies, and import the relevant packages we'll need for this lab.
###Code
# Import Tensorflow 2.0
%tensorflow_version 2.x
import tensorflow as tf
!pip install mitdeeplearning
import mitdeeplearning as mdl
import matplotlib.pyplot as plt
import numpy as np
import random
from tqdm import tqdm
# Check that we are using a GPU, if not switch runtimes
# using Runtime > Change Runtime Type > GPU
assert len(tf.config.list_physical_devices('GPU')) > 0
###Output
Collecting mitdeeplearning
Downloading mitdeeplearning-0.2.0.tar.gz (2.1 MB)
[K |████████████████████████████████| 2.1 MB 5.4 MB/s
[?25hRequirement already satisfied: numpy in /usr/local/lib/python3.7/dist-packages (from mitdeeplearning) (1.19.5)
Requirement already satisfied: regex in /usr/local/lib/python3.7/dist-packages (from mitdeeplearning) (2019.12.20)
Requirement already satisfied: tqdm in /usr/local/lib/python3.7/dist-packages (from mitdeeplearning) (4.62.3)
Requirement already satisfied: gym in /usr/local/lib/python3.7/dist-packages (from mitdeeplearning) (0.17.3)
Requirement already satisfied: scipy in /usr/local/lib/python3.7/dist-packages (from gym->mitdeeplearning) (1.4.1)
Requirement already satisfied: pyglet<=1.5.0,>=1.4.0 in /usr/local/lib/python3.7/dist-packages (from gym->mitdeeplearning) (1.5.0)
Requirement already satisfied: cloudpickle<1.7.0,>=1.2.0 in /usr/local/lib/python3.7/dist-packages (from gym->mitdeeplearning) (1.3.0)
Requirement already satisfied: future in /usr/local/lib/python3.7/dist-packages (from pyglet<=1.5.0,>=1.4.0->gym->mitdeeplearning) (0.16.0)
Building wheels for collected packages: mitdeeplearning
Building wheel for mitdeeplearning (setup.py) ... [?25l[?25hdone
Created wheel for mitdeeplearning: filename=mitdeeplearning-0.2.0-py3-none-any.whl size=2115442 sha256=952b91ac8700186ceb8d5ac4a3837b999aebfadff82ec1f649db538002da613f
Stored in directory: /root/.cache/pip/wheels/9a/b9/4f/99b7c8c5c75355550b83e1fcfc02956fb40c35eb01e2262877
Successfully built mitdeeplearning
Installing collected packages: mitdeeplearning
Successfully installed mitdeeplearning-0.2.0
###Markdown
1.1 MNIST dataset Let's download and load the dataset and display a few random samples from it:
###Code
mnist = tf.keras.datasets.mnist
(train_images, train_labels), (test_images, test_labels) = mnist.load_data()
train_images = (np.expand_dims(train_images, axis=-1)/255.).astype(np.float32)
train_labels = (train_labels).astype(np.int64)
test_images = (np.expand_dims(test_images, axis=-1)/255.).astype(np.float32)
test_labels = (test_labels).astype(np.int64)
###Output
Downloading data from https://storage.googleapis.com/tensorflow/tf-keras-datasets/mnist.npz
11493376/11490434 [==============================] - 0s 0us/step
11501568/11490434 [==============================] - 0s 0us/step
###Markdown
Our training set is made up of 28x28 grayscale images of handwritten digits. Let's visualize what some of these images and their corresponding training labels look like.
###Code
plt.figure(figsize=(10,10))
random_inds = np.random.choice(60000,36)
for i in range(36):
plt.subplot(6,6,i+1)
plt.xticks([])
plt.yticks([])
plt.grid(False)
image_ind = random_inds[i]
plt.imshow(np.squeeze(train_images[image_ind]), cmap=plt.cm.binary)
plt.xlabel(train_labels[image_ind])
###Output
_____no_output_____
###Markdown
1.2 Neural Network for Handwritten Digit ClassificationWe'll first build a simple neural network consisting of two fully connected layers and apply this to the digit classification task. Our network will ultimately output a probability distribution over the 10 digit classes (0-9). This first architecture we will be building is depicted below:![alt_text](https://raw.githubusercontent.com/aamini/introtodeeplearning/master/lab2/img/mnist_2layers_arch.png "CNN Architecture for MNIST Classification") Fully connected neural network architectureTo define the architecture of this first fully connected neural network, we'll once again use the Keras API and define the model using the [`Sequential`](https://www.tensorflow.org/api_docs/python/tf/keras/models/Sequential) class. Note how we first use a [`Flatten`](https://www.tensorflow.org/api_docs/python/tf/keras/layers/Flatten) layer, which flattens the input so that it can be fed into the model. In this next block, you'll define the fully connected layers of this simple network.
###Code
def build_fc_model():
fc_model = tf.keras.Sequential([
# First define a Flatten layer
tf.keras.layers.Flatten(),
# '''TODO: Define the activation function for the first fully connected (Dense) layer.'''
tf.keras.layers.Dense(128, activation= 'relu'),
# '''TODO: Define the second Dense layer to output the classification probabilities'''
tf.keras.layers.Dense(10, activation= 'softmax')
])
return fc_model
model = build_fc_model()
###Output
_____no_output_____
###Markdown
As we progress through this next portion, you may find that you'll want to make changes to the architecture defined above. **Note that in order to update the model later on, you'll need to re-run the above cell to re-initialize the model.** Let's take a step back and think about the network we've just created. The first layer in this network, `tf.keras.layers.Flatten`, transforms the format of the images from a 2d-array (28 x 28 pixels), to a 1d-array of 28 * 28 = 784 pixels. You can think of this layer as unstacking rows of pixels in the image and lining them up. There are no learned parameters in this layer; it only reformats the data.After the pixels are flattened, the network consists of a sequence of two `tf.keras.layers.Dense` layers. These are fully-connected neural layers. The first `Dense` layer has 128 nodes (or neurons). The second (and last) layer (which you've defined!) should return an array of probability scores that sum to 1. Each node contains a score that indicates the probability that the current image belongs to one of the handwritten digit classes.That defines our fully connected model! Compile the modelBefore training the model, we need to define a few more settings. These are added during the model's [`compile`](https://www.tensorflow.org/api_docs/python/tf/keras/models/Sequential#compile) step:* *Loss function* — This defines how we measure how accurate the model is during training. As was covered in lecture, during training we want to minimize this function, which will "steer" the model in the right direction.* *Optimizer* — This defines how the model is updated based on the data it sees and its loss function.* *Metrics* — Here we can define metrics used to monitor the training and testing steps. In this example, we'll look at the *accuracy*, the fraction of the images that are correctly classified.We'll start out by using a stochastic gradient descent (SGD) optimizer initialized with a learning rate of 0.1. Since we are performing a categorical classification task, we'll want to use the [cross entropy loss](https://www.tensorflow.org/api_docs/python/tf/keras/metrics/sparse_categorical_crossentropy).You'll want to experiment with both the choice of optimizer and learning rate and evaluate how these affect the accuracy of the trained model.
###Code
'''TODO: Experiment with different optimizers and learning rates. How do these affect
the accuracy of the trained model? Which optimizers and/or learning rates yield
the best performance?'''
model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=1e-1),
loss='sparse_categorical_crossentropy',
metrics=['accuracy'])
###Output
_____no_output_____
###Markdown
Train the modelWe're now ready to train our model, which will involve feeding the training data (`train_images` and `train_labels`) into the model, and then asking it to learn the associations between images and labels. We'll also need to define the batch size and the number of epochs, or iterations over the MNIST dataset, to use during training. In Lab 1, we saw how we can use `GradientTape` to optimize losses and train models with stochastic gradient descent. After defining the model settings in the `compile` step, we can also accomplish training by calling the [`fit`](https://www.tensorflow.org/api_docs/python/tf/keras/models/Sequential#fit) method on an instance of the `Model` class. We will use this to train our fully connected model.
###Code
# Define the batch size and the number of epochs to use during training
BATCH_SIZE = 64
EPOCHS = 5
model.fit(train_images, train_labels, batch_size=BATCH_SIZE, epochs=EPOCHS)
###Output
Epoch 1/5
938/938 [==============================] - 5s 3ms/step - loss: 0.3719 - accuracy: 0.8959
Epoch 2/5
938/938 [==============================] - 3s 3ms/step - loss: 0.2006 - accuracy: 0.9435
Epoch 3/5
938/938 [==============================] - 3s 3ms/step - loss: 0.1508 - accuracy: 0.9569
Epoch 4/5
938/938 [==============================] - 3s 3ms/step - loss: 0.1221 - accuracy: 0.9648
Epoch 5/5
938/938 [==============================] - 3s 3ms/step - loss: 0.1022 - accuracy: 0.9719
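###Markdown
If you also want to watch generalization during training, `fit` can hold out part of the training data as a validation set. A small sketch (assuming the model-building function and constants defined above; `val_model` is our name):
###Code
# Sketch: train a fresh copy while reporting validation metrics each epoch.
val_model = build_fc_model()
val_model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=1e-1),
                  loss='sparse_categorical_crossentropy',
                  metrics=['accuracy'])
history = val_model.fit(train_images, train_labels,
                        batch_size=BATCH_SIZE, epochs=EPOCHS,
                        validation_split=0.1)  # hold out 10% for validation
print(history.history['val_accuracy'])
###Output
_____no_output_____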
###Markdown
As the model trains, the loss and accuracy metrics are displayed. With five epochs and a learning rate of 0.1, this fully connected model should achieve an accuracy of approximately 0.97 (or 97%) on the training data. Evaluate accuracy on the test datasetNow that we've trained the model, we can ask it to make predictions about a test set that it hasn't seen before. In this example, the `test_images` array comprises our test dataset. To evaluate accuracy, we can check to see if the model's predictions match the labels from the `test_labels` array. Use the [`evaluate`](https://www.tensorflow.org/api_docs/python/tf/keras/models/Sequentialevaluate) method to evaluate the model on the test dataset!
###Code
'''TODO: Use the evaluate method to test the model!'''
test_loss, test_acc = model.evaluate(test_images, test_labels)
print('Test accuracy:', test_acc)
###Output
313/313 [==============================] - 1s 3ms/step - loss: 0.1042 - accuracy: 0.9679
Test accuracy: 0.9678999781608582
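###Markdown
As a sanity check, the accuracy `evaluate` reports can be reproduced by hand: take the argmax of each predicted distribution and compare it to the true labels. A minimal sketch, assuming `model`, `test_images`, and `test_labels` from above:
###Code
# Sketch: recompute test accuracy manually; should match model.evaluate.
probs = model.predict(test_images)          # shape (10000, 10)
pred_labels = np.argmax(probs, axis=1)      # most likely digit per image
print('Manual test accuracy:', np.mean(pred_labels == test_labels))
###Output
_____no_output_____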
###Markdown
You may observe that the accuracy on the test dataset is a little lower than the accuracy on the training dataset. This gap between training accuracy and test accuracy is an example of *overfitting*, when a machine learning model performs worse on new data than on its training data. What is the highest accuracy you can achieve with this first fully connected model? Since the handwritten digit classification task is pretty straightforward, you may be wondering how we can do better...![Deeper...](https://i.kym-cdn.com/photos/images/newsfeed/000/534/153/f87.jpg) 1.3 Convolutional Neural Network (CNN) for handwritten digit classification As we saw in lecture, convolutional neural networks (CNNs) are particularly well-suited for a variety of tasks in computer vision, and have achieved near-perfect accuracies on the MNIST dataset. We will now build a CNN composed of two convolutional layers and pooling layers, followed by two fully connected layers, and ultimately output a probability distribution over the 10 digit classes (0-9). The CNN we will be building is depicted below:![alt_text](https://raw.githubusercontent.com/aamini/introtodeeplearning/master/lab2/img/convnet_fig.png "CNN Architecture for MNIST Classification") Define the CNN modelWe'll use the same training and test datasets as before, and proceed similarly as our fully connected network to define and train our new CNN model. To do this we will explore two layers we have not encountered before: you can use [`keras.layers.Conv2D` ](https://www.tensorflow.org/api_docs/python/tf/keras/layers/Conv2D) to define convolutional layers and [`keras.layers.MaxPool2D`](https://www.tensorflow.org/api_docs/python/tf/keras/layers/MaxPool2D) to define the pooling layers. Use the parameters shown in the network architecture above to define these layers and build the CNN model.
###Code
def build_cnn_model():
cnn_model = tf.keras.Sequential([
# TODO: Define the first convolutional layer
tf.keras.layers.Conv2D(filters=24, kernel_size=(3, 3), activation=tf.nn.relu),
# TODO: Define the first max pooling layer
tf.keras.layers.MaxPool2D(pool_size=(2, 2)),
# TODO: Define the second convolutional layer
tf.keras.layers.Conv2D(filters=24, kernel_size=(3, 3), activation=tf.nn.relu),
# TODO: Define the second max pooling layer
tf.keras.layers.MaxPool2D(pool_size=(2, 2)),
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(128, activation=tf.nn.relu),
# TODO: Define the last Dense layer to output the classification
# probabilities. Pay attention to the activation needed for a probability
# output
tf.keras.layers.Dense(10, activation='softmax'),
])
return cnn_model
cnn_model = build_cnn_model()
# Initialize the model by passing some data through
cnn_model.predict(train_images[[0]])
# Print the summary of the layers in the model.
print(cnn_model.summary())
###Output
Model: "sequential_2"
_________________________________________________________________
 Layer (type)                    Output Shape              Param #
=================================================================
 conv2d (Conv2D)                 (None, 26, 26, 24)        240
 max_pooling2d (MaxPooling2D)    (None, 13, 13, 24)        0
 conv2d_1 (Conv2D)               (None, 11, 11, 24)        5208
 max_pooling2d_1 (MaxPooling2D)  (None, 5, 5, 24)          0
 flatten_2 (Flatten)             (None, 600)               0
 dense_4 (Dense)                 (None, 128)               76928
 dense_5 (Dense)                 (None, 10)                1290
=================================================================
Total params: 83,666
Trainable params: 83,666
Non-trainable params: 0
_________________________________________________________________
None
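###Markdown
The parameter counts in this summary can be verified by hand: a `Conv2D` layer has `(kernel_h * kernel_w * in_channels + 1) * filters` parameters (the `+ 1` is the bias), and a `Dense` layer has `(inputs + 1) * units`. A quick sketch of the arithmetic:
###Code
# Sketch: verify the Param # column of the summary above.
print((3 * 3 * 1 + 1) * 24)     # conv2d: 1 input channel    -> 240
print((3 * 3 * 24 + 1) * 24)    # conv2d_1: 24 input channels -> 5208
print((600 + 1) * 128)          # dense_4: 5*5*24 = 600 inputs -> 76928
print((128 + 1) * 10)           # dense_5                      -> 1290
###Output
_____no_output_____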
###Markdown
Train and test the CNN modelNow, as before, we can define the loss function, optimizer, and metrics through the `compile` method. Compile the CNN model with an optimizer and learning rate of choice:
###Code
'''TODO: Define the compile operation with your optimizer and learning rate of choice'''
cnn_model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=1e-1),
loss='sparse_categorical_crossentropy',
metrics=['accuracy'])
###Output
_____no_output_____
###Markdown
As was the case with the fully connected model, we can train our CNN using the `fit` method via the Keras API.
###Code
'''TODO: Use model.fit to train the CNN model, with the same batch_size and number of epochs previously used.'''
cnn_model.fit(train_images, train_labels, batch_size=BATCH_SIZE, epochs=EPOCHS)
###Output
1875/1875 [==============================] - 9s 4ms/step - loss: 0.1729 - accuracy: 0.9461
###Markdown
Great! Now that we've trained the model, let's evaluate it on the test dataset using the [`evaluate`](https://www.tensorflow.org/api_docs/python/tf/keras/models/Sequentialevaluate) method:
###Code
'''TODO: Use the evaluate method to test the model!'''
test_loss, test_acc = cnn_model.evaluate(test_images, test_labels)
print('Test accuracy:', test_acc)
###Output
313/313 [==============================] - 1s 4ms/step - loss: 0.0529 - accuracy: 0.9821
Test accuracy: 0.9821000099182129
###Markdown
What is the highest accuracy you're able to achieve using the CNN model, and how does the accuracy of the CNN model compare to the accuracy of the simple fully connected network? What optimizers and learning rates seem to be optimal for training the CNN model? Make predictions with the CNN modelWith the model trained, we can use it to make predictions about some images. The [`predict`](https://www.tensorflow.org/api_docs/python/tf/keras/models/Sequentialpredict) function call generates the output predictions given a set of input samples.
###Code
predictions = cnn_model.predict(test_images)
###Output
_____no_output_____
###Markdown
With this function call, the model has predicted the label for each image in the testing set. Let's take a look at the prediction for the first image in the test dataset:
###Code
predictions[0]
###Output
_____no_output_____
###Markdown
As you can see, a prediction is an array of 10 numbers. Recall that the output of our model is a probability distribution over the 10 digit classes. Thus, these numbers describe the model's "confidence" that the image corresponds to each of the 10 different digits. Let's look at the digit that has the highest confidence for the first image in the test dataset:
###Code
'''TODO: identify the digit with the highest confidence prediction for the first
image in the test dataset. '''
prediction = np.argmax(predictions[0])
print(prediction)
###Output
7
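###Markdown
Beyond the single argmax, it can be informative to look at the runner-up digits. A small sketch, assuming `predictions` from above:
###Code
# Sketch: show the model's top-3 most confident digits for the first test image.
top3 = np.argsort(predictions[0])[::-1][:3]
for digit in top3:
    print(f'digit {digit}: confidence {predictions[0][digit]:.4f}')
###Output
_____no_output_____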
###Markdown
So, the model is most confident that this image is a "???". We can check the test label (remember, this is the true identity of the digit) to see if this prediction is correct:
###Code
print("Label of this digit is:", test_labels[0])
plt.imshow(test_images[0,:,:,0], cmap=plt.cm.binary)
###Output
Label of this digit is: 7
###Markdown
It is! Let's visualize the classification results on the MNIST dataset. We will plot images from the test dataset along with their predicted label, as well as a histogram that provides the prediction probabilities for each of the digits:
###Code
#@title Change the slider to look at the model's predictions! { run: "auto" }
image_index = 79 #@param {type:"slider", min:0, max:100, step:1}
plt.subplot(1,2,1)
mdl.lab2.plot_image_prediction(image_index, predictions, test_labels, test_images)
plt.subplot(1,2,2)
mdl.lab2.plot_value_prediction(image_index, predictions, test_labels)
###Output
_____no_output_____
###Markdown
We can also plot several images along with their predictions, where correct prediction labels are blue and incorrect prediction labels are red. The number gives the percent confidence (out of 100) for the predicted label. Note the model can be very confident in an incorrect prediction!
###Code
# Plots the first X test images, their predicted label, and the true label
# Color correct predictions in blue, incorrect predictions in red
num_rows = 5
num_cols = 4
num_images = num_rows*num_cols
plt.figure(figsize=(2*2*num_cols, 2*num_rows))
for i in range(num_images):
plt.subplot(num_rows, 2*num_cols, 2*i+1)
mdl.lab2.plot_image_prediction(i, predictions, test_labels, test_images)
plt.subplot(num_rows, 2*num_cols, 2*i+2)
mdl.lab2.plot_value_prediction(i, predictions, test_labels)
###Output
_____no_output_____
###Markdown
1.4 Training the model 2.0Earlier in the lab, we used the [`fit`](https://www.tensorflow.org/api_docs/python/tf/keras/models/Sequentialfit) function call to train the model. This function is quite high-level and intuitive, which is really useful for simpler models. As you may be able to tell, this function abstracts away many details in the training call, and we give up some control over training that could be useful in other contexts. As an alternative to this, we can use the [`tf.GradientTape`](https://www.tensorflow.org/api_docs/python/tf/GradientTape) class to record differentiation operations during training, and then call the [`tf.GradientTape.gradient`](https://www.tensorflow.org/api_docs/python/tf/GradientTapegradient) function to actually compute the gradients. You may recall seeing this in Lab 1 Part 1, but let's take another look at this here.We'll use this framework to train our `cnn_model` using stochastic gradient descent.
###Code
# Rebuild the CNN model
cnn_model = build_cnn_model()
batch_size = 12
loss_history = mdl.util.LossHistory(smoothing_factor=0.95) # to record the evolution of the loss
plotter = mdl.util.PeriodicPlotter(sec=2, xlabel='Iterations', ylabel='Loss', scale='semilogy')
optimizer = tf.keras.optimizers.SGD(learning_rate=1e-2) # define our optimizer
if hasattr(tqdm, '_instances'): tqdm._instances.clear() # clear if it exists
for idx in tqdm(range(0, train_images.shape[0], batch_size)):
# First grab a batch of training data and convert the input images to tensors
(images, labels) = (train_images[idx:idx+batch_size], train_labels[idx:idx+batch_size])
images = tf.convert_to_tensor(images, dtype=tf.float32)
# GradientTape to record differentiation operations
with tf.GradientTape() as tape:
#'''TODO: feed the images into the model and obtain the predictions'''
logits = cnn_model(images)
#'''TODO: compute the categorical cross entropy loss
loss_value = tf.keras.backend.sparse_categorical_crossentropy(labels, logits)
loss_history.append(loss_value.numpy().mean()) # append the loss to the loss_history record
plotter.plot(loss_history.get())
# Backpropagation
# '''TODO: Use the tape to compute the gradient against all parameters in the CNN model.
# Use cnn_model.trainable_variables to access these parameters.'''
grads = tape.gradient(loss_value, cnn_model.trainable_variables)
optimizer.apply_gradients(zip(grads, cnn_model.trainable_variables))
###Output
_____no_output_____
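###Markdown
Since this custom loop never calls `compile`, the trained `cnn_model` has no attached metrics; its test accuracy can still be checked directly. A minimal sketch:
###Code
# Sketch: measure test accuracy of the CNN trained with the GradientTape loop.
test_probs = cnn_model.predict(test_images)
print('Test accuracy:', np.mean(np.argmax(test_probs, axis=1) == test_labels))
###Output
_____no_output_____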
###Markdown
1.5 ConclusionIn this part of the lab, you had the chance to play with different MNIST classifiers with different architectures (fully-connected layers only, CNN), and experiment with how different hyperparameters affect accuracy (learning rate, etc.). The next part of the lab explores another application of CNNs, facial detection, and some drawbacks of AI systems in real world applications, like issues of bias.
###Code
###Output
_____no_output_____
###Markdown
Copyright Information
###Code
# Copyright 2020 MIT 6.S191 Introduction to Deep Learning. All Rights Reserved.
#
# Licensed under the MIT License. You may not use this file except in compliance
# with the License. Use and/or modification of this code outside of 6.S191 must
# reference:
#
# © MIT 6.S191: Introduction to Deep Learning
# http://introtodeeplearning.com
#
###Output
_____no_output_____
###Markdown
Laboratory 2: Computer Vision Part 1: MNIST Digit ClassificationIn the first portion of this lab, we will build and train a convolutional neural network (CNN) for classification of handwritten digits from the famous [MNIST](http://yann.lecun.com/exdb/mnist/) dataset. The MNIST dataset consists of 60,000 training images and 10,000 test images. Our classes are the digits 0-9.First, let's download the course repository, install dependencies, and import the relevant packages we'll need for this lab.
###Code
# Import Tensorflow 2.0
%tensorflow_version 2.x
import tensorflow as tf
!pip install mitdeeplearning
import mitdeeplearning as mdl
import matplotlib.pyplot as plt
import numpy as np
import random
from tqdm import tqdm
# Check that we are using a GPU, if not switch runtimes
# using Runtime > Change Runtime Type > GPU
assert len(tf.config.list_physical_devices('GPU')) > 0
###Output
_____no_output_____
###Markdown
1.1 MNIST dataset Let's download and load the dataset and display a few random samples from it:
###Code
mnist = tf.keras.datasets.mnist
(train_images, train_labels), (test_images, test_labels) = mnist.load_data()
train_images = (np.expand_dims(train_images, axis=-1)/255.).astype(np.float32)
train_labels = (train_labels).astype(np.int64)
test_images = (np.expand_dims(test_images, axis=-1)/255.).astype(np.float32)
test_labels = (test_labels).astype(np.int64)
###Output
_____no_output_____
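###Markdown
It is worth confirming that the preprocessing above did what we expect: a trailing channel dimension was added and pixel values were rescaled from [0, 255] to [0, 1]. A quick sketch:
###Code
# Sketch: check shapes, dtypes, and value range after preprocessing.
print(train_images.shape, train_images.dtype)   # (60000, 28, 28, 1) float32
print(test_images.shape, test_labels.shape)     # (10000, 28, 28, 1) (10000,)
print(train_images.min(), train_images.max())   # 0.0 1.0
###Output
_____no_output_____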
###Markdown
Our training set is made up of 28x28 grayscale images of handwritten digits. Let's visualize what some of these images and their corresponding training labels look like.
###Code
plt.figure(figsize=(10,10))
random_inds = np.random.choice(60000,36)
for i in range(36):
plt.subplot(6,6,i+1)
plt.xticks([])
plt.yticks([])
plt.grid(False)
image_ind = random_inds[i]
plt.imshow(np.squeeze(train_images[image_ind]), cmap=plt.cm.binary)
plt.xlabel(train_labels[image_ind])
###Output
_____no_output_____
###Markdown
1.2 Neural Network for Handwritten Digit ClassificationWe'll first build a simple neural network consisting of two fully connected layers and apply this to the digit classification task. Our network will ultimately output a probability distribution over the 10 digit classes (0-9). This first architecture we will be building is depicted below:![alt_text](https://raw.githubusercontent.com/aamini/introtodeeplearning/master/lab2/img/mnist_2layers_arch.png "CNN Architecture for MNIST Classification") Fully connected neural network architectureTo define the architecture of this first fully connected neural network, we'll once again use the Keras API and define the model using the [`Sequential`](https://www.tensorflow.org/api_docs/python/tf/keras/models/Sequential) class. Note how we first use a [`Flatten`](https://www.tensorflow.org/api_docs/python/tf/keras/layers/Flatten) layer, which flattens the input so that it can be fed into the model. In this next block, you'll define the fully connected layers of this simple network.
###Code
def build_fc_model():
fc_model = tf.keras.Sequential([
# First define a Flatten layer
tf.keras.layers.Flatten(),
# '''TODO: Define the activation function for the first fully connected (Dense) layer.'''
tf.keras.layers.Dense(128, activation=tf.nn.relu),
# '''TODO: Define the second Dense layer to output the classification probabilities'''
tf.keras.layers.Dense(10, activation=tf.nn.softmax)
])
return fc_model
model = build_fc_model()
###Output
_____no_output_____
###Markdown
As we progress through this next portion, you may find that you'll want to make changes to the architecture defined above. **Note that in order to update the model later on, you'll need to re-run the above cell to re-initialize the model.** Let's take a step back and think about the network we've just created. The first layer in this network, `tf.keras.layers.Flatten`, transforms the format of the images from a 2d-array (28 x 28 pixels), to a 1d-array of 28 * 28 = 784 pixels. You can think of this layer as unstacking rows of pixels in the image and lining them up. There are no learned parameters in this layer; it only reformats the data.After the pixels are flattened, the network consists of a sequence of two `tf.keras.layers.Dense` layers. These are fully-connected neural layers. The first `Dense` layer has 128 nodes (or neurons). The second (and last) layer (which you've defined!) should return an array of probability scores that sum to 1. Each node contains a score that indicates the probability that the current image belongs to one of the handwritten digit classes.That defines our fully connected model! Compile the modelBefore training the model, we need to define a few more settings. These are added during the model's [`compile`](https://www.tensorflow.org/api_docs/python/tf/keras/models/Sequentialcompile) step:* *Loss function* — This defines how we measure how accurate the model is during training. As was covered in lecture, during training we want to minimize this function, which will "steer" the model in the right direction.* *Optimizer* — This defines how the model is updated based on the data it sees and its loss function.* *Metrics* — Here we can define metrics used to monitor the training and testing steps. In this example, we'll look at the *accuracy*, the fraction of the images that are correctly classified.We'll start out by using a stochastic gradient descent (SGD) optimizer initialized with a learning rate of 0.1. Since we are performing a categorical classification task, we'll want to use the [cross entropy loss](https://www.tensorflow.org/api_docs/python/tf/keras/metrics/sparse_categorical_crossentropy).You'll want to experiment with both the choice of optimizer and learning rate and evaluate how these affect the accuracy of the trained model.
###Code
'''TODO: Experiment with different optimizers and learning rates. How do these affect
the accuracy of the trained model? Which optimizers and/or learning rates yield
the best performance?'''
model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=1e-1),
loss='sparse_categorical_crossentropy',
metrics=['accuracy'])
###Output
_____no_output_____
###Markdown
Train the modelWe're now ready to train our model, which will involve feeding the training data (`train_images` and `train_labels`) into the model, and then asking it to learn the associations between images and labels. We'll also need to define the batch size and the number of epochs, or iterations over the MNIST dataset, to use during training. In Lab 1, we saw how we can use `GradientTape` to optimize losses and train models with stochastic gradient descent. After defining the model settings in the `compile` step, we can also accomplish training by calling the [`fit`](https://www.tensorflow.org/api_docs/python/tf/keras/models/Sequentialfit) method on an instance of the `Model` class. We will use this to train our fully connected model
###Code
# Define the batch size and the number of epochs to use during training
BATCH_SIZE = 64
EPOCHS = 5
model.fit(train_images, train_labels, batch_size=BATCH_SIZE, epochs=EPOCHS)
###Output
_____no_output_____
###Markdown
As the model trains, the loss and accuracy metrics are displayed. With five epochs and a learning rate of 0.1, this fully connected model should achieve an accuracy of approximately 0.97 (or 97%) on the training data. Evaluate accuracy on the test datasetNow that we've trained the model, we can ask it to make predictions about a test set that it hasn't seen before. In this example, the `test_images` array comprises our test dataset. To evaluate accuracy, we can check to see if the model's predictions match the labels from the `test_labels` array. Use the [`evaluate`](https://www.tensorflow.org/api_docs/python/tf/keras/models/Sequentialevaluate) method to evaluate the model on the test dataset!
###Code
'''TODO: Use the evaluate method to test the model!'''
test_loss, test_acc = model.evaluate(test_images, test_labels)
print('Test accuracy:', test_acc)
###Output
_____no_output_____
###Markdown
You may observe that the accuracy on the test dataset is a little lower than the accuracy on the training dataset. This gap between training accuracy and test accuracy is an example of *overfitting*, when a machine learning model performs worse on new data than on its training data. What is the highest accuracy you can achieve with this first fully connected model? Since the handwritten digit classification task is pretty straightforward, you may be wondering how we can do better...![Deeper...](https://i.kym-cdn.com/photos/images/newsfeed/000/534/153/f87.jpg) 1.3 Convolutional Neural Network (CNN) for handwritten digit classification As we saw in lecture, convolutional neural networks (CNNs) are particularly well-suited for a variety of tasks in computer vision, and have achieved near-perfect accuracies on the MNIST dataset. We will now build a CNN composed of two convolutional layers and pooling layers, followed by two fully connected layers, and ultimately output a probability distribution over the 10 digit classes (0-9). The CNN we will be building is depicted below:![alt_text](https://raw.githubusercontent.com/aamini/introtodeeplearning/master/lab2/img/convnet_fig.png "CNN Architecture for MNIST Classification") Define the CNN modelWe'll use the same training and test datasets as before, and proceed similarly as our fully connected network to define and train our new CNN model. To do this we will explore two layers we have not encountered before: you can use [`keras.layers.Conv2D` ](https://www.tensorflow.org/api_docs/python/tf/keras/layers/Conv2D) to define convolutional layers and [`keras.layers.MaxPool2D`](https://www.tensorflow.org/api_docs/python/tf/keras/layers/MaxPool2D) to define the pooling layers. Use the parameters shown in the network architecture above to define these layers and build the CNN model.
###Code
def build_cnn_model():
cnn_model = tf.keras.Sequential([
# TODO: Define the first convolutional layer
tf.keras.layers.Conv2D(filters=24, kernel_size=(3, 3), activation=tf.nn.relu),
# TODO: Define the first max pooling layer
tf.keras.layers.MaxPool2D(pool_size=(2, 2)),
# TODO: Define the second convolutional layer
tf.keras.layers.Conv2D(filters=36, kernel_size=(3, 3), activation=tf.nn.relu),
# TODO: Define the second max pooling layer
tf.keras.layers.MaxPool2D(pool_size=(2, 2)),
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(128, activation=tf.nn.relu),
# TODO: Define the last Dense layer to output the classification
# probabilities. Pay attention to the activation needed for a probability
# output
tf.keras.layers.Dense(10, activation=tf.nn.softmax)
])
return cnn_model
cnn_model = build_cnn_model()
# Initialize the model by passing some data through
cnn_model.predict(train_images[[0]])
# Print the summary of the layers in the model.
print(cnn_model.summary())
###Output
_____no_output_____
###Markdown
Train and test the CNN modelNow, as before, we can define the loss function, optimizer, and metrics through the `compile` method. Compile the CNN model with an optimizer and learning rate of choice:
###Code
'''TODO: Define the compile operation with your optimizer and learning rate of choice'''
cnn_model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=1e-1), loss='sparse_categorical_crossentropy', metrics=['accuracy'])
###Output
_____no_output_____
###Markdown
As was the case with the fully connected model, we can train our CNN using the `fit` method via the Keras API.
###Code
'''TODO: Use model.fit to train the CNN model, with the same batch_size and number of epochs previously used.'''
cnn_model.fit(train_images, train_labels, batch_size=BATCH_SIZE, epochs=EPOCHS)
###Output
_____no_output_____
###Markdown
Great! Now that we've trained the model, let's evaluate it on the test dataset using the [`evaluate`](https://www.tensorflow.org/api_docs/python/tf/keras/models/Sequentialevaluate) method:
###Code
'''TODO: Use the evaluate method to test the model!'''
test_loss, test_acc = cnn_model.evaluate(test_images, test_labels)
print('Test accuracy:', test_acc)
###Output
_____no_output_____
###Markdown
What is the highest accuracy you're able to achieve using the CNN model, and how does the accuracy of the CNN model compare to the accuracy of the simple fully connected network? What optimizers and learning rates seem to be optimal for training the CNN model? Make predictions with the CNN modelWith the model trained, we can use it to make predictions about some images. The [`predict`](https://www.tensorflow.org/api_docs/python/tf/keras/models/Sequentialpredict) function call generates the output predictions given a set of input samples.
###Code
predictions = cnn_model.predict(test_images)
###Output
_____no_output_____
###Markdown
With this function call, the model has predicted the label for each image in the testing set. Let's take a look at the prediction for the first image in the test dataset:
###Code
predictions[0]
###Output
_____no_output_____
###Markdown
As you can see, a prediction is an array of 10 numbers. Recall that the output of our model is a probability distribution over the 10 digit classes. Thus, these numbers describe the model's "confidence" that the image corresponds to each of the 10 different digits. Let's look at the digit that has the highest confidence for the first image in the test dataset:
###Code
'''TODO: identify the digit with the highest confidence prediction for the first
image in the test dataset. '''
prediction = np.argmax(predictions[0])
print(prediction)
###Output
_____no_output_____
###Markdown
So, the model is most confident that this image is a "???". We can check the test label (remember, this is the true identity of the digit) to see if this prediction is correct:
###Code
print("Label of this digit is:", test_labels[0])
plt.imshow(test_images[0,:,:,0], cmap=plt.cm.binary)
###Output
_____no_output_____
###Markdown
It is! Let's visualize the classification results on the MNIST dataset. We will plot images from the test dataset along with their predicted label, as well as a histogram that provides the prediction probabilities for each of the digits:
###Code
#@title Change the slider to look at the model's predictions! { run: "auto" }
image_index = 79 #@param {type:"slider", min:0, max:100, step:1}
plt.subplot(1,2,1)
mdl.lab2.plot_image_prediction(image_index, predictions, test_labels, test_images)
plt.subplot(1,2,2)
mdl.lab2.plot_value_prediction(image_index, predictions, test_labels)
###Output
_____no_output_____
###Markdown
We can also plot several images along with their predictions, where correct prediction labels are blue and incorrect prediction labels are red. The number gives the percent confidence (out of 100) for the predicted label. Note the model can be very confident in an incorrect prediction!
###Code
# Plots the first X test images, their predicted label, and the true label
# Color correct predictions in blue, incorrect predictions in red
num_rows = 5
num_cols = 4
num_images = num_rows*num_cols
plt.figure(figsize=(2*2*num_cols, 2*num_rows))
for i in range(num_images):
plt.subplot(num_rows, 2*num_cols, 2*i+1)
mdl.lab2.plot_image_prediction(i, predictions, test_labels, test_images)
plt.subplot(num_rows, 2*num_cols, 2*i+2)
mdl.lab2.plot_value_prediction(i, predictions, test_labels)
###Output
_____no_output_____
###Markdown
1.4 Training the model 2.0Earlier in the lab, we used the [`fit`](https://www.tensorflow.org/api_docs/python/tf/keras/models/Sequentialfit) function call to train the model. This function is quite high-level and intuitive, which is really useful for simpler models. As you may be able to tell, this function abstracts away many details in the training call, and we give up some control over training that could be useful in other contexts. As an alternative to this, we can use the [`tf.GradientTape`](https://www.tensorflow.org/api_docs/python/tf/GradientTape) class to record differentiation operations during training, and then call the [`tf.GradientTape.gradient`](https://www.tensorflow.org/api_docs/python/tf/GradientTapegradient) function to actually compute the gradients. You may recall seeing this in Lab 1 Part 1, but let's take another look at this here.We'll use this framework to train our `cnn_model` using stochastic gradient descent.
###Code
# Rebuild the CNN model
cnn_model = build_cnn_model()
batch_size = 12
loss_history = mdl.util.LossHistory(smoothing_factor=0.95) # to record the evolution of the loss
plotter = mdl.util.PeriodicPlotter(sec=2, xlabel='Iterations', ylabel='Loss', scale='semilogy')
optimizer = tf.keras.optimizers.SGD(learning_rate=1e-2) # define our optimizer
if hasattr(tqdm, '_instances'): tqdm._instances.clear() # clear if it exists
for idx in tqdm(range(0, train_images.shape[0], batch_size)):
# First grab a batch of training data and convert the input images to tensors
(images, labels) = (train_images[idx:idx+batch_size], train_labels[idx:idx+batch_size])
images = tf.convert_to_tensor(images, dtype=tf.float32)
# GradientTape to record differentiation operations
with tf.GradientTape() as tape:
#'''TODO: feed the images into the model and obtain the predictions'''
logits = cnn_model(images)
#'''TODO: compute the categorical cross entropy loss'''
loss_value = tf.keras.backend.sparse_categorical_crossentropy(labels, logits)
loss_history.append(loss_value.numpy().mean()) # append the loss to the loss_history record
plotter.plot(loss_history.get())
# Backpropagation
# '''TODO: Use the tape to compute the gradient against all parameters in the CNN model.
# Use cnn_model.trainable_variables to access these parameters.'''
grads = tape.gradient(loss_value, cnn_model.trainable_variables)
optimizer.apply_gradients(zip(grads, cnn_model.trainable_variables))
###Output
_____no_output_____
###Markdown
Copyright Information
###Code
# Copyright 2020 MIT 6.S191 Introduction to Deep Learning. All Rights Reserved.
#
# Licensed under the MIT License. You may not use this file except in compliance
# with the License. Use and/or modification of this code outside of 6.S191 must
# reference:
#
# © MIT 6.S191: Introduction to Deep Learning
# http://introtodeeplearning.com
#
###Output
_____no_output_____
###Markdown
Laboratory 2: Computer Vision Part 1: MNIST Digit ClassificationIn the first portion of this lab, we will build and train a convolutional neural network (CNN) for classification of handwritten digits from the famous [MNIST](http://yann.lecun.com/exdb/mnist/) dataset. The MNIST dataset consists of 60,000 training images and 10,000 test images. Our classes are the digits 0-9.First, let's download the course repository, install dependencies, and import the relevant packages we'll need for this lab.
###Code
# Import Tensorflow 2.0
#%tensorflow_version 2.x
import tensorflow as tf
#!pip install mitdeeplearning
import mitdeeplearning as mdl
import matplotlib.pyplot as plt
import numpy as np
import random
from tqdm import tqdm
# Check that we are using a GPU, if not switch runtimes
# using Runtime > Change Runtime Type > GPU
assert len(tf.config.list_physical_devices('GPU')) > 0
###Output
_____no_output_____
###Markdown
1.1 MNIST dataset Let's download and load the dataset and display a few random samples from it:
###Code
mnist = tf.keras.datasets.mnist
(train_images, train_labels), (test_images, test_labels) = mnist.load_data()
train_images = (np.expand_dims(train_images, axis=-1)/255.).astype(np.float32)
train_labels = (train_labels).astype(np.int64)
test_images = (np.expand_dims(test_images, axis=-1)/255.).astype(np.float32)
test_labels = (test_labels).astype(np.int64)
###Output
Downloading data from https://storage.googleapis.com/tensorflow/tf-keras-datasets/mnist.npz
11493376/11490434 [==============================] - 2s 0us/step
###Markdown
Our training set is made up of 28x28 grayscale images of handwritten digits. Let's visualize what some of these images and their corresponding training labels look like.
###Code
plt.figure(figsize=(10,10))
random_inds = np.random.choice(60000,36)
for i in range(36):
plt.subplot(6,6,i+1)
plt.xticks([])
plt.yticks([])
plt.grid(False)
image_ind = random_inds[i]
plt.imshow(np.squeeze(train_images[image_ind]), cmap=plt.cm.binary)
plt.xlabel(train_labels[image_ind])
###Output
_____no_output_____
###Markdown
1.2 Neural Network for Handwritten Digit ClassificationWe'll first build a simple neural network consisting of two fully connected layers and apply this to the digit classification task. Our network will ultimately output a probability distribution over the 10 digit classes (0-9). This first architecture we will be building is depicted below:![alt_text](https://raw.githubusercontent.com/aamini/introtodeeplearning/master/lab2/img/mnist_2layers_arch.png "CNN Architecture for MNIST Classification") Fully connected neural network architectureTo define the architecture of this first fully connected neural network, we'll once again use the Keras API and define the model using the [`Sequential`](https://www.tensorflow.org/api_docs/python/tf/keras/models/Sequential) class. Note how we first use a [`Flatten`](https://www.tensorflow.org/api_docs/python/tf/keras/layers/Flatten) layer, which flattens the input so that it can be fed into the model. In this next block, you'll define the fully connected layers of this simple network.
###Code
def build_fc_model():
fc_model = tf.keras.Sequential([
# First define a Flatten layer
tf.keras.layers.Flatten(),
# '''TODO: Define the activation function for the first fully connected (Dense) layer.'''
tf.keras.layers.Dense(128, activation=tf.nn.relu),
# '''TODO: Define the second Dense layer to output the classification probabilities'''
# '''TODO: Dense layer to output classification probabilities'''
tf.keras.layers.Dense(10, activation=tf.nn.softmax)
])
return fc_model
model = build_fc_model()
###Output
_____no_output_____
###Markdown
As we progress through this next portion, you may find that you'll want to make changes to the architecture defined above. **Note that in order to update the model later on, you'll need to re-run the above cell to re-initialize the model.** Let's take a step back and think about the network we've just created. The first layer in this network, `tf.keras.layers.Flatten`, transforms the format of the images from a 2d-array (28 x 28 pixels), to a 1d-array of 28 * 28 = 784 pixels. You can think of this layer as unstacking rows of pixels in the image and lining them up. There are no learned parameters in this layer; it only reformats the data.After the pixels are flattened, the network consists of a sequence of two `tf.keras.layers.Dense` layers. These are fully-connected neural layers. The first `Dense` layer has 128 nodes (or neurons). The second (and last) layer (which you've defined!) should return an array of probability scores that sum to 1. Each node contains a score that indicates the probability that the current image belongs to one of the handwritten digit classes.That defines our fully connected model! Compile the modelBefore training the model, we need to define a few more settings. These are added during the model's [`compile`](https://www.tensorflow.org/api_docs/python/tf/keras/models/Sequentialcompile) step:* *Loss function* — This defines how we measure how accurate the model is during training. As was covered in lecture, during training we want to minimize this function, which will "steer" the model in the right direction.* *Optimizer* — This defines how the model is updated based on the data it sees and its loss function.* *Metrics* — Here we can define metrics used to monitor the training and testing steps. In this example, we'll look at the *accuracy*, the fraction of the images that are correctly classified.We'll start out by using a stochastic gradient descent (SGD) optimizer initialized with a learning rate of 0.1. Since we are performing a categorical classification task, we'll want to use the [cross entropy loss](https://www.tensorflow.org/api_docs/python/tf/keras/metrics/sparse_categorical_crossentropy).You'll want to experiment with both the choice of optimizer and learning rate and evaluate how these affect the accuracy of the trained model.
###Code
'''TODO: Experiment with different optimizers and learning rates. How do these affect
the accuracy of the trained model? Which optimizers and/or learning rates yield
the best performance?'''
model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=1e-1),
loss='sparse_categorical_crossentropy',
metrics=['accuracy'])
###Output
_____no_output_____
###Markdown
Train the modelWe're now ready to train our model, which will involve feeding the training data (`train_images` and `train_labels`) into the model, and then asking it to learn the associations between images and labels. We'll also need to define the batch size and the number of epochs, or iterations over the MNIST dataset, to use during training. In Lab 1, we saw how we can use `GradientTape` to optimize losses and train models with stochastic gradient descent. After defining the model settings in the `compile` step, we can also accomplish training by calling the [`fit`](https://www.tensorflow.org/api_docs/python/tf/keras/models/Sequentialfit) method on an instance of the `Model` class. We will use this to train our fully connected model
###Code
# Define the batch size and the number of epochs to use during training
BATCH_SIZE = 64
EPOCHS = 5
model.fit(train_images, train_labels, batch_size=BATCH_SIZE, epochs=EPOCHS)
###Output
Epoch 1/5
938/938 [==============================] - 1s 1ms/step - loss: 0.3685 - accuracy: 0.8964
Epoch 2/5
938/938 [==============================] - 1s 1ms/step - loss: 0.2011 - accuracy: 0.9425
Epoch 3/5
938/938 [==============================] - 1s 1ms/step - loss: 0.1516 - accuracy: 0.9571
Epoch 4/5
938/938 [==============================] - 1s 1ms/step - loss: 0.1231 - accuracy: 0.9654
Epoch 5/5
938/938 [==============================] - 1s 1ms/step - loss: 0.1038 - accuracy: 0.9709
###Markdown
As the model trains, the loss and accuracy metrics are displayed. With five epochs and a learning rate of 0.1, this fully connected model should achieve an accuracy of approximately 0.97 (or 97%) on the training data. Evaluate accuracy on the test datasetNow that we've trained the model, we can ask it to make predictions about a test set that it hasn't seen before. In this example, the `test_images` array comprises our test dataset. To evaluate accuracy, we can check to see if the model's predictions match the labels from the `test_labels` array. Use the [`evaluate`](https://www.tensorflow.org/api_docs/python/tf/keras/models/Sequentialevaluate) method to evaluate the model on the test dataset!
###Code
'''TODO: Use the evaluate method to test the model!'''
test_loss, test_acc = model.evaluate(test_images, test_labels) # TODO
print('Test accuracy:', test_acc)
###Output
313/313 [==============================] - 0s 1ms/step - loss: 0.1053 - accuracy: 0.9692
Test accuracy: 0.9692000150680542
###Markdown
You may observe that the accuracy on the test dataset is a little lower than the accuracy on the training dataset. This gap between training accuracy and test accuracy is an example of *overfitting*, when a machine learning model performs worse on new data than on its training data. What is the highest accuracy you can achieve with this first fully connected model? Since the handwritten digit classification task is pretty straightforward, you may be wondering how we can do better...![Deeper...](https://i.kym-cdn.com/photos/images/newsfeed/000/534/153/f87.jpg) 1.3 Convolutional Neural Network (CNN) for handwritten digit classification As we saw in lecture, convolutional neural networks (CNNs) are particularly well-suited for a variety of tasks in computer vision, and have achieved near-perfect accuracies on the MNIST dataset. We will now build a CNN composed of two convolutional layers and pooling layers, followed by two fully connected layers, and ultimately output a probability distribution over the 10 digit classes (0-9). The CNN we will be building is depicted below:![alt_text](https://raw.githubusercontent.com/aamini/introtodeeplearning/master/lab2/img/convnet_fig.png "CNN Architecture for MNIST Classification") Define the CNN modelWe'll use the same training and test datasets as before, and proceed similarly as our fully connected network to define and train our new CNN model. To do this we will explore two layers we have not encountered before: you can use [`keras.layers.Conv2D` ](https://www.tensorflow.org/api_docs/python/tf/keras/layers/Conv2D) to define convolutional layers and [`keras.layers.MaxPool2D`](https://www.tensorflow.org/api_docs/python/tf/keras/layers/MaxPool2D) to define the pooling layers. Use the parameters shown in the network architecture above to define these layers and build the CNN model.
###Code
def build_cnn_model():
cnn_model = tf.keras.Sequential([
# TODO: Define the first convolutional layer
tf.keras.layers.Conv2D(filters=24, kernel_size=(3, 3), activation=tf.nn.relu),
# TODO: Define the first max pooling layer
tf.keras.layers.MaxPool2D(pool_size=(2, 2)),
# TODO: Define the second convolutional layer
tf.keras.layers.Conv2D(filters=36, kernel_size=(3, 3), activation=tf.nn.relu),
# TODO: Define the second max pooling layer
tf.keras.layers.MaxPool2D(pool_size=(2, 2)),
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(128, activation=tf.nn.relu),
# TODO: Define the last Dense layer to output the classification
# probabilities. Pay attention to the activation needed for a probability
# output
# '''TODO: Dense layer to output classification probabilities'''
tf.keras.layers.Dense(10, activation=tf.nn.softmax)
])
return cnn_model
cnn_model = build_cnn_model()
# Initialize the model by passing some data through
cnn_model.predict(train_images[[0]])
# Print the summary of the layers in the model.
print(cnn_model.summary())
###Output
Model: "sequential_3"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
conv2d (Conv2D) (None, 26, 26, 24) 240
_________________________________________________________________
max_pooling2d (MaxPooling2D) (None, 13, 13, 24) 0
_________________________________________________________________
conv2d_1 (Conv2D) (None, 11, 11, 36) 7812
_________________________________________________________________
max_pooling2d_1 (MaxPooling2 (None, 5, 5, 36) 0
_________________________________________________________________
flatten_5 (Flatten) (None, 900) 0
_________________________________________________________________
dense_7 (Dense) (None, 128) 115328
_________________________________________________________________
dense_8 (Dense) (None, 10) 1290
=================================================================
Total params: 124,670
Trainable params: 124,670
Non-trainable params: 0
_________________________________________________________________
None
###Markdown
Train and test the CNN modelNow, as before, we can define the loss function, optimizer, and metrics through the `compile` method. Compile the CNN model with an optimizer and learning rate of choice:
###Code
'''TODO: Define the compile operation with your optimizer and learning rate of choice'''
cnn_model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-4), loss='sparse_categorical_crossentropy', metrics=['accuracy']) # TODO
###Output
_____no_output_____
###Markdown
As was the case with the fully connected model, we can train our CNN using the `fit` method via the Keras API.
###Code
'''TODO: Use model.fit to train the CNN model, with the same batch_size and number of epochs previously used.'''
cnn_model.fit(train_images, train_labels, batch_size=BATCH_SIZE, epochs=EPOCHS)
###Output
Epoch 1/5
938/938 [==============================] - 4s 5ms/step - loss: 0.0242 - accuracy: 0.9930
Epoch 2/5
938/938 [==============================] - 4s 5ms/step - loss: 0.0170 - accuracy: 0.9948
Epoch 3/5
938/938 [==============================] - 4s 5ms/step - loss: 0.0136 - accuracy: 0.9956
Epoch 4/5
938/938 [==============================] - 4s 5ms/step - loss: 0.0115 - accuracy: 0.9963
Epoch 5/5
938/938 [==============================] - 4s 5ms/step - loss: 0.0099 - accuracy: 0.9967
###Markdown
Great! Now that we've trained the model, let's evaluate it on the test dataset using the [`evaluate`](https://www.tensorflow.org/api_docs/python/tf/keras/models/Sequentialevaluate) method:
###Code
'''TODO: Use the evaluate method to test the model!'''
test_loss, test_acc = cnn_model.evaluate(test_images, test_labels) # TODO
print('Test accuracy:', test_acc)
###Output
313/313 [==============================] - 1s 2ms/step - loss: 0.0528 - accuracy: 0.9878
Test accuracy: 0.9878000020980835
###Markdown
What is the highest accuracy you're able to achieve using the CNN model, and how does the accuracy of the CNN model compare to the accuracy of the simple fully connected network? What optimizers and learning rates seem to be optimal for training the CNN model? Make predictions with the CNN modelWith the model trained, we can use it to make predictions about some images. The [`predict`](https://www.tensorflow.org/api_docs/python/tf/keras/models/Sequentialpredict) function call generates the output predictions given a set of input samples.
###Code
predictions = cnn_model.predict(test_images)
###Output
_____no_output_____
###Markdown
With this function call, the model has predicted the label for each image in the testing set. Let's take a look at the prediction for the first image in the test dataset:
###Code
predictions[0]
###Output
_____no_output_____
###Markdown
As you can see, a prediction is an array of 10 numbers. Recall that the output of our model is a probability distribution over the 10 digit classes. Thus, these numbers describe the model's "confidence" that the image corresponds to each of the 10 different digits. Let's look at the digit that has the highest confidence for the first image in the test dataset:
###Code
'''TODO: identify the digit with the highest confidence prediction for the first
image in the test dataset. '''
prediction = np.argmax(predictions[0]) # TODO
print(prediction)
###Output
7
###Markdown
So, the model is most confident that this image is a "???". We can check the test label (remember, this is the true identity of the digit) to see if this prediction is correct:
###Code
print("Label of this digit is:", test_labels[0])
plt.imshow(test_images[0,:,:,0], cmap=plt.cm.binary)
###Output
Label of this digit is: 7
###Markdown
It is! Let's visualize the classification results on the MNIST dataset. We will plot images from the test dataset along with their predicted label, as well as a histogram that provides the prediction probabilities for each of the digits:
###Code
#@title Change the slider to look at the model's predictions! { run: "auto" }
image_index = 79 #@param {type:"slider", min:0, max:100, step:1}
plt.subplot(1,2,1)
mdl.lab2.plot_image_prediction(image_index, predictions, test_labels, test_images)
plt.subplot(1,2,2)
mdl.lab2.plot_value_prediction(image_index, predictions, test_labels)
###Output
_____no_output_____
###Markdown
We can also plot several images along with their predictions, where correct prediction labels are blue and incorrect prediction labels are red. The number gives the percent confidence (out of 100) for the predicted label. Note the model can be very confident in an incorrect prediction!
###Code
# Plots the first X test images, their predicted label, and the true label
# Color correct predictions in blue, incorrect predictions in red
num_rows = 5
num_cols = 4
num_images = num_rows*num_cols
plt.figure(figsize=(2*2*num_cols, 2*num_rows))
for i in range(num_images):
plt.subplot(num_rows, 2*num_cols, 2*i+1)
mdl.lab2.plot_image_prediction(i, predictions, test_labels, test_images)
plt.subplot(num_rows, 2*num_cols, 2*i+2)
mdl.lab2.plot_value_prediction(i, predictions, test_labels)
###Output
_____no_output_____
###Markdown
1.4 Training the model 2.0Earlier in the lab, we used the [`fit`](https://www.tensorflow.org/api_docs/python/tf/keras/models/Sequentialfit) function call to train the model. This function is quite high-level and intuitive, which is really useful for simpler models. As you may be able to tell, this function abstracts away many details in the training call, and we give up some control over training that could be useful in other contexts. As an alternative to this, we can use the [`tf.GradientTape`](https://www.tensorflow.org/api_docs/python/tf/GradientTape) class to record differentiation operations during training, and then call the [`tf.GradientTape.gradient`](https://www.tensorflow.org/api_docs/python/tf/GradientTapegradient) function to actually compute the gradients. You may recall seeing this in Lab 1 Part 1, but let's take another look at this here.We'll use this framework to train our `cnn_model` using stochastic gradient descent.
###Code
# Rebuild the CNN model
cnn_model = build_cnn_model()
batch_size = 12
loss_history = mdl.util.LossHistory(smoothing_factor=0.95) # to record the evolution of the loss
plotter = mdl.util.PeriodicPlotter(sec=2, xlabel='Iterations', ylabel='Loss', scale='semilogy')
optimizer = tf.keras.optimizers.SGD(learning_rate=1e-2) # define our optimizer
if hasattr(tqdm, '_instances'): tqdm._instances.clear() # clear if it exists
for idx in tqdm(range(0, train_images.shape[0], batch_size)):
# First grab a batch of training data and convert the input images to tensors
(images, labels) = (train_images[idx:idx+batch_size], train_labels[idx:idx+batch_size])
images = tf.convert_to_tensor(images, dtype=tf.float32)
# GradientTape to record differentiation operations
with tf.GradientTape() as tape:
#'''TODO: feed the images into the model and obtain the predictions'''
logits = cnn_model(images) # TODO
#'''TODO: compute the categorical cross entropy loss
loss_value = tf.keras.backend.sparse_categorical_crossentropy(labels, logits) # TODO
loss_history.append(loss_value.numpy().mean()) # append the loss to the loss_history record
plotter.plot(loss_history.get())
# Backpropagation
'''TODO: Use the tape to compute the gradient against all parameters in the CNN model.
Use cnn_model.trainable_variables to access these parameters.'''
grads = tape.gradient(loss_value, cnn_model.trainable_variables) # TODO
optimizer.apply_gradients(zip(grads, cnn_model.trainable_variables))
###Output
_____no_output_____
###Markdown
Copyright Information
###Code
# Copyright 2020 MIT 6.S191 Introduction to Deep Learning. All Rights Reserved.
#
# Licensed under the MIT License. You may not use this file except in compliance
# with the License. Use and/or modification of this code outside of 6.S191 must
# reference:
#
# © MIT 6.S191: Introduction to Deep Learning
# http://introtodeeplearning.com
#
###Output
_____no_output_____
###Markdown
Laboratory 2: Computer Vision Part 1: MNIST Digit ClassificationIn the first portion of this lab, we will build and train a convolutional neural network (CNN) for classification of handwritten digits from the famous [MNIST](http://yann.lecun.com/exdb/mnist/) dataset. The MNIST dataset consists of 60,000 training images and 10,000 test images. Our classes are the digits 0-9.First, let's download the course repository, install dependencies, and import the relevant packages we'll need for this lab.
###Code
# Import Tensorflow 2.0
%tensorflow_version 2.x
import tensorflow as tf
!pip install mitdeeplearning
import mitdeeplearning as mdl
import matplotlib.pyplot as plt
import numpy as np
import random
from tqdm import tqdm
# Check that we are using a GPU, if not switch runtimes
# using Runtime > Change Runtime Type > GPU
assert len(tf.config.list_physical_devices('GPU')) > 0
###Output
_____no_output_____
###Markdown
1.1 MNIST dataset Let's download and load the dataset and display a few random samples from it:
###Code
mnist = tf.keras.datasets.mnist
(train_images, train_labels), (test_images, test_labels) = mnist.load_data()
train_images = (np.expand_dims(train_images, axis=-1)/255.).astype(np.float32)
train_labels = (train_labels).astype(np.int64)
test_images = (np.expand_dims(test_images, axis=-1)/255.).astype(np.float32)
test_labels = (test_labels).astype(np.int64)
###Output
_____no_output_____
###Markdown
Our training set is made up of 28x28 grayscale images of handwritten digits. Let's visualize what some of these images and their corresponding training labels look like.
###Code
plt.figure(figsize=(10,10))
random_inds = np.random.choice(60000,36)
for i in range(36):
plt.subplot(6,6,i+1)
plt.xticks([])
plt.yticks([])
plt.grid(False)
image_ind = random_inds[i]
plt.imshow(np.squeeze(train_images[image_ind]), cmap=plt.cm.binary)
plt.xlabel(train_labels[image_ind])
###Output
_____no_output_____
###Markdown
1.2 Neural Network for Handwritten Digit ClassificationWe'll first build a simple neural network consisting of two fully connected layers and apply this to the digit classification task. Our network will ultimately output a probability distribution over the 10 digit classes (0-9). This first architecture we will be building is depicted below:![alt_text](https://raw.githubusercontent.com/aamini/introtodeeplearning_labs/master/lab2/img/mnist_2layers_arch.png "CNN Architecture for MNIST Classification") Fully connected neural network architectureTo define the architecture of this first fully connected neural network, we'll once again use the Keras API and define the model using the [`Sequential`](https://www.tensorflow.org/api_docs/python/tf/keras/models/Sequential) class. Note how we first use a [`Flatten`](https://www.tensorflow.org/api_docs/python/tf/keras/layers/Flatten) layer, which flattens the input so that it can be fed into the model. In this next block, you'll define the fully connected layers of this simple network.
###Code
def build_fc_model():
fc_model = tf.keras.Sequential([
# First define a Flatten layer
tf.keras.layers.Flatten(),
# '''TODO: Define the activation function for the first fully connected (Dense) layer.'''
tf.keras.layers.Dense(128, activation=tf.nn.relu),
# '''TODO: Define the second Dense layer to output the classification probabilities'''
tf.keras.layers.Dense(10, activation=tf.nn.softmax)
])
return fc_model
model = build_fc_model()
###Output
_____no_output_____
###Markdown
As we progress through this next portion, you may find that you'll want to make changes to the architecture defined above. **Note that in order to update the model later on, you'll need to re-run the above cell to re-initialize the model.** Let's take a step back and think about the network we've just created. The first layer in this network, `tf.keras.layers.Flatten`, transforms the format of the images from a 2d-array (28 x 28 pixels), to a 1d-array of 28 * 28 = 784 pixels. You can think of this layer as unstacking rows of pixels in the image and lining them up. There are no learned parameters in this layer; it only reformats the data.After the pixels are flattened, the network consists of a sequence of two `tf.keras.layers.Dense` layers. These are fully-connected neural layers. The first `Dense` layer has 128 nodes (or neurons). The second (and last) layer (which you've defined!) should return an array of probability scores that sum to 1. Each node contains a score that indicates the probability that the current image belongs to one of the handwritten digit classes.That defines our fully connected model! Compile the modelBefore training the model, we need to define a few more settings. These are added during the model's [`compile`](https://www.tensorflow.org/api_docs/python/tf/keras/models/Sequentialcompile) step:* *Loss function* — This defines how we measure how accurate the model is during training. As was covered in lecture, during training we want to minimize this function, which will "steer" the model in the right direction.* *Optimizer* — This defines how the model is updated based on the data it sees and its loss function.* *Metrics* — Here we can define metrics used to monitor the training and testing steps. In this example, we'll look at the *accuracy*, the fraction of the images that are correctly classified.We'll start out by using a stochastic gradient descent (SGD) optimizer initialized with a learning rate of 0.1. Since we are performing a categorical classification task, we'll want to use the [cross entropy loss](https://www.tensorflow.org/api_docs/python/tf/keras/metrics/sparse_categorical_crossentropy).You'll want to experiment with both the choice of optimizer and learning rate and evaluate how these affect the accuracy of the trained model.
###Code
'''TODO: Experiment with different optimizers and learning rates. How do these affect
the accuracy of the trained model? Which optimizers and/or learning rates yield
the best performance?'''
model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=1e-1),
loss='sparse_categorical_crossentropy',
metrics=['accuracy'])
###Output
_____no_output_____
###Markdown
Train the modelWe're now ready to train our model, which will involve feeding the training data (`train_images` and `train_labels`) into the model, and then asking it to learn the associations between images and labels. We'll also need to define the batch size and the number of epochs, or iterations over the MNIST dataset, to use during training. In Lab 1, we saw how we can use `GradientTape` to optimize losses and train models with stochastic gradient descent. After defining the model settings in the `compile` step, we can also accomplish training by calling the [`fit`](https://www.tensorflow.org/api_docs/python/tf/keras/models/Sequential#fit) method on an instance of the `Model` class. We will use this to train our fully connected model.
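One quick aside before training (an illustrative check, not a required step): with 60,000 training images and a batch size of 64, each epoch performs ceil(60000 / 64) = 938 gradient updates, which is the step count Keras will display in its progress bar.
###Code
# Illustrative: the number of batches (i.e., gradient updates) per epoch.
import math
print(math.ceil(60000 / 64)) # 938, matching the "938/938" shown by Keras
###Output
_____no_output_____
###Markdown
Now let's train the model: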
###Code
# Define the batch size and the number of epochs to use during training
BATCH_SIZE = 64
EPOCHS = 5
model.fit(train_images, train_labels, batch_size=BATCH_SIZE, epochs=EPOCHS)
###Output
_____no_output_____
###Markdown
As the model trains, the loss and accuracy metrics are displayed. With five epochs and a learning rate of 0.1, this fully connected model should achieve an accuracy of approximately 0.97 (or 97%) on the training data. Evaluate accuracy on the test datasetNow that we've trained the model, we can ask it to make predictions about a test set that it hasn't seen before. In this example, the `test_images` array comprises our test dataset. To evaluate accuracy, we can check to see if the model's predictions match the labels from the `test_labels` array. Use the [`evaluate`](https://www.tensorflow.org/api_docs/python/tf/keras/models/Sequential#evaluate) method to evaluate the model on the test dataset!
###Code
'''TODO: Use the evaluate method to test the model!'''
test_loss, test_acc = model.evaluate(test_images, test_labels) # TODO
print('Test accuracy:', test_acc)
###Output
_____no_output_____
###Markdown
You may observe that the accuracy on the test dataset is a little lower than the accuracy on the training dataset. This gap between training accuracy and test accuracy is an example of *overfitting*, when a machine learning model performs worse on new data than on its training data. What is the highest accuracy you can achieve with this first fully connected model? Since the handwritten digit classification task is pretty straightforward, you may be wondering how we can do better...![Deeper...](https://i.kym-cdn.com/photos/images/newsfeed/000/534/153/f87.jpg) 1.3 Convolutional Neural Network (CNN) for handwritten digit classification As we saw in lecture, convolutional neural networks (CNNs) are particularly well-suited for a variety of tasks in computer vision, and have achieved near-perfect accuracies on the MNIST dataset. We will now build a CNN composed of two convolutional layers and pooling layers, followed by two fully connected layers, and ultimately output a probability distribution over the 10 digit classes (0-9). The CNN we will be building is depicted below:![alt_text](https://raw.githubusercontent.com/aamini/introtodeeplearning_labs/master/lab2/img/convnet_fig.png "CNN Architecture for MNIST Classification") Define the CNN modelWe'll use the same training and test datasets as before, and proceed similarly to our fully connected network to define and train our new CNN model. To do this we will explore two layers we have not encountered before: you can use [`keras.layers.Conv2D`](https://www.tensorflow.org/api_docs/python/tf/keras/layers/Conv2D) to define convolutional layers and [`keras.layers.MaxPool2D`](https://www.tensorflow.org/api_docs/python/tf/keras/layers/MaxPool2D) to define the pooling layers. Use the parameters shown in the network architecture above to define these layers and build the CNN model.
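Before filling in the TODOs, it can help to trace how these layers transform tensor shapes. The sketch below is illustrative and assumes the configuration used in this lab (24 and 36 filters, 3x3 kernels, 2x2 pools): a 3x3 convolution without padding shrinks each spatial dimension by 2, and a 2x2 max pool halves it (rounding down).
###Code
# Illustrative shape trace (assumes the 24- and 36-filter, 3x3-kernel,
# 2x2-pool configuration used in this lab). Each print shows the tensor
# shape after the corresponding layer.
x = tf.random.uniform((1, 28, 28, 1))
x = tf.keras.layers.Conv2D(24, 3, activation=tf.nn.relu)(x)
print(x.shape) # (1, 26, 26, 24)
x = tf.keras.layers.MaxPool2D((2, 2))(x)
print(x.shape) # (1, 13, 13, 24)
x = tf.keras.layers.Conv2D(36, 3, activation=tf.nn.relu)(x)
print(x.shape) # (1, 11, 11, 36)
x = tf.keras.layers.MaxPool2D((2, 2))(x)
print(x.shape) # (1, 5, 5, 36) -> 900 features once flattened
###Output
_____no_output_____
###Markdown
With those shapes in mind, define the CNN model: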
###Code
def build_cnn_model():
cnn_model = tf.keras.Sequential([
# TODO: Define the first convolutional layer
tf.keras.layers.Conv2D(filters=24, kernel_size=(3,3), activation=tf.nn.relu),
# TODO: Define the first max pooling layer
tf.keras.layers.MaxPool2D(pool_size=(2,2)),
# TODO: Define the second convolutional layer
tf.keras.layers.Conv2D(filters=36, kernel_size=(3,3), activation=tf.nn.relu),
# TODO: Define the second max pooling layer
tf.keras.layers.MaxPool2D(pool_size=(2,2)),
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(128, activation=tf.nn.relu),
# TODO: Define the last Dense layer to output the classification
# probabilities. Pay attention to the activation needed for a probability
# output.
tf.keras.layers.Dense(10, activation=tf.nn.softmax)
])
return cnn_model
cnn_model = build_cnn_model()
# Initialize the model by passing some data through
cnn_model.predict(train_images[[0]])
# Print the summary of the layers in the model.
print(cnn_model.summary())
###Output
_____no_output_____
###Markdown
Train and test the CNN modelNow, as before, we can define the loss function, optimizer, and metrics through the `compile` method. Compile the CNN model with an optimizer and learning rate of choice:
###Code
'''TODO: Define the compile operation with your optimizer and learning rate of choice'''
cnn_model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3), loss='sparse_categorical_crossentropy', metrics=['accuracy']) # TODO
###Output
_____no_output_____
###Markdown
As was the case with the fully connected model, we can train our CNN using the `fit` method via the Keras API.
###Code
'''TODO: Use model.fit to train the CNN model, with the same batch_size and number of epochs previously used.'''
cnn_model.fit(train_images, train_labels, batch_size=BATCH_SIZE, epochs=EPOCHS)
###Output
_____no_output_____
###Markdown
Great! Now that we've trained the model, let's evaluate it on the test dataset using the [`evaluate`](https://www.tensorflow.org/api_docs/python/tf/keras/models/Sequential#evaluate) method:
###Code
'''TODO: Use the evaluate method to test the model!'''
test_loss, test_acc = cnn_model.evaluate(test_images, test_labels)
print('Test accuracy:', test_acc)
###Output
_____no_output_____
###Markdown
What is the highest accuracy you're able to achieve using the CNN model, and how does the accuracy of the CNN model compare to the accuracy of the simple fully connected network? What optimizers and learning rates seem to be optimal for training the CNN model? Make predictions with the CNN modelWith the model trained, we can use it to make predictions about some images. The [`predict`](https://www.tensorflow.org/api_docs/python/tf/keras/models/Sequential#predict) function call generates the output predictions given a set of input samples.
###Code
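# Run the trained CNN over the entire test set: the result contains one
# row of 10 class probabilities per test image.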
predictions = cnn_model.predict(test_images)
###Output
_____no_output_____
###Markdown
With this function call, the model has predicted the label for each image in the testing set. Let's take a look at the prediction for the first image in the test dataset:
###Code
predictions[0]
###Output
_____no_output_____
###Markdown
As you can see, a prediction is an array of 10 numbers. Recall that the output of our model is a probability distribution over the 10 digit classes. Thus, these numbers describe the model's "confidence" that the image corresponds to each of the 10 different digits. Let's look at the digit that has the highest confidence for the first image in the test dataset:
###Code
'''TODO: identify the digit with the highest confidence prediction for the first
image in the test dataset. '''
prediction = np.argmax(predictions[0])
print(prediction)
###Output
_____no_output_____
###Markdown
So, the model is most confident that this image is a "???". We can check the test label (remember, this is the true identity of the digit) to see if this prediction is correct:
###Code
print("Label of this digit is:", test_labels[0])
plt.imshow(test_images[0,:,:,0], cmap=plt.cm.binary)
###Output
_____no_output_____
###Markdown
It is! Let's visualize the classification results on the MNIST dataset. We will plot images from the test dataset along with their predicted label, as well as a histogram that provides the prediction probabilities for each of the digits:
###Code
#@title Change the slider to look at the model's predictions! { run: "auto" }
image_index = 79 #@param {type:"slider", min:0, max:100, step:1}
plt.subplot(1,2,1)
mdl.lab2.plot_image_prediction(image_index, predictions, test_labels, test_images)
plt.subplot(1,2,2)
mdl.lab2.plot_value_prediction(image_index, predictions, test_labels)
###Output
_____no_output_____
###Markdown
We can also plot several images along with their predictions, where correct prediction labels are blue and incorrect prediction labels are red. The number gives the percent confidence (out of 100) for the predicted label. Note the model can be very confident in an incorrect prediction!
###Code
# Plots the first X test images, their predicted label, and the true label
# Color correct predictions in blue, incorrect predictions in red
num_rows = 5
num_cols = 4
num_images = num_rows*num_cols
plt.figure(figsize=(2*2*num_cols, 2*num_rows))
for i in range(num_images):
plt.subplot(num_rows, 2*num_cols, 2*i+1)
mdl.lab2.plot_image_prediction(i, predictions, test_labels, test_images)
plt.subplot(num_rows, 2*num_cols, 2*i+2)
mdl.lab2.plot_value_prediction(i, predictions, test_labels)
###Output
_____no_output_____
###Markdown
1.4 Training the model 2.0Earlier in the lab, we used the [`fit`](https://www.tensorflow.org/api_docs/python/tf/keras/models/Sequential#fit) function call to train the model. This function is quite high-level and intuitive, which is really useful for simpler models. As you may be able to tell, this function abstracts away many details of the training call, and gives us less control over the training process; finer control could be useful in other contexts. As an alternative to this, we can use the [`tf.GradientTape`](https://www.tensorflow.org/api_docs/python/tf/GradientTape) class to record differentiation operations during training, and then call the [`tf.GradientTape.gradient`](https://www.tensorflow.org/api_docs/python/tf/GradientTape#gradient) function to actually compute the gradients. You may recall seeing this in Lab 1 Part 1, but let's take another look at this here.We'll use this framework to train our `cnn_model` using stochastic gradient descent.
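As a quick refresher (an illustrative sketch, separate from the training loop below), `tf.GradientTape` records the operations applied inside its context so gradients can be computed afterwards. For example, the derivative of x^2 is 2x, so the gradient at x = 3 is 6:
###Code
# Minimal GradientTape refresher (illustrative): d(x^2)/dx = 2x,
# so the gradient at x = 3.0 is 6.0.
x = tf.Variable(3.0)
with tf.GradientTape() as tape:
    y = x * x
print(tape.gradient(y, x)) # tf.Tensor(6.0, shape=(), dtype=float32)
###Output
_____no_output_____
###Markdown
We now apply the same pattern to train the CNN: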
###Code
# Rebuild the CNN model
cnn_model = build_cnn_model()
batch_size = 12
loss_history = mdl.util.LossHistory(smoothing_factor=0.95) # to record the evolution of the loss
plotter = mdl.util.PeriodicPlotter(sec=2, xlabel='Iterations', ylabel='Loss', scale='semilogy')
optimizer = tf.keras.optimizers.SGD(learning_rate=1e-2) # define our optimizer
if hasattr(tqdm, '_instances'): tqdm._instances.clear() # clear if it exists
for idx in tqdm(range(0, train_images.shape[0], batch_size)):
# First grab a batch of training data and convert the input images to tensors
(images, labels) = (train_images[idx:idx+batch_size], train_labels[idx:idx+batch_size])
images = tf.convert_to_tensor(images, dtype=tf.float32)
# GradientTape to record differentiation operations
with tf.GradientTape() as tape:
#'''TODO: feed the images into the model and obtain the predictions'''
logits = cnn_model(images)
#'''TODO: compute the categorical cross entropy loss
loss_value = tf.keras.backend.sparse_categorical_crossentropy(labels, logits) # TODO
loss_history.append(loss_value.numpy().mean()) # append the loss to the loss_history record
plotter.plot(loss_history.get())
# Backpropagation
'''TODO: Use the tape to compute the gradient against all parameters in the CNN model.
Use cnn_model.trainable_variables to access these parameters.'''
grads = tape.gradient(loss_value, cnn_model.trainable_variables)
optimizer.apply_gradients(zip(grads, cnn_model.trainable_variables))
###Output
_____no_output_____
###Markdown
Laboratory 2: Computer Vision Part 1: MNIST Digit ClassificationIn the first portion of this lab, we will build and train a convolutional neural network (CNN) for classification of handwritten digits from the famous [MNIST](http://yann.lecun.com/exdb/mnist/) dataset. The MNIST dataset consists of 60,000 training images and 10,000 test images. Our classes are the digits 0-9.First, let's download the course repository, install dependencies, and import the relevant packages we'll need for this lab.
###Code
# Import Tensorflow 2.0
%tensorflow_version 2.x
import tensorflow as tf
!pip install mitdeeplearning
import mitdeeplearning as mdl
import matplotlib.pyplot as plt
import numpy as np
import random
from tqdm import tqdm
# Check that we are using a GPU, if not switch runtimes
# using Runtime > Change Runtime Type > GPU
assert len(tf.config.list_physical_devices('GPU')) > 0
###Output
Collecting mitdeeplearning
  Downloading https://files.pythonhosted.org/packages/9d/ad/650eb53c0d9d1213536fe94bc150f89b564ff5ee784bd662272584bb091b/mitdeeplearning-0.2.0.tar.gz (2.1MB)
     |████████████████████████████████| 2.1MB 20.3MB/s
Requirement already satisfied: numpy in /usr/local/lib/python3.7/dist-packages (from mitdeeplearning) (1.19.5)
Requirement already satisfied: regex in /usr/local/lib/python3.7/dist-packages (from mitdeeplearning) (2019.12.20)
Requirement already satisfied: tqdm in /usr/local/lib/python3.7/dist-packages (from mitdeeplearning) (4.41.1)
Requirement already satisfied: gym in /usr/local/lib/python3.7/dist-packages (from mitdeeplearning) (0.17.3)
Requirement already satisfied: scipy in /usr/local/lib/python3.7/dist-packages (from gym->mitdeeplearning) (1.4.1)
Requirement already satisfied: cloudpickle<1.7.0,>=1.2.0 in /usr/local/lib/python3.7/dist-packages (from gym->mitdeeplearning) (1.3.0)
Requirement already satisfied: pyglet<=1.5.0,>=1.4.0 in /usr/local/lib/python3.7/dist-packages (from gym->mitdeeplearning) (1.5.0)
Requirement already satisfied: future in /usr/local/lib/python3.7/dist-packages (from pyglet<=1.5.0,>=1.4.0->gym->mitdeeplearning) (0.16.0)
Building wheels for collected packages: mitdeeplearning
Building wheel for mitdeeplearning (setup.py) ... done
Created wheel for mitdeeplearning: filename=mitdeeplearning-0.2.0-cp37-none-any.whl size=2115442 sha256=5dd9774f524a941806e3bbb116dc1b4d9311a34cccc328e58a4c883aeb641ba1
Stored in directory: /root/.cache/pip/wheels/af/dc/2a/5c3633135e7e4ef4fd31463cfa1942cb1bae7486ab94e7a2ad
Successfully built mitdeeplearning
Installing collected packages: mitdeeplearning
Successfully installed mitdeeplearning-0.2.0
###Markdown
1.1 MNIST dataset Let's download and load the dataset and display a few random samples from it:
###Code
mnist = tf.keras.datasets.mnist
(train_images, train_labels), (test_images, test_labels) = mnist.load_data()
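# Scale pixel values from [0, 255] to [0, 1] and add a trailing channel
# dimension, giving images of shape (28, 28, 1).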
train_images = (np.expand_dims(train_images, axis=-1)/255.).astype(np.float32)
train_labels = (train_labels).astype(np.int64)
test_images = (np.expand_dims(test_images, axis=-1)/255.).astype(np.float32)
test_labels = (test_labels).astype(np.int64)
###Output
Downloading data from https://storage.googleapis.com/tensorflow/tf-keras-datasets/mnist.npz
11493376/11490434 [==============================] - 0s 0us/step
###Markdown
Our training set is made up of 28x28 grayscale images of handwritten digits. Let's visualize what some of these images and their corresponding training labels look like.
###Code
plt.figure(figsize=(10,10))
random_inds = np.random.choice(60000,36)
for i in range(36):
plt.subplot(6,6,i+1)
plt.xticks([])
plt.yticks([])
plt.grid(False)
image_ind = random_inds[i]
plt.imshow(np.squeeze(train_images[image_ind]), cmap=plt.cm.binary)
plt.xlabel(train_labels[image_ind])
###Output
_____no_output_____
###Markdown
1.2 Neural Network for Handwritten Digit ClassificationWe'll first build a simple neural network consisting of two fully connected layers and apply this to the digit classification task. Our network will ultimately output a probability distribution over the 10 digit classes (0-9). This first architecture we will be building is depicted below:![alt_text](https://github.com/Jagadambass/Intro-to-TensorFlow-Music-Generation/blob/main/lab2/img/mnist_2layers_arch.png) Fully connected neural network architectureTo define the architecture of this first fully connected neural network, we'll once again use the Keras API and define the model using the [`Sequential`](https://www.tensorflow.org/api_docs/python/tf/keras/models/Sequential) class. Note how we first use a [`Flatten`](https://www.tensorflow.org/api_docs/python/tf/keras/layers/Flatten) layer, which flattens the input so that it can be fed into the model. In this next block, you'll define the fully connected layers of this simple network.
###Code
def build_fc_model():
fc_model = tf.keras.Sequential([
# First define a Flatten layer
tf.keras.layers.Flatten(),
# '''TODO: Define the activation function for the first fully connected (Dense) layer.'''
tf.keras.layers.Dense(128, activation=tf.nn.relu),
# tf.keras.layers.Dense(128, activation= '''TODO'''),
# '''TODO: Define the second Dense layer to output the classification probabilities'''
tf.keras.layers.Dense(10, activation=tf.nn.softmax)
# [TODO Dense layer to output classification probabilities]
])
return fc_model
model = build_fc_model()
###Output
_____no_output_____
###Markdown
As we progress through this next portion, you may find that you'll want to make changes to the architecture defined above. **Note that in order to update the model later on, you'll need to re-run the above cell to re-initialize the model.** Let's take a step back and think about the network we've just created. The first layer in this network, `tf.keras.layers.Flatten`, transforms the format of the images from a 2d-array (28 x 28 pixels), to a 1d-array of 28 * 28 = 784 pixels. You can think of this layer as unstacking rows of pixels in the image and lining them up. There are no learned parameters in this layer; it only reformats the data.After the pixels are flattened, the network consists of a sequence of two `tf.keras.layers.Dense` layers. These are fully-connected neural layers. The first `Dense` layer has 128 nodes (or neurons). The second (and last) layer (which you've defined!) should return an array of probability scores that sum to 1. Each node contains a score that indicates the probability that the current image belongs to one of the handwritten digit classes.That defines our fully connected model! Compile the modelBefore training the model, we need to define a few more settings. These are added during the model's [`compile`](https://www.tensorflow.org/api_docs/python/tf/keras/models/Sequential#compile) step:* *Loss function* — This defines how we measure how accurate the model is during training. As was covered in lecture, during training we want to minimize this function, which will "steer" the model in the right direction.* *Optimizer* — This defines how the model is updated based on the data it sees and its loss function.* *Metrics* — Here we can define metrics used to monitor the training and testing steps. In this example, we'll look at the *accuracy*, the fraction of the images that are correctly classified.We'll start out by using a stochastic gradient descent (SGD) optimizer initialized with a learning rate of 0.1. Since we are performing a categorical classification task, we'll want to use the [cross entropy loss](https://www.tensorflow.org/api_docs/python/tf/keras/metrics/sparse_categorical_crossentropy).You'll want to experiment with both the choice of optimizer and learning rate and evaluate how these affect the accuracy of the trained model.
###Code
model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=1e-1),
loss='sparse_categorical_crossentropy',
metrics=['accuracy'])
###Output
_____no_output_____
###Markdown
Train the modelWe're now ready to train our model, which will involve feeding the training data (`train_images` and `train_labels`) into the model, and then asking it to learn the associations between images and labels. We'll also need to define the batch size and the number of epochs, or iterations over the MNIST dataset, to use during training. In Lab 1, we saw how we can use `GradientTape` to optimize losses and train models with stochastic gradient descent. After defining the model settings in the `compile` step, we can also accomplish training by calling the [`fit`](https://www.tensorflow.org/api_docs/python/tf/keras/models/Sequential#fit) method on an instance of the `Model` class. We will use this to train our fully connected model.
###Code
# Define the batch size and the number of epochs to use during training
BATCH_SIZE = 64
EPOCHS = 5
model.fit(train_images, train_labels, batch_size=BATCH_SIZE, epochs=EPOCHS)
###Output
Epoch 1/5
938/938 [==============================] - 5s 2ms/step - loss: 0.5906 - accuracy: 0.8387
Epoch 2/5
938/938 [==============================] - 2s 2ms/step - loss: 0.2113 - accuracy: 0.9393
Epoch 3/5
938/938 [==============================] - 2s 2ms/step - loss: 0.1549 - accuracy: 0.9561
Epoch 4/5
938/938 [==============================] - 2s 2ms/step - loss: 0.1222 - accuracy: 0.9655
Epoch 5/5
938/938 [==============================] - 2s 2ms/step - loss: 0.1064 - accuracy: 0.9706
###Markdown
As the model trains, the loss and accuracy metrics are displayed. With five epochs and a learning rate of 0.1, this fully connected model should achieve an accuracy of approximately 0.97 (or 97%) on the training data. Evaluate accuracy on the test datasetNow that we've trained the model, we can ask it to make predictions about a test set that it hasn't seen before. In this example, the `test_images` array comprises our test dataset. To evaluate accuracy, we can check to see if the model's predictions match the labels from the `test_labels` array. Use the [`evaluate`](https://www.tensorflow.org/api_docs/python/tf/keras/models/Sequential#evaluate) method to evaluate the model on the test dataset!
###Code
'''TODO: Use the evaluate method to test the model!'''
test_loss, test_acc = model.evaluate(test_images, test_labels) # TODO
# test_loss, test_acc = # TODO
print('Test accuracy:', test_acc)
###Output
313/313 [==============================] - 1s 2ms/step - loss: 0.1028 - accuracy: 0.9694
Test accuracy: 0.9693999886512756
###Markdown
You may observe that the accuracy on the test dataset is a little lower than the accuracy on the training dataset. This gap between training accuracy and test accuracy is an example of *overfitting*, when a machine learning model performs worse on new data than on its training data. What is the highest accuracy you can achieve with this first fully connected model? Since the handwritten digit classification task is pretty straightforward, you may be wondering how we can do better...![Deeper...](https://i.kym-cdn.com/photos/images/newsfeed/000/534/153/f87.jpg) 1.3 Convolutional Neural Network (CNN) for handwritten digit classification As we saw in lecture, convolutional neural networks (CNNs) are particularly well-suited for a variety of tasks in computer vision, and have achieved near-perfect accuracies on the MNIST dataset. We will now build a CNN composed of two convolutional layers and pooling layers, followed by two fully connected layers, and ultimately output a probability distribution over the 10 digit classes (0-9). The CNN we will be building is depicted below:![alt_text](https://github.com/Jagadambass/Intro-to-TensorFlow-Music-Generation/blob/main/lab2/img/mnist_model.png) Define the CNN modelWe'll use the same training and test datasets as before, and proceed similarly to our fully connected network to define and train our new CNN model. To do this we will explore two layers we have not encountered before: you can use [`keras.layers.Conv2D`](https://www.tensorflow.org/api_docs/python/tf/keras/layers/Conv2D) to define convolutional layers and [`keras.layers.MaxPool2D`](https://www.tensorflow.org/api_docs/python/tf/keras/layers/MaxPool2D) to define the pooling layers. Use the parameters shown in the network architecture above to define these layers and build the CNN model.
###Code
def build_cnn_model():
cnn_model = tf.keras.Sequential([
# TODO: Define the first convolutional layer
tf.keras.layers.Conv2D(filters=24, kernel_size=(3,3), activation=tf.nn.relu),
# tf.keras.layers.Conv2D('''TODO''')
# TODO: Define the first max pooling layer
tf.keras.layers.MaxPool2D(pool_size=(2,2)),
# tf.keras.layers.MaxPool2D('''TODO''')
# TODO: Define the second convolutional layer
tf.keras.layers.Conv2D(filters=36, kernel_size=(3,3), activation=tf.nn.relu),
# tf.keras.layers.Conv2D('''TODO''')
# TODO: Define the second max pooling layer
tf.keras.layers.MaxPool2D(pool_size=(2,2)),
# tf.keras.layers.MaxPool2D('''TODO''')
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(128, activation=tf.nn.relu),
# TODO: Define the last Dense layer to output the classification
# probabilities. Pay attention to the activation needed for a probability
# output
tf.keras.layers.Dense(10, activation=tf.nn.softmax)
# [TODO Dense layer to output classification probabilities]
])
return cnn_model
cnn_model = build_cnn_model()
# Initialize the model by passing some data through
cnn_model.predict(train_images[[0]])
# Print the summary of the layers in the model.
print(cnn_model.summary())
###Output
Model: "sequential_1"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
conv2d (Conv2D) (None, 26, 26, 24) 240
_________________________________________________________________
max_pooling2d (MaxPooling2D) (None, 13, 13, 24) 0
_________________________________________________________________
conv2d_1 (Conv2D) (None, 11, 11, 36) 7812
_________________________________________________________________
max_pooling2d_1 (MaxPooling2 (None, 5, 5, 36) 0
_________________________________________________________________
flatten_1 (Flatten) (None, 900) 0
_________________________________________________________________
dense_2 (Dense) (None, 128) 115328
_________________________________________________________________
dense_3 (Dense) (None, 10) 1290
=================================================================
Total params: 124,670
Trainable params: 124,670
Non-trainable params: 0
_________________________________________________________________
None
###Markdown
Train and test the CNN modelNow, as before, we can define the loss function, optimizer, and metrics through the `compile` method. Compile the CNN model with an optimizer and learning rate of choice:
###Code
'''TODO: Define the compile operation with your optimizer and learning rate of choice'''
cnn_model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3),
loss='sparse_categorical_crossentropy',
metrics=['accuracy'])
# cnn_model.compile(optimizer='''TODO''', loss='''TODO''', metrics=['accuracy']) # TODO
###Output
_____no_output_____
###Markdown
As was the case with the fully connected model, we can train our CNN using the `fit` method via the Keras API.
###Code
'''TODO: Use model.fit to train the CNN model, with the same batch_size and number of epochs previously used.'''
cnn_model.fit(train_images, train_labels, batch_size=BATCH_SIZE, epochs=EPOCHS)
# cnn_model.fit('''TODO''')
###Output
Epoch 1/5
938/938 [==============================] - 3s 3ms/step - loss: 0.4458 - accuracy: 0.8681
Epoch 2/5
938/938 [==============================] - 3s 3ms/step - loss: 0.0615 - accuracy: 0.9813
Epoch 3/5
938/938 [==============================] - 3s 3ms/step - loss: 0.0378 - accuracy: 0.9887
Epoch 4/5
938/938 [==============================] - 3s 3ms/step - loss: 0.0281 - accuracy: 0.9917
Epoch 5/5
938/938 [==============================] - 3s 3ms/step - loss: 0.0223 - accuracy: 0.9930
###Markdown
Great! Now that we've trained the model, let's evaluate it on the test dataset using the [`evaluate`](https://www.tensorflow.org/api_docs/python/tf/keras/models/Sequential#evaluate) method:
###Code
'''TODO: Use the evaluate method to test the model!'''
test_loss, test_acc = cnn_model.evaluate(test_images, test_labels)
# test_loss, test_acc = # TODO
print('Test accuracy:', test_acc)
###Output
313/313 [==============================] - 1s 2ms/step - loss: 0.0346 - accuracy: 0.9890
Test accuracy: 0.9890000224113464
###Markdown
What is the highest accuracy you're able to achieve using the CNN model, and how does the accuracy of the CNN model compare to the accuracy of the simple fully connected network? What optimizers and learning rates seem to be optimal for training the CNN model? Make predictions with the CNN modelWith the model trained, we can use it to make predictions about some images. The [`predict`](https://www.tensorflow.org/api_docs/python/tf/keras/models/Sequential#predict) function call generates the output predictions given a set of input samples.
###Code
predictions = cnn_model.predict(test_images)
###Output
_____no_output_____
###Markdown
With this function call, the model has predicted the label for each image in the testing set. Let's take a look at the prediction for the first image in the test dataset:
###Code
predictions[0]
###Output
_____no_output_____
###Markdown
As you can see, a prediction is an array of 10 numbers. Recall that the output of our model is a probability distribution over the 10 digit classes. Thus, these numbers describe the model's "confidence" that the image corresponds to each of the 10 different digits. Let's look at the digit that has the highest confidence for the first image in the test dataset:
###Code
'''TODO: identify the digit with the highest confidence prediction for the first
image in the test dataset. '''
prediction = np.argmax(predictions[0])
# prediction = # TODO
print(prediction)
###Output
7
###Markdown
So, the model is most confident that this image is a "7". We can check the test label (remember, this is the true identity of the digit) to see if this prediction is correct:
###Code
print("Label of this digit is:", test_labels[0])
plt.imshow(test_images[0,:,:,0], cmap=plt.cm.binary)
###Output
Label of this digit is: 7
###Markdown
It is! Let's visualize the classification results on the MNIST dataset. We will plot images from the test dataset along with their predicted label, as well as a histogram that provides the prediction probabilities for each of the digits:
###Code
#@title Change the slider to look at the model's predictions! { run: "auto" }
image_index = 79 #@param {type:"slider", min:0, max:100, step:1}
plt.subplot(1,2,1)
mdl.lab2.plot_image_prediction(image_index, predictions, test_labels, test_images)
plt.subplot(1,2,2)
mdl.lab2.plot_value_prediction(image_index, predictions, test_labels)
###Output
_____no_output_____
###Markdown
We can also plot several images along with their predictions, where correct prediction labels are blue and incorrect prediction labels are red. The number gives the percent confidence (out of 100) for the predicted label. Note the model can be very confident in an incorrect prediction!
###Code
# Plots the first X test images, their predicted label, and the true label
# Color correct predictions in blue, incorrect predictions in red
num_rows = 5
num_cols = 4
num_images = num_rows*num_cols
plt.figure(figsize=(2*2*num_cols, 2*num_rows))
for i in range(num_images):
plt.subplot(num_rows, 2*num_cols, 2*i+1)
mdl.lab2.plot_image_prediction(i, predictions, test_labels, test_images)
plt.subplot(num_rows, 2*num_cols, 2*i+2)
mdl.lab2.plot_value_prediction(i, predictions, test_labels)
###Output
_____no_output_____
###Markdown
1.4 Training the model 2.0Earlier in the lab, we used the [`fit`](https://www.tensorflow.org/api_docs/python/tf/keras/models/Sequential#fit) function call to train the model. This function is quite high-level and intuitive, which is really useful for simpler models. As you may be able to tell, this function abstracts away many details of the training call, and gives us less control over the training process; finer control could be useful in other contexts. As an alternative to this, we can use the [`tf.GradientTape`](https://www.tensorflow.org/api_docs/python/tf/GradientTape) class to record differentiation operations during training, and then call the [`tf.GradientTape.gradient`](https://www.tensorflow.org/api_docs/python/tf/GradientTape#gradient) function to actually compute the gradients. You may recall seeing this in Lab 1 Part 1, but let's take another look at this here.We'll use this framework to train our `cnn_model` using stochastic gradient descent.
###Code
# Rebuild the CNN model
cnn_model = build_cnn_model()
batch_size = 12
loss_history = mdl.util.LossHistory(smoothing_factor=0.95) # to record the evolution of the loss
plotter = mdl.util.PeriodicPlotter(sec=2, xlabel='Iterations', ylabel='Loss', scale='semilogy')
optimizer = tf.keras.optimizers.SGD(learning_rate=1e-2) # define our optimizer
if hasattr(tqdm, '_instances'): tqdm._instances.clear() # clear if it exists
for idx in tqdm(range(0, train_images.shape[0], batch_size)):
# First grab a batch of training data and convert the input images to tensors
(images, labels) = (train_images[idx:idx+batch_size], train_labels[idx:idx+batch_size])
images = tf.convert_to_tensor(images, dtype=tf.float32)
# GradientTape to record differentiation operations
with tf.GradientTape() as tape:
#'''TODO: feed the images into the model and obtain the predictions'''
logits = cnn_model(images)
# logits = # TODO
#'''TODO: compute the categorical cross entropy loss
loss_value = tf.keras.backend.sparse_categorical_crossentropy(labels, logits)
# loss_value = tf.keras.backend.sparse_categorical_crossentropy() # TODO
loss_history.append(loss_value.numpy().mean()) # append the loss to the loss_history record
plotter.plot(loss_history.get())
# Backpropagation
'''TODO: Use the tape to compute the gradient against all parameters in the CNN model.
Use cnn_model.trainable_variables to access these parameters.'''
grads = tape.gradient(loss_value, cnn_model.trainable_variables)
# grads = # TODO
optimizer.apply_gradients(zip(grads, cnn_model.trainable_variables))
###Output
_____no_output_____
###Markdown
Copyright Information
###Code
# Copyright 2020 MIT 6.S191 Introduction to Deep Learning. All Rights Reserved.
#
# Licensed under the MIT License. You may not use this file except in compliance
# with the License. Use and/or modification of this code outside of 6.S191 must
# reference:
#
# © MIT 6.S191: Introduction to Deep Learning
# http://introtodeeplearning.com
#
###Output
_____no_output_____
###Markdown
Laboratory 2: Computer Vision Part 1: MNIST Digit ClassificationIn the first portion of this lab, we will build and train a convolutional neural network (CNN) for classification of handwritten digits from the famous [MNIST](http://yann.lecun.com/exdb/mnist/) dataset. The MNIST dataset consists of 60,000 training images and 10,000 test images. Our classes are the digits 0-9.First, let's download the course repository, install dependencies, and import the relevant packages we'll need for this lab.
###Code
# Import Tensorflow 2.0
%tensorflow_version 2.x
import tensorflow as tf
!pip install mitdeeplearning
import mitdeeplearning as mdl
import matplotlib.pyplot as plt
import numpy as np
import random
from tqdm import tqdm
# Check that we are using a GPU, if not switch runtimes
# using Runtime > Change Runtime Type > GPU
assert len(tf.config.list_physical_devices('GPU')) > 0
###Output
Collecting mitdeeplearning
  Downloading https://files.pythonhosted.org/packages/8b/3b/b9174b68dc10832356d02a2d83a64b43a24f1762c172754407d22fc8f960/mitdeeplearning-0.1.2.tar.gz (2.1MB)
     |████████████████████████████████| 2.1MB 4.9MB/s
Requirement already satisfied: numpy in /usr/local/lib/python3.6/dist-packages (from mitdeeplearning) (1.18.3)
Requirement already satisfied: regex in /usr/local/lib/python3.6/dist-packages (from mitdeeplearning) (2019.12.20)
Requirement already satisfied: tqdm in /usr/local/lib/python3.6/dist-packages (from mitdeeplearning) (4.38.0)
Requirement already satisfied: gym in /usr/local/lib/python3.6/dist-packages (from mitdeeplearning) (0.17.1)
Requirement already satisfied: scipy in /usr/local/lib/python3.6/dist-packages (from gym->mitdeeplearning) (1.4.1)
Requirement already satisfied: cloudpickle<1.4.0,>=1.2.0 in /usr/local/lib/python3.6/dist-packages (from gym->mitdeeplearning) (1.3.0)
Requirement already satisfied: pyglet<=1.5.0,>=1.4.0 in /usr/local/lib/python3.6/dist-packages (from gym->mitdeeplearning) (1.5.0)
Requirement already satisfied: six in /usr/local/lib/python3.6/dist-packages (from gym->mitdeeplearning) (1.12.0)
Requirement already satisfied: future in /usr/local/lib/python3.6/dist-packages (from pyglet<=1.5.0,>=1.4.0->gym->mitdeeplearning) (0.16.0)
Building wheels for collected packages: mitdeeplearning
Building wheel for mitdeeplearning (setup.py) ... done
Created wheel for mitdeeplearning: filename=mitdeeplearning-0.1.2-cp36-none-any.whl size=2114586 sha256=2fb8d25e5c19d8aceb1295b7f219dabc22bfb52175abb5f64d3cadb70c70c1f7
Stored in directory: /root/.cache/pip/wheels/27/e1/73/5f01c787621d8a3c857f59876c79e304b9b64db9ff5bd61b74
Successfully built mitdeeplearning
Installing collected packages: mitdeeplearning
Successfully installed mitdeeplearning-0.1.2
###Markdown
1.1 MNIST dataset Let's download and load the dataset and display a few random samples from it:
###Code
mnist = tf.keras.datasets.mnist
(train_images, train_labels), (test_images, test_labels) = mnist.load_data()
train_images = (np.expand_dims(train_images, axis=-1)/255.).astype(np.float32)
train_labels = (train_labels).astype(np.int64)
test_images = (np.expand_dims(test_images, axis=-1)/255.).astype(np.float32)
test_labels = (test_labels).astype(np.int64)
###Output
Downloading data from https://storage.googleapis.com/tensorflow/tf-keras-datasets/mnist.npz
11493376/11490434 [==============================] - 0s 0us/step
###Markdown
Our training set is made up of 28x28 grayscale images of handwritten digits. Let's visualize what some of these images and their corresponding training labels look like.
###Code
plt.figure(figsize=(10,10))
random_inds = np.random.choice(60000,36)
for i in range(36):
plt.subplot(6,6,i+1)
plt.xticks([])
plt.yticks([])
plt.grid(False)
image_ind = random_inds[i]
plt.imshow(np.squeeze(train_images[image_ind]), cmap=plt.cm.binary)
plt.xlabel(train_labels[image_ind])
###Output
_____no_output_____
###Markdown
1.2 Neural Network for Handwritten Digit ClassificationWe'll first build a simple neural network consisting of two fully connected layers and apply this to the digit classification task. Our network will ultimately output a probability distribution over the 10 digit classes (0-9). This first architecture we will be building is depicted below:![alt_text](https://raw.githubusercontent.com/aamini/introtodeeplearning/master/lab2/img/mnist_2layers_arch.png "CNN Architecture for MNIST Classification") Fully connected neural network architectureTo define the architecture of this first fully connected neural network, we'll once again use the Keras API and define the model using the [`Sequential`](https://www.tensorflow.org/api_docs/python/tf/keras/models/Sequential) class. Note how we first use a [`Flatten`](https://www.tensorflow.org/api_docs/python/tf/keras/layers/Flatten) layer, which flattens the input so that it can be fed into the model. In this next block, you'll define the fully connected layers of this simple network.
###Code
def build_fc_model():
fc_model = tf.keras.Sequential([
# First define a Flatten layer
tf.keras.layers.Flatten(),
# '''TODO: Define the activation function for the first fully connected (Dense) layer.'''
tf.keras.layers.Dense(128, activation='relu'),
# '''TODO: Define the second Dense layer to output the classification probabilities'''
tf.keras.layers.Dense(10, activation='softmax')
])
return fc_model
model = build_fc_model()
###Output
_____no_output_____
###Markdown
As we progress through this next portion, you may find that you'll want to make changes to the architecture defined above. **Note that in order to update the model later on, you'll need to re-run the above cell to re-initialize the model.** Let's take a step back and think about the network we've just created. The first layer in this network, `tf.keras.layers.Flatten`, transforms the format of the images from a 2d-array (28 x 28 pixels), to a 1d-array of 28 * 28 = 784 pixels. You can think of this layer as unstacking rows of pixels in the image and lining them up. There are no learned parameters in this layer; it only reformats the data.After the pixels are flattened, the network consists of a sequence of two `tf.keras.layers.Dense` layers. These are fully-connected neural layers. The first `Dense` layer has 128 nodes (or neurons). The second (and last) layer (which you've defined!) should return an array of probability scores that sum to 1. Each node contains a score that indicates the probability that the current image belongs to one of the handwritten digit classes.That defines our fully connected model! Compile the modelBefore training the model, we need to define a few more settings. These are added during the model's [`compile`](https://www.tensorflow.org/api_docs/python/tf/keras/models/Sequential#compile) step:* *Loss function* — This defines how we measure how accurate the model is during training. As was covered in lecture, during training we want to minimize this function, which will "steer" the model in the right direction.* *Optimizer* — This defines how the model is updated based on the data it sees and its loss function.* *Metrics* — Here we can define metrics used to monitor the training and testing steps. In this example, we'll look at the *accuracy*, the fraction of the images that are correctly classified.We'll start out by using a stochastic gradient descent (SGD) optimizer initialized with a learning rate of 0.1. Since we are performing a categorical classification task, we'll want to use the [cross entropy loss](https://www.tensorflow.org/api_docs/python/tf/keras/metrics/sparse_categorical_crossentropy).You'll want to experiment with both the choice of optimizer and learning rate and evaluate how these affect the accuracy of the trained model.
###Code
'''TODO: Experiment with different optimizers and learning rates. How do these affect
the accuracy of the trained model? Which optimizers and/or learning rates yield
the best performance?'''
model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=1e-1),
loss='sparse_categorical_crossentropy',
metrics=['accuracy'])
###Output
_____no_output_____
###Markdown
Train the modelWe're now ready to train our model, which will involve feeding the training data (`train_images` and `train_labels`) into the model, and then asking it to learn the associations between images and labels. We'll also need to define the batch size and the number of epochs, or iterations over the MNIST dataset, to use during training. In Lab 1, we saw how we can use `GradientTape` to optimize losses and train models with stochastic gradient descent. After defining the model settings in the `compile` step, we can also accomplish training by calling the [`fit`](https://www.tensorflow.org/api_docs/python/tf/keras/models/Sequential#fit) method on an instance of the `Model` class. We will use this to train our fully connected model.
###Code
# Define the batch size and the number of epochs to use during training
BATCH_SIZE = 64
EPOCHS = 5
model.fit(train_images, train_labels, batch_size=BATCH_SIZE, epochs=EPOCHS)
###Output
Epoch 1/5
938/938 [==============================] - 2s 2ms/step - loss: 0.3696 - accuracy: 0.8982
Epoch 2/5
938/938 [==============================] - 2s 3ms/step - loss: 0.1973 - accuracy: 0.9435
Epoch 3/5
938/938 [==============================] - 2s 3ms/step - loss: 0.1471 - accuracy: 0.9580
Epoch 4/5
938/938 [==============================] - 2s 2ms/step - loss: 0.1188 - accuracy: 0.9661
Epoch 5/5
938/938 [==============================] - 2s 3ms/step - loss: 0.0995 - accuracy: 0.9724
###Markdown
As the model trains, the loss and accuracy metrics are displayed. With five epochs and a learning rate of 0.1, this fully connected model should achieve an accuracy of approximately 0.97 (or 97%) on the training data. Evaluate accuracy on the test datasetNow that we've trained the model, we can ask it to make predictions about a test set that it hasn't seen before. In this example, the `test_images` array comprises our test dataset. To evaluate accuracy, we can check to see if the model's predictions match the labels from the `test_labels` array. Use the [`evaluate`](https://www.tensorflow.org/api_docs/python/tf/keras/models/Sequential#evaluate) method to evaluate the model on the test dataset!
###Code
'''TODO: Use the evaluate method to test the model!'''
test_loss, test_acc = model.evaluate(test_images, test_labels)
print('Test accuracy:', test_acc)
###Output
313/313 [==============================] - 1s 2ms/step - loss: 0.1021 - accuracy: 0.9701
Test accuracy: 0.9700999855995178
###Markdown
You may observe that the accuracy on the test dataset is a little lower than the accuracy on the training dataset. This gap between training accuracy and test accuracy is an example of *overfitting*, when a machine learning model performs worse on new data than on its training data. What is the highest accuracy you can achieve with this first fully connected model? Since the handwritten digit classification task is pretty straightforward, you may be wondering how we can do better...![Deeper...](https://i.kym-cdn.com/photos/images/newsfeed/000/534/153/f87.jpg) 1.3 Convolutional Neural Network (CNN) for handwritten digit classification As we saw in lecture, convolutional neural networks (CNNs) are particularly well-suited for a variety of tasks in computer vision, and have achieved near-perfect accuracies on the MNIST dataset. We will now build a CNN composed of two convolutional layers and pooling layers, followed by two fully connected layers, and ultimately output a probability distribution over the 10 digit classes (0-9). The CNN we will be building is depicted below:![alt_text](https://raw.githubusercontent.com/aamini/introtodeeplearning/master/lab2/img/convnet_fig.png "CNN Architecture for MNIST Classification") Define the CNN modelWe'll use the same training and test datasets as before, and proceed similarly to our fully connected network to define and train our new CNN model. To do this we will explore two layers we have not encountered before: you can use [`keras.layers.Conv2D`](https://www.tensorflow.org/api_docs/python/tf/keras/layers/Conv2D) to define convolutional layers and [`keras.layers.MaxPool2D`](https://www.tensorflow.org/api_docs/python/tf/keras/layers/MaxPool2D) to define the pooling layers. Use the parameters shown in the network architecture above to define these layers and build the CNN model.
###Code
def build_cnn_model():
cnn_model = tf.keras.Sequential([
# TODO: Define the first convolutional layer
tf.keras.layers.Conv2D(24, 3, activation=tf.nn.relu),
# TODO: Define the first max pooling layer
tf.keras.layers.MaxPool2D((2,2)),
# TODO: Define the second convolutional layer
tf.keras.layers.Conv2D(36, 3, activation=tf.nn.relu),
# TODO: Define the second max pooling layer
tf.keras.layers.MaxPool2D((2,2)),
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(128, activation=tf.nn.relu),
# TODO: Define the last Dense layer to output the classification
# probabilities. Pay attention to the activation needed for a probability
# output
tf.keras.layers.Dense(10, activation=tf.nn.softmax)
])
return cnn_model
cnn_model = build_cnn_model()
# Initialize the model by passing some data through
cnn_model.predict(train_images[[0]])
# Print the summary of the layers in the model.
print(cnn_model.summary())
###Output
Model: "sequential_1"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
conv2d (Conv2D) multiple 240
_________________________________________________________________
max_pooling2d (MaxPooling2D) multiple 0
_________________________________________________________________
conv2d_1 (Conv2D) multiple 7812
_________________________________________________________________
max_pooling2d_1 (MaxPooling2 multiple 0
_________________________________________________________________
flatten_1 (Flatten) multiple 0
_________________________________________________________________
dense_2 (Dense) multiple 115328
_________________________________________________________________
dense_3 (Dense) multiple 1290
=================================================================
Total params: 124,670
Trainable params: 124,670
Non-trainable params: 0
_________________________________________________________________
None
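###Markdown
The parameter counts in the summary can be verified by hand, which is a good sanity check on the architecture. For a `Conv2D` layer the count is `kernel_h * kernel_w * in_channels * filters + filters` (the last term is the biases), and for a `Dense` layer it is `inputs * units + units`. The sketch below reproduces the numbers above; the 900 inputs to the first `Dense` layer come from flattening the final 5x5x36 feature map.
###Code
# Hand-computed parameter counts matching the summary above.
conv1 = 3*3*1*24 + 24      # 240
conv2 = 3*3*24*36 + 36     # 7812
dense1 = 5*5*36*128 + 128  # 115328 (flattened 5x5x36 = 900 inputs)
dense2 = 128*10 + 10       # 1290
print(conv1, conv2, dense1, dense2, conv1 + conv2 + dense1 + dense2)  # total 124670
###Output
_____no_output_____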
###Markdown
Train and test the CNN modelNow, as before, we can define the loss function, optimizer, and metrics through the `compile` method. Compile the CNN model with an optimizer and learning rate of choice:
###Code
'''TODO: Define the compile operation with your optimizer and learning rate of choice'''
cnn_model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3), loss='sparse_categorical_crossentropy', metrics=['accuracy']) # TODO
###Output
_____no_output_____
###Markdown
As was the case with the fully connected model, we can train our CNN using the `fit` method via the Keras API.
###Code
'''TODO: Use model.fit to train the CNN model, with the same batch_size and number of epochs previously used.'''
cnn_model.fit(train_images, train_labels, batch_size=64, epochs=5)
###Output
Epoch 1/5
938/938 [==============================] - 3s 3ms/step - loss: 0.1768 - accuracy: 0.9488
Epoch 2/5
938/938 [==============================] - 3s 3ms/step - loss: 0.0500 - accuracy: 0.9842
Epoch 3/5
938/938 [==============================] - 3s 3ms/step - loss: 0.0356 - accuracy: 0.9888
Epoch 4/5
938/938 [==============================] - 3s 3ms/step - loss: 0.0271 - accuracy: 0.9915
Epoch 5/5
938/938 [==============================] - 3s 4ms/step - loss: 0.0208 - accuracy: 0.9934
###Markdown
Great! Now that we've trained the model, let's evaluate it on the test dataset using the [`evaluate`](https://www.tensorflow.org/api_docs/python/tf/keras/models/Sequentialevaluate) method:
###Code
'''TODO: Use the evaluate method to test the model!'''
test_loss, test_acc = cnn_model.evaluate(test_images, test_labels)
print('Test accuracy:', test_acc)
###Output
313/313 [==============================] - 1s 2ms/step - loss: 0.0254 - accuracy: 0.9918
Test accuracy: 0.9918000102043152
###Markdown
What is the highest accuracy you're able to achieve using the CNN model, and how does the accuracy of the CNN model compare to the accuracy of the simple fully connected network? What optimizers and learning rates seem to be optimal for training the CNN model? Make predictions with the CNN modelWith the model trained, we can use it to make predictions about some images. The [`predict`](https://www.tensorflow.org/api_docs/python/tf/keras/models/Sequentialpredict) function call generates the output predictions given a set of input samples.
###Code
predictions = cnn_model.predict(test_images)
###Output
_____no_output_____
###Markdown
With this function call, the model has predicted the label for each image in the testing set. Let's take a look at the prediction for the first image in the test dataset:
###Code
predictions[0]
###Output
_____no_output_____
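###Markdown
A quick sanity check, assuming `predictions` is defined as above: because the final layer is a softmax, each prediction's 10 entries should sum to 1 (up to floating-point error).
###Code
# Each row of predictions is a probability distribution over the 10 digits.
print(predictions[0].sum())
print(np.allclose(predictions.sum(axis=1), 1.0, atol=1e-3))
###Output
_____no_output_____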
###Markdown
As you can see, a prediction is an array of 10 numbers. Recall that the output of our model is a probability distribution over the 10 digit classes. Thus, these numbers describe the model's "confidence" that the image corresponds to each of the 10 different digits. Let's look at the digit that has the highest confidence for the first image in the test dataset:
###Code
'''TODO: identify the digit with the highest confidence prediction for the first
image in the test dataset. '''
prediction = np.argmax(predictions[0])
print(prediction)
###Output
7
###Markdown
So, the model is most confident that this image is a "???". We can check the test label (remember, this is the true identity of the digit) to see if this prediction is correct:
###Code
print("Label of this digit is:", test_labels[0])
plt.imshow(test_images[0,:,:,0], cmap=plt.cm.binary)
###Output
Label of this digit is: 7
###Markdown
It is! Let's visualize the classification results on the MNIST dataset. We will plot images from the test dataset along with their predicted label, as well as a histogram that provides the prediction probabilities for each of the digits:
###Code
#@title Change the slider to look at the model's predictions! { run: "auto" }
image_index = 99 #@param {type:"slider", min:0, max:100, step:1}
plt.subplot(1,2,1)
mdl.lab2.plot_image_prediction(image_index, predictions, test_labels, test_images)
plt.subplot(1,2,2)
mdl.lab2.plot_value_prediction(image_index, predictions, test_labels)
###Output
_____no_output_____
###Markdown
We can also plot several images along with their predictions, where correct prediction labels are blue and incorrect prediction labels are red. The number gives the percent confidence (out of 100) for the predicted label. Note the model can be very confident in an incorrect prediction!
###Code
# Plots the first X test images, their predicted label, and the true label
# Color correct predictions in blue, incorrect predictions in red
num_rows = 5
num_cols = 4
num_images = num_rows*num_cols
plt.figure(figsize=(2*2*num_cols, 2*num_rows))
for i in range(num_images):
plt.subplot(num_rows, 2*num_cols, 2*i+1)
mdl.lab2.plot_image_prediction(i, predictions, test_labels, test_images)
plt.subplot(num_rows, 2*num_cols, 2*i+2)
mdl.lab2.plot_value_prediction(i, predictions, test_labels)
###Output
_____no_output_____
###Markdown
1.4 Training the model 2.0Earlier in the lab, we used the [`fit`](https://www.tensorflow.org/api_docs/python/tf/keras/models/Sequentialfit) function call to train the model. This function is quite high-level and intuitive, which is really useful for simpler models. As you may be able to tell, this function abstracts away many details of the training call, leaving us less control over how the model is trained; that finer control can be useful in other contexts. As an alternative to this, we can use the [`tf.GradientTape`](https://www.tensorflow.org/api_docs/python/tf/GradientTape) class to record differentiation operations during training, and then call the [`tf.GradientTape.gradient`](https://www.tensorflow.org/api_docs/python/tf/GradientTapegradient) function to actually compute the gradients. You may recall seeing this in Lab 1 Part 1, but let's take another look at this here. We'll use this framework to train our `cnn_model` using stochastic gradient descent.
###Code
# Rebuild the CNN model
cnn_model = build_cnn_model()
batch_size = 12
loss_history = mdl.util.LossHistory(smoothing_factor=0.95) # to record the evolution of the loss
plotter = mdl.util.PeriodicPlotter(sec=2, xlabel='Iterations', ylabel='Loss', scale='semilogy')
optimizer = tf.keras.optimizers.SGD(learning_rate=1e-2) # define our optimizer
if hasattr(tqdm, '_instances'): tqdm._instances.clear() # clear if it exists
for idx in tqdm(range(0, train_images.shape[0], batch_size)):
# First grab a batch of training data and convert the input images to tensors
(images, labels) = (train_images[idx:idx+batch_size], train_labels[idx:idx+batch_size])
images = tf.convert_to_tensor(images, dtype=tf.float32)
# GradientTape to record differentiation operations
with tf.GradientTape() as tape:
#'''TODO: feed the images into the model and obtain the predictions'''
logits = cnn_model(images)
#'''TODO: compute the categorical cross entropy loss
loss_value = tf.keras.backend.sparse_categorical_crossentropy(labels, logits) # TODO
loss_history.append(loss_value.numpy().mean()) # append the loss to the loss_history record
plotter.plot(loss_history.get())
# Backpropagation
'''TODO: Use the tape to compute the gradient against all parameters in the CNN model.
Use cnn_model.trainable_variables to access these parameters.'''
grads = tape.gradient(loss_value, cnn_model.trainable_variables)
optimizer.apply_gradients(zip(grads, cnn_model.trainable_variables))
print(loss_history.get()[-1])
###Output
0.11113820937167927
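###Markdown
An optional refinement, not required by the lab: wrapping the training step in `tf.function` compiles it into a TensorFlow graph, which typically speeds up a hand-written loop like the one above. This sketch reuses `cnn_model` and `optimizer` from the previous cell.
###Code
# Graph-compiled version of the training step above.
@tf.function
def train_step(images, labels):
  with tf.GradientTape() as tape:
    logits = cnn_model(images, training=True)
    loss_value = tf.keras.backend.sparse_categorical_crossentropy(labels, logits)
  grads = tape.gradient(loss_value, cnn_model.trainable_variables)
  optimizer.apply_gradients(zip(grads, cnn_model.trainable_variables))
  return tf.reduce_mean(loss_value)
###Output
_____no_output_____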
###Markdown
Visit MIT Deep Learning Run in Google Colab View Source on GitHub Copyright Information
###Code
# Copyright 2021 MIT 6.S191 Introduction to Deep Learning. All Rights Reserved.
#
# Licensed under the MIT License. You may not use this file except in compliance
# with the License. Use and/or modification of this code outside of 6.S191 must
# reference:
#
# © MIT 6.S191: Introduction to Deep Learning
# http://introtodeeplearning.com
#
###Output
_____no_output_____
###Markdown
Laboratory 2: Computer Vision Part 1: MNIST Digit ClassificationIn the first portion of this lab, we will build and train a convolutional neural network (CNN) for classification of handwritten digits from the famous [MNIST](http://yann.lecun.com/exdb/mnist/) dataset. The MNIST dataset consists of 60,000 training images and 10,000 test images. Our classes are the digits 0-9.First, let's download the course repository, install dependencies, and import the relevant packages we'll need for this lab.
###Code
# Import Tensorflow 2.0
%tensorflow_version 2.x
import tensorflow as tf
!pip install mitdeeplearning
import mitdeeplearning as mdl
import matplotlib.pyplot as plt
import numpy as np
import random
from tqdm import tqdm
# Check that we are using a GPU, if not switch runtimes
# using Runtime > Change Runtime Type > GPU
assert len(tf.config.list_physical_devices('GPU')) > 0
###Output
Collecting mitdeeplearning
[?25l Downloading https://files.pythonhosted.org/packages/9d/ad/650eb53c0d9d1213536fe94bc150f89b564ff5ee784bd662272584bb091b/mitdeeplearning-0.2.0.tar.gz (2.1MB)
[K |████████████████████████████████| 2.1MB 19.0MB/s
[?25hRequirement already satisfied: numpy in /usr/local/lib/python3.7/dist-packages (from mitdeeplearning) (1.19.5)
Requirement already satisfied: regex in /usr/local/lib/python3.7/dist-packages (from mitdeeplearning) (2019.12.20)
Requirement already satisfied: tqdm in /usr/local/lib/python3.7/dist-packages (from mitdeeplearning) (4.41.1)
Requirement already satisfied: gym in /usr/local/lib/python3.7/dist-packages (from mitdeeplearning) (0.17.3)
Requirement already satisfied: scipy in /usr/local/lib/python3.7/dist-packages (from gym->mitdeeplearning) (1.4.1)
Requirement already satisfied: pyglet<=1.5.0,>=1.4.0 in /usr/local/lib/python3.7/dist-packages (from gym->mitdeeplearning) (1.5.0)
Requirement already satisfied: cloudpickle<1.7.0,>=1.2.0 in /usr/local/lib/python3.7/dist-packages (from gym->mitdeeplearning) (1.3.0)
Requirement already satisfied: future in /usr/local/lib/python3.7/dist-packages (from pyglet<=1.5.0,>=1.4.0->gym->mitdeeplearning) (0.16.0)
Building wheels for collected packages: mitdeeplearning
Building wheel for mitdeeplearning (setup.py) ... [?25l[?25hdone
Created wheel for mitdeeplearning: filename=mitdeeplearning-0.2.0-cp37-none-any.whl size=2115442 sha256=cfa59a6e21e0a4f684315668eeb5a8f9ba442b8322081586eed21b9b1e39cc9d
Stored in directory: /root/.cache/pip/wheels/af/dc/2a/5c3633135e7e4ef4fd31463cfa1942cb1bae7486ab94e7a2ad
Successfully built mitdeeplearning
Installing collected packages: mitdeeplearning
Successfully installed mitdeeplearning-0.2.0
###Markdown
1.1 MNIST dataset Let's download and load the dataset and display a few random samples from it:
###Code
mnist = tf.keras.datasets.mnist
(train_images, train_labels), (test_images, test_labels) = mnist.load_data()
train_images = (np.expand_dims(train_images, axis=-1)/255.).astype(np.float32)
train_labels = (train_labels).astype(np.int64)
test_images = (np.expand_dims(test_images, axis=-1)/255.).astype(np.float32)
test_labels = (test_labels).astype(np.int64)
###Output
Downloading data from https://storage.googleapis.com/tensorflow/tf-keras-datasets/mnist.npz
11493376/11490434 [==============================] - 0s 0us/step
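###Markdown
A quick check, using the arrays defined above: after `expand_dims` and division by 255, the images should be float32 tensors of shape (N, 28, 28, 1) with pixel values in [0, 1].
###Code
# Confirm the preprocessing produced the expected shapes, dtype, and range.
print(train_images.shape, train_images.dtype)
print(train_images.min(), train_images.max())
print(test_images.shape, test_labels.shape)
###Output
_____no_output_____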
###Markdown
Our training set is made up of 28x28 grayscale images of handwritten digits. Let's visualize what some of these images and their corresponding training labels look like.
###Code
plt.figure(figsize=(10,10))
random_inds = np.random.choice(60000,36)
for i in range(36):
plt.subplot(6,6,i+1)
plt.xticks([])
plt.yticks([])
plt.grid(False)
image_ind = random_inds[i]
plt.imshow(np.squeeze(train_images[image_ind]), cmap=plt.cm.binary)
plt.xlabel(train_labels[image_ind])
###Output
_____no_output_____
###Markdown
1.2 Neural Network for Handwritten Digit ClassificationWe'll first build a simple neural network consisting of two fully connected layers and apply this to the digit classification task. Our network will ultimately output a probability distribution over the 10 digit classes (0-9). This first architecture we will be building is depicted below:![alt_text](https://raw.githubusercontent.com/aamini/introtodeeplearning/master/lab2/img/mnist_2layers_arch.png "CNN Architecture for MNIST Classification") Fully connected neural network architectureTo define the architecture of this first fully connected neural network, we'll once again use the Keras API and define the model using the [`Sequential`](https://www.tensorflow.org/api_docs/python/tf/keras/models/Sequential) class. Note how we first use a [`Flatten`](https://www.tensorflow.org/api_docs/python/tf/keras/layers/Flatten) layer, which flattens the input so that it can be fed into the model. In this next block, you'll define the fully connected layers of this simple network.
###Code
def build_fc_model():
fc_model = tf.keras.Sequential([
# First define a Flatten layer
tf.keras.layers.Flatten(),
# '''TODO: Define the activation function for the first fully connected (Dense) layer.'''
tf.keras.layers.Dense(128, activation=tf.nn.relu),
# '''TODO: Define the second Dense layer to output the classification probabilities'''
tf.keras.layers.Dense(10, activation=tf.nn.softmax)
])
return fc_model
model = build_fc_model()
###Output
_____no_output_____
###Markdown
As we progress through this next portion, you may find that you'll want to make changes to the architecture defined above. **Note that in order to update the model later on, you'll need to re-run the above cell to re-initialize the model.** Let's take a step back and think about the network we've just created. The first layer in this network, `tf.keras.layers.Flatten`, transforms the format of the images from a 2d-array (28 x 28 pixels), to a 1d-array of 28 * 28 = 784 pixels. You can think of this layer as unstacking rows of pixels in the image and lining them up. There are no learned parameters in this layer; it only reformats the data.After the pixels are flattened, the network consists of a sequence of two `tf.keras.layers.Dense` layers. These are fully-connected neural layers. The first `Dense` layer has 128 nodes (or neurons). The second (and last) layer (which you've defined!) should return an array of probability scores that sum to 1. Each node contains a score that indicates the probability that the current image belongs to one of the handwritten digit classes.That defines our fully connected model! Compile the modelBefore training the model, we need to define a few more settings. These are added during the model's [`compile`](https://www.tensorflow.org/api_docs/python/tf/keras/models/Sequentialcompile) step:* *Loss function* — This defines how we measure how accurate the model is during training. As was covered in lecture, during training we want to minimize this function, which will "steer" the model in the right direction.* *Optimizer* — This defines how the model is updated based on the data it sees and its loss function.* *Metrics* — Here we can define metrics used to monitor the training and testing steps. In this example, we'll look at the *accuracy*, the fraction of the images that are correctly classified.We'll start out by using a stochastic gradient descent (SGD) optimizer initialized with a learning rate of 0.1. Since we are performing a categorical classification task, we'll want to use the [cross entropy loss](https://www.tensorflow.org/api_docs/python/tf/keras/metrics/sparse_categorical_crossentropy).You'll want to experiment with both the choice of optimizer and learning rate and evaluate how these affect the accuracy of the trained model.
###Code
'''TODO: Experiment with different optimizers and learning rates. How do these affect
the accuracy of the trained model? Which optimizers and/or learning rates yield
the best performance?'''
model.compile(optimizer="Adam",
loss='sparse_categorical_crossentropy',
metrics=['accuracy'])
###Output
_____no_output_____
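###Markdown
One way to approach the TODO above is to loop over a few optimizer/learning-rate combinations and compare test accuracy after a short run. This is an illustrative sketch only (one epoch per setting, reusing `build_fc_model` and the data arrays defined above); conclusions from such short runs are rough.
###Code
# Compare two optimizer settings with a single quick epoch each.
for opt in [tf.keras.optimizers.SGD(learning_rate=1e-1),
            tf.keras.optimizers.Adam(learning_rate=1e-3)]:
  m = build_fc_model()
  m.compile(optimizer=opt, loss='sparse_categorical_crossentropy', metrics=['accuracy'])
  m.fit(train_images, train_labels, batch_size=64, epochs=1, verbose=0)
  print(type(opt).__name__, m.evaluate(test_images, test_labels, verbose=0))
###Output
_____no_output_____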
###Markdown
Train the modelWe're now ready to train our model, which will involve feeding the training data (`train_images` and `train_labels`) into the model, and then asking it to learn the associations between images and labels. We'll also need to define the batch size and the number of epochs, or iterations over the MNIST dataset, to use during training. In Lab 1, we saw how we can use `GradientTape` to optimize losses and train models with stochastic gradient descent. After defining the model settings in the `compile` step, we can also accomplish training by calling the [`fit`](https://www.tensorflow.org/api_docs/python/tf/keras/models/Sequentialfit) method on an instance of the `Model` class. We will use this to train our fully connected model
###Code
# Define the batch size and the number of epochs to use during training
BATCH_SIZE = 64
EPOCHS = 5
model.fit(train_images, train_labels, batch_size=BATCH_SIZE, epochs=EPOCHS)
###Output
Epoch 1/5
938/938 [==============================] - 2s 2ms/step - loss: 0.9742 - accuracy: 0.6483
Epoch 2/5
938/938 [==============================] - 2s 2ms/step - loss: 0.0344 - accuracy: 0.9898
Epoch 3/5
938/938 [==============================] - 2s 2ms/step - loss: 0.0238 - accuracy: 0.9932
Epoch 4/5
938/938 [==============================] - 2s 2ms/step - loss: 0.0176 - accuracy: 0.9952
Epoch 5/5
938/938 [==============================] - 2s 2ms/step - loss: 0.0160 - accuracy: 0.9956
###Markdown
As the model trains, the loss and accuracy metrics are displayed. With five epochs and a learning rate of 0.1, this fully connected model should achieve an accuracy of approximately 0.97 (or 97%) on the training data. Evaluate accuracy on the test datasetNow that we've trained the model, we can ask it to make predictions about a test set that it hasn't seen before. In this example, the `test_images` array comprises our test dataset. To evaluate accuracy, we can check to see if the model's predictions match the labels from the `test_labels` array. Use the [`evaluate`](https://www.tensorflow.org/api_docs/python/tf/keras/models/Sequentialevaluate) method to evaluate the model on the test dataset!
###Code
'''TODO: Use the evaluate method to test the model!'''
test_loss, test_acc = model.evaluate(test_images, test_labels)
print('Test accuracy:', test_acc)
###Output
313/313 [==============================] - 1s 2ms/step - loss: 0.0742 - accuracy: 0.9805
Test accuracy: 0.9804999828338623
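###Markdown
An optional sketch for monitoring generalization during training: `fit` accepts a `validation_split` argument that holds out a fraction of the training data and reports validation loss and accuracy after each epoch, so a growing train/validation gap is visible as it develops. This rebuilds the model using the names defined above.
###Code
# Hold out 10% of the training data as a validation set during training.
model = build_fc_model()
model.compile(optimizer="Adam", loss='sparse_categorical_crossentropy', metrics=['accuracy'])
history = model.fit(train_images, train_labels, batch_size=64, epochs=5, validation_split=0.1)
###Output
_____no_output_____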
###Markdown
You may observe that the accuracy on the test dataset is a little lower than the accuracy on the training dataset. This gap between training accuracy and test accuracy is an example of *overfitting*, when a machine learning model performs worse on new data than on its training data. What is the highest accuracy you can achieve with this first fully connected model? Since the handwritten digit classification task is pretty straightforward, you may be wondering how we can do better...![Deeper...](https://i.kym-cdn.com/photos/images/newsfeed/000/534/153/f87.jpg) 1.3 Convolutional Neural Network (CNN) for handwritten digit classification As we saw in lecture, convolutional neural networks (CNNs) are particularly well-suited for a variety of tasks in computer vision, and have achieved near-perfect accuracies on the MNIST dataset. We will now build a CNN composed of two convolutional layers and pooling layers, followed by two fully connected layers, and ultimately output a probability distribution over the 10 digit classes (0-9). The CNN we will be building is depicted below:![alt_text](https://raw.githubusercontent.com/aamini/introtodeeplearning/master/lab2/img/convnet_fig.png "CNN Architecture for MNIST Classification") Define the CNN modelWe'll use the same training and test datasets as before, and proceed similarly as our fully connected network to define and train our new CNN model. To do this we will explore two layers we have not encountered before: you can use [`keras.layers.Conv2D` ](https://www.tensorflow.org/api_docs/python/tf/keras/layers/Conv2D) to define convolutional layers and [`keras.layers.MaxPool2D`](https://www.tensorflow.org/api_docs/python/tf/keras/layers/MaxPool2D) to define the pooling layers. Use the parameters shown in the network architecture above to define these layers and build the CNN model.
###Code
def build_cnn_model():
cnn_model = tf.keras.Sequential([
# TODO: Define the first convolutional layer
tf.keras.layers.Conv2D(24, 3, activation='relu'),
# TODO: Define the first max pooling layer
tf.keras.layers.MaxPool2D(2),
# TODO: Define the second convolutional layer
tf.keras.layers.Conv2D(36, 3, activation='relu'),
# TODO: Define the second max pooling layer
tf.keras.layers.MaxPool2D(2),
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(128, activation=tf.nn.relu),
# TODO: Define the last Dense layer to output the classification
# probabilities. Pay attention to the activation needed for a probability
# output
tf.keras.layers.Dense(10, activation='softmax')
])
return cnn_model
cnn_model = build_cnn_model()
# Initialize the model by passing some data through
cnn_model.predict(train_images[[0]])
# Print the summary of the layers in the model.
print(cnn_model.summary())
###Output
Model: "sequential_6"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
conv2d_10 (Conv2D) (None, 26, 26, 24) 240
_________________________________________________________________
max_pooling2d_10 (MaxPooling (None, 13, 13, 24) 0
_________________________________________________________________
conv2d_11 (Conv2D) (None, 11, 11, 36) 7812
_________________________________________________________________
max_pooling2d_11 (MaxPooling (None, 5, 5, 36) 0
_________________________________________________________________
flatten_6 (Flatten) (None, 900) 0
_________________________________________________________________
dense_12 (Dense) (None, 128) 115328
_________________________________________________________________
dense_13 (Dense) (None, 10) 1290
=================================================================
Total params: 124,670
Trainable params: 124,670
Non-trainable params: 0
_________________________________________________________________
None
###Markdown
Train and test the CNN modelNow, as before, we can define the loss function, optimizer, and metrics through the `compile` method. Compile the CNN model with an optimizer and learning rate of choice:
###Code
'''TODO: Define the compile operation with your optimizer and learning rate of choice'''
cnn_model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy']) # TODO
###Output
_____no_output_____
###Markdown
As was the case with the fully connected model, we can train our CNN using the `fit` method via the Keras API.
###Code
'''TODO: Use model.fit to train the CNN model, with the same batch_size and number of epochs previously used.'''
cnn_model.fit(train_images, train_labels, batch_size=BATCH_SIZE, epochs=EPOCHS)
###Output
Epoch 1/5
938/938 [==============================] - 3s 3ms/step - loss: 0.4002 - accuracy: 0.8868
Epoch 2/5
938/938 [==============================] - 3s 3ms/step - loss: 0.0524 - accuracy: 0.9839
Epoch 3/5
938/938 [==============================] - 3s 3ms/step - loss: 0.0342 - accuracy: 0.9892
Epoch 4/5
938/938 [==============================] - 3s 3ms/step - loss: 0.0246 - accuracy: 0.9923
Epoch 5/5
938/938 [==============================] - 2s 3ms/step - loss: 0.0192 - accuracy: 0.9942
###Markdown
Great! Now that we've trained the model, let's evaluate it on the test dataset using the [`evaluate`](https://www.tensorflow.org/api_docs/python/tf/keras/models/Sequentialevaluate) method:
###Code
'''TODO: Use the evaluate method to test the model!'''
test_loss, test_acc = cnn_model.evaluate(test_images, test_labels)
print('Test accuracy:', test_acc)
###Output
313/313 [==============================] - 1s 2ms/step - loss: 0.0270 - accuracy: 0.9909
Test accuracy: 0.9908999800682068
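###Markdown
An optional aside, not part of the lab: a trained Keras model can be saved to disk and reloaded, weights and compile settings included. The filename here is illustrative.
###Code
# Save the trained CNN and confirm the reloaded copy evaluates identically.
cnn_model.save('mnist_cnn.h5')
restored = tf.keras.models.load_model('mnist_cnn.h5')
print(restored.evaluate(test_images, test_labels, verbose=0))
###Output
_____no_output_____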
###Markdown
What is the highest accuracy you're able to achieve using the CNN model, and how does the accuracy of the CNN model compare to the accuracy of the simple fully connected network? What optimizers and learning rates seem to be optimal for training the CNN model? Make predictions with the CNN modelWith the model trained, we can use it to make predictions about some images. The [`predict`](https://www.tensorflow.org/api_docs/python/tf/keras/models/Sequentialpredict) function call generates the output predictions given a set of input samples.
###Code
predictions = cnn_model.predict(test_images)
###Output
_____no_output_____
###Markdown
With this function call, the model has predicted the label for each image in the testing set. Let's take a look at the prediction for the first image in the test dataset:
###Code
predictions[0]
###Output
_____no_output_____
###Markdown
As you can see, a prediction is an array of 10 numbers. Recall that the output of our model is a probability distribution over the 10 digit classes. Thus, these numbers describe the model's "confidence" that the image corresponds to each of the 10 different digits. Let's look at the digit that has the highest confidence for the first image in the test dataset:
###Code
'''TODO: identify the digit with the highest confidence prediction for the first
image in the test dataset. '''
prediction = np.argmax(predictions[0])
print(prediction)
###Output
7
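###Markdown
Beyond the single most likely digit, the full distribution can be ranked. A small sketch using `predictions` from above: `np.argsort` sorts the probabilities in ascending order, and reversing that order gives the model's top guesses.
###Code
# Top-3 most likely digits for the first test image, with their probabilities.
top3 = np.argsort(predictions[0])[::-1][:3]
for digit in top3:
  print(digit, predictions[0][digit])
###Output
_____no_output_____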
###Markdown
So, the model is most confident that this image is a "???". We can check the test label (remember, this is the true identity of the digit) to see if this prediction is correct:
###Code
print("Label of this digit is:", test_labels[0])
plt.imshow(test_images[0,:,:,0], cmap=plt.cm.binary)
###Output
Label of this digit is: 7
###Markdown
It is! Let's visualize the classification results on the MNIST dataset. We will plot images from the test dataset along with their predicted label, as well as a histogram that provides the prediction probabilities for each of the digits:
###Code
#@title Change the slider to look at the model's predictions! { run: "auto" }
image_index = 100 #@param {type:"slider", min:0, max:100, step:1}
plt.subplot(1,2,1)
mdl.lab2.plot_image_prediction(image_index, predictions, test_labels, test_images)
plt.subplot(1,2,2)
mdl.lab2.plot_value_prediction(image_index, predictions, test_labels)
###Output
_____no_output_____
###Markdown
Visit MIT Deep Learning Run in Google Colab View Source on GitHub Copyright Information
###Code
# Copyright 2021 MIT 6.S191 Introduction to Deep Learning. All Rights Reserved.
#
# Licensed under the MIT License. You may not use this file except in compliance
# with the License. Use and/or modification of this code outside of 6.S191 must
# reference:
#
# © MIT 6.S191: Introduction to Deep Learning
# http://introtodeeplearning.com
#
###Output
_____no_output_____
###Markdown
Laboratory 2: Computer Vision Part 1: MNIST Digit ClassificationIn the first portion of this lab, we will build and train a convolutional neural network (CNN) for classification of handwritten digits from the famous [MNIST](http://yann.lecun.com/exdb/mnist/) dataset. The MNIST dataset consists of 60,000 training images and 10,000 test images. Our classes are the digits 0-9.First, let's download the course repository, install dependencies, and import the relevant packages we'll need for this lab.
###Code
# Import Tensorflow 2.0
%tensorflow_version 2.x
import tensorflow as tf
!pip install mitdeeplearning
import mitdeeplearning as mdl
import matplotlib.pyplot as plt
import numpy as np
import random
from tqdm import tqdm
# Check that we are using a GPU, if not switch runtimes
# using Runtime > Change Runtime Type > GPU
assert len(tf.config.list_physical_devices('GPU')) > 0
###Output
Collecting mitdeeplearning
[?25l Downloading https://files.pythonhosted.org/packages/9d/ad/650eb53c0d9d1213536fe94bc150f89b564ff5ee784bd662272584bb091b/mitdeeplearning-0.2.0.tar.gz (2.1MB)
[K |████████████████████████████████| 2.1MB 8.7MB/s
[?25hRequirement already satisfied: numpy in /usr/local/lib/python3.7/dist-packages (from mitdeeplearning) (1.19.5)
Requirement already satisfied: regex in /usr/local/lib/python3.7/dist-packages (from mitdeeplearning) (2019.12.20)
Requirement already satisfied: tqdm in /usr/local/lib/python3.7/dist-packages (from mitdeeplearning) (4.41.1)
Requirement already satisfied: gym in /usr/local/lib/python3.7/dist-packages (from mitdeeplearning) (0.17.3)
Requirement already satisfied: pyglet<=1.5.0,>=1.4.0 in /usr/local/lib/python3.7/dist-packages (from gym->mitdeeplearning) (1.5.0)
Requirement already satisfied: cloudpickle<1.7.0,>=1.2.0 in /usr/local/lib/python3.7/dist-packages (from gym->mitdeeplearning) (1.3.0)
Requirement already satisfied: scipy in /usr/local/lib/python3.7/dist-packages (from gym->mitdeeplearning) (1.4.1)
Requirement already satisfied: future in /usr/local/lib/python3.7/dist-packages (from pyglet<=1.5.0,>=1.4.0->gym->mitdeeplearning) (0.16.0)
Building wheels for collected packages: mitdeeplearning
Building wheel for mitdeeplearning (setup.py) ... [?25l[?25hdone
Created wheel for mitdeeplearning: filename=mitdeeplearning-0.2.0-cp37-none-any.whl size=2115442 sha256=5e360b6e574dfb253b60d711107e4b6c7e2f3c22f239f8f4dd30d6c8738834a6
Stored in directory: /root/.cache/pip/wheels/af/dc/2a/5c3633135e7e4ef4fd31463cfa1942cb1bae7486ab94e7a2ad
Successfully built mitdeeplearning
Installing collected packages: mitdeeplearning
Successfully installed mitdeeplearning-0.2.0
###Markdown
1.1 MNIST dataset Let's download and load the dataset and display a few random samples from it:
###Code
mnist = tf.keras.datasets.mnist
(train_images, train_labels), (test_images, test_labels) = mnist.load_data()
print(train_images.shape)
train_images = (np.expand_dims(train_images, axis=-1)/255.).astype(np.float32)
train_labels = (train_labels).astype(np.int64)
test_images = (np.expand_dims(test_images, axis=-1)/255.).astype(np.float32)
test_labels = (test_labels).astype(np.int64)
print(train_images.shape)
###Output
Downloading data from https://storage.googleapis.com/tensorflow/tf-keras-datasets/mnist.npz
11493376/11490434 [==============================] - 0s 0us/step
(60000, 28, 28)
(60000, 28, 28, 1)
###Markdown
Our training set is made up of 28x28 grayscale images of handwritten digits. Let's visualize what some of these images and their corresponding training labels look like.
###Code
plt.figure(figsize=(10,10))
random_inds = np.random.choice(60000,36)
for i in range(36):
plt.subplot(6,6,i+1)
plt.xticks([])
plt.yticks([])
plt.grid(False)
image_ind = random_inds[i]
plt.imshow(np.squeeze(train_images[image_ind]), cmap=plt.cm.binary)
plt.xlabel(train_labels[image_ind])
###Output
_____no_output_____
###Markdown
1.2 Neural Network for Handwritten Digit ClassificationWe'll first build a simple neural network consisting of two fully connected layers and apply this to the digit classification task. Our network will ultimately output a probability distribution over the 10 digit classes (0-9). This first architecture we will be building is depicted below:![alt_text](https://raw.githubusercontent.com/aamini/introtodeeplearning/master/lab2/img/mnist_2layers_arch.png "CNN Architecture for MNIST Classification") Fully connected neural network architectureTo define the architecture of this first fully connected neural network, we'll once again use the Keras API and define the model using the [`Sequential`](https://www.tensorflow.org/api_docs/python/tf/keras/models/Sequential) class. Note how we first use a [`Flatten`](https://www.tensorflow.org/api_docs/python/tf/keras/layers/Flatten) layer, which flattens the input so that it can be fed into the model. In this next block, you'll define the fully connected layers of this simple network.
###Code
def build_fc_model():
fc_model = tf.keras.Sequential([
# First define a Flatten layer
tf.keras.layers.Flatten(),
# '''TODO: Define the activation function for the first fully connected (Dense) layer.'''
tf.keras.layers.Dense(128, activation="relu"),
# '''TODO: Define the second Dense layer to output the classification probabilities'''
tf.keras.layers.Dense(10, activation="softmax"),
])
return fc_model
model = build_fc_model()
###Output
_____no_output_____
###Markdown
As we progress through this next portion, you may find that you'll want to make changes to the architecture defined above. **Note that in order to update the model later on, you'll need to re-run the above cell to re-initialize the model.** Let's take a step back and think about the network we've just created. The first layer in this network, `tf.keras.layers.Flatten`, transforms the format of the images from a 2d-array (28 x 28 pixels), to a 1d-array of 28 * 28 = 784 pixels. You can think of this layer as unstacking rows of pixels in the image and lining them up. There are no learned parameters in this layer; it only reformats the data.After the pixels are flattened, the network consists of a sequence of two `tf.keras.layers.Dense` layers. These are fully-connected neural layers. The first `Dense` layer has 128 nodes (or neurons). The second (and last) layer (which you've defined!) should return an array of probability scores that sum to 1. Each node contains a score that indicates the probability that the current image belongs to one of the handwritten digit classes.That defines our fully connected model! Compile the modelBefore training the model, we need to define a few more settings. These are added during the model's [`compile`](https://www.tensorflow.org/api_docs/python/tf/keras/models/Sequentialcompile) step:* *Loss function* — This defines how we measure how accurate the model is during training. As was covered in lecture, during training we want to minimize this function, which will "steer" the model in the right direction.* *Optimizer* — This defines how the model is updated based on the data it sees and its loss function.* *Metrics* — Here we can define metrics used to monitor the training and testing steps. In this example, we'll look at the *accuracy*, the fraction of the images that are correctly classified.We'll start out by using a stochastic gradient descent (SGD) optimizer initialized with a learning rate of 0.1. Since we are performing a categorical classification task, we'll want to use the [cross entropy loss](https://www.tensorflow.org/api_docs/python/tf/keras/metrics/sparse_categorical_crossentropy).You'll want to experiment with both the choice of optimizer and learning rate and evaluate how these affect the accuracy of the trained model.
###Code
'''TODO: Experiment with different optimizers and learning rates. How do these affect
the accuracy of the trained model? Which optimizers and/or learning rates yield
the best performance?'''
model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=1e-1),
loss='sparse_categorical_crossentropy',
metrics=['accuracy'])
###Output
_____no_output_____
###Markdown
Train the modelWe're now ready to train our model, which will involve feeding the training data (`train_images` and `train_labels`) into the model, and then asking it to learn the associations between images and labels. We'll also need to define the batch size and the number of epochs, or iterations over the MNIST dataset, to use during training. In Lab 1, we saw how we can use `GradientTape` to optimize losses and train models with stochastic gradient descent. After defining the model settings in the `compile` step, we can also accomplish training by calling the [`fit`](https://www.tensorflow.org/api_docs/python/tf/keras/models/Sequentialfit) method on an instance of the `Model` class. We will use this to train our fully connected model
###Code
# Define the batch size and the number of epochs to use during training
BATCH_SIZE = 64
EPOCHS = 5
model.fit(train_images, train_labels, batch_size=BATCH_SIZE, epochs=EPOCHS)
tf.keras.utils.plot_model(model, "my_first_model.png",show_shapes=True)
###Output
_____no_output_____
###Markdown
As the model trains, the loss and accuracy metrics are displayed. With five epochs and a learning rate of 0.1, this fully connected model should achieve an accuracy of approximately 0.97 (or 97%) on the training data. Evaluate accuracy on the test datasetNow that we've trained the model, we can ask it to make predictions about a test set that it hasn't seen before. In this example, the `test_images` array comprises our test dataset. To evaluate accuracy, we can check to see if the model's predictions match the labels from the `test_labels` array. Use the [`evaluate`](https://www.tensorflow.org/api_docs/python/tf/keras/models/Sequentialevaluate) method to evaluate the model on the test dataset!
###Code
'''TODO: Use the evaluate method to test the model!'''
model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=1e-1),
loss='sparse_categorical_crossentropy',
metrics=['accuracy'])
test_loss, test_acc = model.evaluate(x=test_images,y=test_labels,batch_size=BATCH_SIZE)# TODO
print('Test accuracy:', test_acc)
###Output
157/157 [==============================] - 1s 2ms/step - loss: 0.1113 - accuracy: 0.9679
Test accuracy: 0.9678999781608582
###Markdown
You may observe that the accuracy on the test dataset is a little lower than the accuracy on the training dataset. This gap between training accuracy and test accuracy is an example of *overfitting*, when a machine learning model performs worse on new data than on its training data. What is the highest accuracy you can achieve with this first fully connected model? Since the handwritten digit classification task is pretty straightforward, you may be wondering how we can do better...![Deeper...](https://i.kym-cdn.com/photos/images/newsfeed/000/534/153/f87.jpg) 1.3 Convolutional Neural Network (CNN) for handwritten digit classification As we saw in lecture, convolutional neural networks (CNNs) are particularly well-suited for a variety of tasks in computer vision, and have achieved near-perfect accuracies on the MNIST dataset. We will now build a CNN composed of two convolutional layers and pooling layers, followed by two fully connected layers, and ultimately output a probability distribution over the 10 digit classes (0-9). The CNN we will be building is depicted below:![alt_text](https://raw.githubusercontent.com/aamini/introtodeeplearning/master/lab2/img/convnet_fig.png "CNN Architecture for MNIST Classification") Define the CNN modelWe'll use the same training and test datasets as before, and proceed similarly as our fully connected network to define and train our new CNN model. To do this we will explore two layers we have not encountered before: you can use [`keras.layers.Conv2D` ](https://www.tensorflow.org/api_docs/python/tf/keras/layers/Conv2D) to define convolutional layers and [`keras.layers.MaxPool2D`](https://www.tensorflow.org/api_docs/python/tf/keras/layers/MaxPool2D) to define the pooling layers. Use the parameters shown in the network architecture above to define these layers and build the CNN model.
###Code
def build_cnn_model():
cnn_model = tf.keras.Sequential([
# TODO: Define the first convolutional layer
tf.keras.layers.Conv2D(24, 3, activation=tf.nn.relu, input_shape=train_images.shape[1:]),  # ReLU per the architecture above
# TODO: Define the first max pooling layer
tf.keras.layers.MaxPool2D(pool_size=(2,2)),
# TODO: Define the second convolutional layer
tf.keras.layers.Conv2D(36, 3, activation=tf.nn.relu),  # ReLU per the architecture above
# TODO: Define the second max pooling layer
tf.keras.layers.MaxPool2D(pool_size=(2,2)),
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(128, activation=tf.nn.relu),
# TODO: Define the last Dense layer to output the classification
# probabilities. Pay attention to the activation needed for a probability
# output
#'''TODO: Dense layer to output classification probabilities'''
tf.keras.layers.Dense(10, activation="softmax"),
])
return cnn_model
cnn_model = build_cnn_model()
# Initialize the model by passing some data through
cnn_model.predict(train_images[[0]])
# Print the summary of the layers in the model.
print(cnn_model.summary())
###Output
Model: "sequential_1"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
conv2d (Conv2D) (None, 26, 26, 24) 240
_________________________________________________________________
max_pooling2d (MaxPooling2D) (None, 13, 13, 24) 0
_________________________________________________________________
conv2d_1 (Conv2D) (None, 11, 11, 36) 7812
_________________________________________________________________
max_pooling2d_1 (MaxPooling2 (None, 5, 5, 36) 0
_________________________________________________________________
flatten_1 (Flatten) (None, 900) 0
_________________________________________________________________
dense_2 (Dense) (None, 128) 115328
_________________________________________________________________
dense_3 (Dense) (None, 10) 1290
=================================================================
Total params: 124,670
Trainable params: 124,670
Non-trainable params: 0
_________________________________________________________________
None
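###Markdown
As a cross-check on the summary above, the total parameter count is also available programmatically via `count_params()` (the model must be built first, which the `predict` call above has already done).
###Code
# Should print 124670, matching "Total params" in the summary.
print(cnn_model.count_params())
###Output
_____no_output_____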
###Markdown
Train and test the CNN modelNow, as before, we can define the loss function, optimizer, and metrics through the `compile` method. Compile the CNN model with an optimizer and learning rate of choice:
###Code
'''TODO: Define the compile operation with your optimizer and learning rate of choice'''
cnn_model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=1e-3),
loss='sparse_categorical_crossentropy',
metrics=['accuracy']) # TODO
###Output
_____no_output_____
###Markdown
As was the case with the fully connected model, we can train our CNN using the `fit` method via the Keras API.
###Code
'''TODO: Use model.fit to train the CNN model, with the same batch_size and number of epochs previously used.'''
cnn_model.fit(train_images, train_labels, batch_size=BATCH_SIZE, epochs=EPOCHS)
###Output
Epoch 1/5
938/938 [==============================] - 4s 3ms/step - loss: 0.2026 - accuracy: 0.9423
Epoch 2/5
938/938 [==============================] - 3s 3ms/step - loss: 0.0657 - accuracy: 0.9801
Epoch 3/5
938/938 [==============================] - 3s 3ms/step - loss: 0.0455 - accuracy: 0.9863
Epoch 4/5
938/938 [==============================] - 3s 3ms/step - loss: 0.0339 - accuracy: 0.9896
Epoch 5/5
938/938 [==============================] - 3s 3ms/step - loss: 0.0263 - accuracy: 0.9922
###Markdown
Great! Now that we've trained the model, let's evaluate it on the test dataset using the [`evaluate`](https://www.tensorflow.org/api_docs/python/tf/keras/models/Sequentialevaluate) method:
###Code
# '''TODO: Use the evaluate method to test the model!'''
# test_loss, test_acc = # TODO
# Note: recompiling here is optional -- evaluate() uses only the loss and
# metrics, and the optimizer plays no role during evaluation.
cnn_model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=1e-1),
loss='sparse_categorical_crossentropy',
metrics=['accuracy'])
test_loss, test_acc = cnn_model.evaluate(x=test_images,y=test_labels,batch_size=BATCH_SIZE)# TODO
print('Test accuracy:', test_acc)
###Output
157/157 [==============================] - 1s 2ms/step - loss: 0.0354 - accuracy: 0.9882
Test accuracy: 0.9882000088691711
###Markdown
What is the highest accuracy you're able to achieve using the CNN model, and how does the accuracy of the CNN model compare to the accuracy of the simple fully connected network? What optimizers and learning rates seem to be optimal for training the CNN model? Make predictions with the CNN modelWith the model trained, we can use it to make predictions about some images. The [`predict`](https://www.tensorflow.org/api_docs/python/tf/keras/models/Sequentialpredict) function call generates the output predictions given a set of input samples.
###Code
predictions = cnn_model.predict(test_images)
###Output
_____no_output_____
###Markdown
With this function call, the model has predicted the label for each image in the testing set. Let's take a look at the prediction for the first image in the test dataset:
###Code
predictions[0]
type(predictions)
###Output
_____no_output_____
###Markdown
As you can see, a prediction is an array of 10 numbers. Recall that the output of our model is a probability distribution over the 10 digit classes. Thus, these numbers describe the model's "confidence" that the image corresponds to each of the 10 different digits. Let's look at the digit that has the highest confidence for the first image in the test dataset:
###Code
'''TODO: identify the digit with the highest confidence prediction for the first
image in the test dataset. '''
# argmax over axis=1 gives the most likely digit for every test image at once;
# the first entry is the prediction for the first test image.
prediction = predictions.argmax(axis=1) # TODO
print(prediction)
###Output
[7 2 1 ... 4 5 6]
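###Markdown
Since `prediction` above already holds the argmax for every test image, the overall test accuracy follows in one line by comparing it against `test_labels`; this should agree closely with the accuracy reported by `evaluate`.
###Code
# Fraction of test images whose most likely digit matches the true label.
print('Accuracy from predictions:', np.mean(prediction == test_labels))
###Output
_____no_output_____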
###Markdown
So, the model is most confident that this image is a "???". We can check the test label (remember, this is the true identity of the digit) to see if this prediction is correct:
###Code
print("Label of this digit is:", test_labels[0])
plt.imshow(test_images[0,:,:,0], cmap=plt.cm.binary)
###Output
Label of this digit is: 7
###Markdown
It is! Let's visualize the classification results on the MNIST dataset. We will plot images from the test dataset along with their predicted label, as well as a histogram that provides the prediction probabilities for each of the digits:
###Code
#@title Change the slider to look at the model's predictions! { run: "auto" }
image_index = 31 #@param {type:"slider", min:0, max:100, step:1}
plt.subplot(1,2,1)
mdl.lab2.plot_image_prediction(image_index, predictions, test_labels, test_images)
plt.subplot(1,2,2)
mdl.lab2.plot_value_prediction(image_index, predictions, test_labels)
###Output
_____no_output_____
###Markdown
We can also plot several images along with their predictions, where correct prediction labels are blue and incorrect prediction labels are red. The number gives the percent confidence (out of 100) for the predicted label. Note that the model can be very confident in an incorrect prediction!
###Code
# Plots the first X test images, their predicted label, and the true label
# Color correct predictions in blue, incorrect predictions in red
num_rows = 5
num_cols = 4
num_images = num_rows*num_cols
plt.figure(figsize=(2*2*num_cols, 2*num_rows))
for i in range(num_images):
plt.subplot(num_rows, 2*num_cols, 2*i+1)
mdl.lab2.plot_image_prediction(i, predictions, test_labels, test_images)
plt.subplot(num_rows, 2*num_cols, 2*i+2)
mdl.lab2.plot_value_prediction(i, predictions, test_labels)
###Output
_____no_output_____
###Markdown
1.4 Training the model 2.0

Earlier in the lab, we used the [`fit`](https://www.tensorflow.org/api_docs/python/tf/keras/models/Sequential#fit) function call to train the model. This function is quite high-level and intuitive, which is really useful for simpler models. As you may be able to tell, this function abstracts away many details of the training call, and we give up some control over training that could be useful in other contexts. As an alternative, we can use the [`tf.GradientTape`](https://www.tensorflow.org/api_docs/python/tf/GradientTape) class to record differentiation operations during training, and then call the [`tf.GradientTape.gradient`](https://www.tensorflow.org/api_docs/python/tf/GradientTape#gradient) function to actually compute the gradients. You may recall seeing this in Lab 1 Part 1, but let's take another look at it here.

We'll use this framework to train our `cnn_model` using stochastic gradient descent.
###Code
# Rebuild the CNN model
cnn_model = build_cnn_model()
batch_size = 12
loss_history = mdl.util.LossHistory(smoothing_factor=0.95) # to record the evolution of the loss
plotter = mdl.util.PeriodicPlotter(sec=2, xlabel='Iterations', ylabel='Loss', scale='semilogy')
optimizer = tf.keras.optimizers.SGD(learning_rate=1e-2) # define our optimizer
if hasattr(tqdm, '_instances'): tqdm._instances.clear() # clear if it exists
for idx in tqdm(range(0, train_images.shape[0], batch_size)):
# First grab a batch of training data and convert the input images to tensors
(images, labels) = (train_images[idx:idx+batch_size], train_labels[idx:idx+batch_size])
images = tf.convert_to_tensor(images, dtype=tf.float32)
# GradientTape to record differentiation operations
with tf.GradientTape() as tape:
#'''TODO: feed the images into the model and obtain the predictions'''
logits = cnn_model(images,training=True)# TODO
    #'''TODO: compute the sparse categorical cross entropy loss'''
    loss_value = tf.keras.backend.sparse_categorical_crossentropy(labels, logits) # TODO
loss_history.append(loss_value.numpy().mean()) # append the loss to the loss_history record
plotter.plot(loss_history.get())
# Backpropagation
'''TODO: Use the tape to compute the gradient against all parameters in the CNN model.
Use cnn_model.trainable_variables to access these parameters.'''
grads = tape.gradient(loss_value, cnn_model.trainable_variables) # TODO
optimizer.apply_gradients(zip(grads, cnn_model.trainable_variables))
###Output
_____no_output_____
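###Markdown
Because the custom loop above bypasses `compile`, the model has no loss or metrics attached for the Keras `evaluate` API. A minimal sketch of evaluating the custom-trained model, assuming the test arrays from earlier in the lab (the optimizer passed to `compile` here is irrelevant for evaluation):
###Code
# Attach a loss and metric so the Keras evaluate API can be used after custom training.
cnn_model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=1e-2),
                  loss='sparse_categorical_crossentropy',
                  metrics=['accuracy'])
test_loss, test_acc = cnn_model.evaluate(test_images, test_labels, verbose=0)
print('Test accuracy after custom training:', test_acc)
###Output
_____no_output_____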
###Markdown
Copyright Information
###Code
# Copyright 2021 MIT 6.S191 Introduction to Deep Learning. All Rights Reserved.
#
# Licensed under the MIT License. You may not use this file except in compliance
# with the License. Use and/or modification of this code outside of 6.S191 must
# reference:
#
# © MIT 6.S191: Introduction to Deep Learning
# http://introtodeeplearning.com
#
###Output
_____no_output_____
###Markdown
Laboratory 2: Computer Vision

Part 1: MNIST Digit Classification

In the first portion of this lab, we will build and train a convolutional neural network (CNN) for classification of handwritten digits from the famous [MNIST](http://yann.lecun.com/exdb/mnist/) dataset. The MNIST dataset consists of 60,000 training images and 10,000 test images. Our classes are the digits 0-9.

First, let's download the course repository, install dependencies, and import the relevant packages we'll need for this lab.
###Code
# Import Tensorflow 2.0
import tensorflow as tf
!pip install mitdeeplearning
import mitdeeplearning as mdl
import matplotlib.pyplot as plt
import numpy as np
import random
from tqdm import tqdm
# Check that we are using a GPU, if not switch runtimes
# using Runtime > Change Runtime Type > GPU
assert len(tf.config.list_physical_devices('GPU')) > 0
###Output
Requirement already satisfied: mitdeeplearning in c:\users\dell\.conda\envs\tensorflow\lib\site-packages (0.2.0)
Requirement already satisfied: numpy in c:\users\dell\.conda\envs\tensorflow\lib\site-packages (from mitdeeplearning) (1.19.1)
Requirement already satisfied: gym in c:\users\dell\.conda\envs\tensorflow\lib\site-packages (from mitdeeplearning) (0.18.0)
Requirement already satisfied: regex in c:\users\dell\.conda\envs\tensorflow\lib\site-packages (from mitdeeplearning) (2021.4.4)
Requirement already satisfied: tqdm in c:\users\dell\.conda\envs\tensorflow\lib\site-packages (from mitdeeplearning) (4.60.0)
Requirement already satisfied: Pillow<=7.2.0 in c:\users\dell\.conda\envs\tensorflow\lib\site-packages (from gym->mitdeeplearning) (7.2.0)
Requirement already satisfied: scipy in c:\users\dell\.conda\envs\tensorflow\lib\site-packages (from gym->mitdeeplearning) (1.5.2)
Requirement already satisfied: pyglet<=1.5.0,>=1.4.0 in c:\users\dell\.conda\envs\tensorflow\lib\site-packages (from gym->mitdeeplearning) (1.5.0)
Requirement already satisfied: cloudpickle<1.7.0,>=1.2.0 in c:\users\dell\.conda\envs\tensorflow\lib\site-packages (from gym->mitdeeplearning) (1.6.0)
Requirement already satisfied: future in c:\users\dell\.conda\envs\tensorflow\lib\site-packages (from pyglet<=1.5.0,>=1.4.0->gym->mitdeeplearning) (0.18.2)
###Markdown
1.1 MNIST dataset

Let's download and load the dataset and display a few random samples from it:
###Code
mnist = tf.keras.datasets.mnist
(train_images, train_labels), (test_images, test_labels) = mnist.load_data()
# Add a trailing channel dimension and rescale pixel values from [0, 255] to [0, 1]
train_images = (np.expand_dims(train_images, axis=-1)/255.).astype(np.float32)
train_labels = (train_labels).astype(np.int64)
test_images = (np.expand_dims(test_images, axis=-1)/255.).astype(np.float32)
test_labels = (test_labels).astype(np.int64)
###Output
Downloading data from https://storage.googleapis.com/tensorflow/tf-keras-datasets/mnist.npz
11493376/11490434 [==============================] - 9s 1us/step
###Markdown
Our training set is made up of 28x28 grayscale images of handwritten digits. Let's visualize what some of these images and their corresponding training labels look like.
###Code
plt.figure(figsize=(10,10))
random_inds = np.random.choice(60000,36)
for i in range(36):
plt.subplot(6,6,i+1)
plt.xticks([])
plt.yticks([])
plt.grid(False)
image_ind = random_inds[i]
plt.imshow(np.squeeze(train_images[image_ind]), cmap=plt.cm.binary)
plt.xlabel(train_labels[image_ind])
###Output
_____no_output_____
###Markdown
1.2 Neural Network for Handwritten Digit Classification

We'll first build a simple neural network consisting of two fully connected layers and apply this to the digit classification task. Our network will ultimately output a probability distribution over the 10 digit classes (0-9). This first architecture we will be building is depicted below:

![alt_text](https://raw.githubusercontent.com/aamini/introtodeeplearning/master/lab2/img/mnist_2layers_arch.png "CNN Architecture for MNIST Classification")

Fully connected neural network architecture

To define the architecture of this first fully connected neural network, we'll once again use the Keras API and define the model using the [`Sequential`](https://www.tensorflow.org/api_docs/python/tf/keras/models/Sequential) class. Note how we first use a [`Flatten`](https://www.tensorflow.org/api_docs/python/tf/keras/layers/Flatten) layer, which flattens the input so that it can be fed into the model. In this next block, you'll define the fully connected layers of this simple network.
###Code
def build_fc_model():
fc_model = tf.keras.Sequential([
# First define a Flatten layer
tf.keras.layers.Flatten(),
# '''TODO: Define the activation function for the first fully connected (Dense) layer.'''
tf.keras.layers.Dense(128, activation=tf.nn.relu),
# tf.keras.layers.Dense(128, activation= '''TODO'''),
# '''TODO: Define the second Dense layer to output the classification probabilities'''
tf.keras.layers.Dense(10, activation=tf.nn.softmax)
# [TODO Dense layer to output classification probabilities]
])
return fc_model
model = build_fc_model()
###Output
_____no_output_____
###Markdown
As we progress through this next portion, you may find that you'll want to make changes to the architecture defined above. **Note that in order to update the model later on, you'll need to re-run the above cell to re-initialize the model.**

Let's take a step back and think about the network we've just created. The first layer in this network, `tf.keras.layers.Flatten`, transforms the format of the images from a 2d-array (28 x 28 pixels) to a 1d-array of 28 * 28 = 784 pixels. You can think of this layer as unstacking rows of pixels in the image and lining them up. There are no learned parameters in this layer; it only reformats the data.

After the pixels are flattened, the network consists of a sequence of two `tf.keras.layers.Dense` layers. These are fully-connected neural layers. The first `Dense` layer has 128 nodes (or neurons). The second (and last) layer (which you've defined!) should return an array of probability scores that sum to 1. Each node contains a score that indicates the probability that the current image belongs to one of the handwritten digit classes.

That defines our fully connected model!

Compile the model

Before training the model, we need to define a few more settings. These are added during the model's [`compile`](https://www.tensorflow.org/api_docs/python/tf/keras/models/Sequential#compile) step:

* *Loss function* — This defines how we measure how accurate the model is during training. As was covered in lecture, during training we want to minimize this function, which will "steer" the model in the right direction.
* *Optimizer* — This defines how the model is updated based on the data it sees and its loss function.
* *Metrics* — Here we can define metrics used to monitor the training and testing steps. In this example, we'll look at the *accuracy*, the fraction of the images that are correctly classified.

We'll start out by using a stochastic gradient descent (SGD) optimizer initialized with a learning rate of 0.1. Since we are performing a categorical classification task, we'll want to use the [cross entropy loss](https://www.tensorflow.org/api_docs/python/tf/keras/metrics/sparse_categorical_crossentropy).

You'll want to experiment with both the choice of optimizer and learning rate and evaluate how these affect the accuracy of the trained model.
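For reference, for a single example with true class $y$ and predicted class probabilities $\hat{p}_0, \dots, \hat{p}_9$, the sparse categorical cross entropy loss is $L = -\log \hat{p}_y$: it is small when the model assigns high probability to the correct class and grows without bound as that probability approaches zero.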
###Code
'''TODO: Experiment with different optimizers and learning rates. How do these affect
the accuracy of the trained model? Which optimizers and/or learning rates yield
the best performance?'''
model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=1e-1),
loss='sparse_categorical_crossentropy',
metrics=['accuracy'])
###Output
_____no_output_____
###Markdown
Train the model

We're now ready to train our model, which will involve feeding the training data (`train_images` and `train_labels`) into the model, and then asking it to learn the associations between images and labels. We'll also need to define the batch size and the number of epochs, or iterations over the MNIST dataset, to use during training.

In Lab 1, we saw how we can use `GradientTape` to optimize losses and train models with stochastic gradient descent. After defining the model settings in the `compile` step, we can also accomplish training by calling the [`fit`](https://www.tensorflow.org/api_docs/python/tf/keras/models/Sequential#fit) method on an instance of the `Model` class. We will use this to train our fully connected model.
###Code
# Define the batch size and the number of epochs to use during training
BATCH_SIZE = 64
EPOCHS = 5
model.fit(train_images, train_labels, batch_size=BATCH_SIZE, epochs=EPOCHS)
###Output
Train on 60000 samples
Epoch 1/5
60000/60000 [==============================] - 4s 67us/sample - loss: 0.3751 - accuracy: 0.8956
Epoch 2/5
60000/60000 [==============================] - 3s 51us/sample - loss: 0.2010 - accuracy: 0.9431
Epoch 3/5
60000/60000 [==============================] - 3s 45us/sample - loss: 0.1499 - accuracy: 0.9572
Epoch 4/5
60000/60000 [==============================] - 3s 47us/sample - loss: 0.1208 - accuracy: 0.9658
Epoch 5/5
60000/60000 [==============================] - 3s 48us/sample - loss: 0.1024 - accuracy: 0.9707
###Markdown
As the model trains, the loss and accuracy metrics are displayed. With five epochs and a learning rate of 0.1, this fully connected model should achieve an accuracy of approximately 0.97 (or 97%) on the training data.

Evaluate accuracy on the test dataset

Now that we've trained the model, we can ask it to make predictions about a test set that it hasn't seen before. In this example, the `test_images` array comprises our test dataset. To evaluate accuracy, we can check to see if the model's predictions match the labels from the `test_labels` array. Use the [`evaluate`](https://www.tensorflow.org/api_docs/python/tf/keras/models/Sequential#evaluate) method to evaluate the model on the test dataset!
###Code
'''TODO: Use the evaluate method to test the model!'''
test_loss, test_acc = model.evaluate(test_images, test_labels) # TODO
# test_loss, test_acc = # TODO
print('Test accuracy:', test_acc)
###Output
10000/1 [==============================]
===================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
===================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
===================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
===================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
===================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
===================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
===================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
===================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
===================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
===================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
===================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
===================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
===================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
===================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
===================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
===================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
===================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
===================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
===================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
===================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
===================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
===================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
===================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
===================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
===================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
===================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
===================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
===================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
===================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
===================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
===================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
===================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
===================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
===================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
===================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
=============================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================] - 1s 63us/sample - loss: 0.0550 - accuracy: 0.9699
###Markdown
You may observe that the accuracy on the test dataset is a little lower than the accuracy on the training dataset. This gap between training accuracy and test accuracy is an example of *overfitting*: the model performs worse on new data than on the data it was trained on. What is the highest accuracy you can achieve with this first fully connected model? Since the handwritten digit classification task is pretty straightforward, you may be wondering how we can do better...![Deeper...](https://i.kym-cdn.com/photos/images/newsfeed/000/534/153/f87.jpg)
1.3 Convolutional Neural Network (CNN) for handwritten digit classification
As we saw in lecture, convolutional neural networks (CNNs) are particularly well-suited for a variety of tasks in computer vision, and have achieved near-perfect accuracies on the MNIST dataset. We will now build a CNN composed of two convolutional layers and pooling layers, followed by two fully connected layers, and ultimately output a probability distribution over the 10 digit classes (0-9). The CNN we will be building is depicted below:![alt_text](https://raw.githubusercontent.com/aamini/introtodeeplearning/master/lab2/img/convnet_fig.png "CNN Architecture for MNIST Classification")
Define the CNN model
We'll use the same training and test datasets as before, and proceed similarly to our fully connected network to define and train our new CNN model. To do this we will explore two layers we have not encountered before: you can use [`keras.layers.Conv2D`](https://www.tensorflow.org/api_docs/python/tf/keras/layers/Conv2D) to define convolutional layers and [`keras.layers.MaxPool2D`](https://www.tensorflow.org/api_docs/python/tf/keras/layers/MaxPool2D) to define the pooling layers. Use the parameters shown in the network architecture above to define these layers and build the CNN model.
###Code
def build_cnn_model():
cnn_model = tf.keras.Sequential([
# TODO: Define the first convolutional layer
tf.keras.layers.Conv2D(filters=24, kernel_size=(3,3), activation=tf.nn.relu),
# TODO: Define the first max pooling layer
tf.keras.layers.MaxPool2D(pool_size=(2,2)),
# TODO: Define the second convolutional layer
tf.keras.layers.Conv2D(filters=36, kernel_size=(3,3), activation=tf.nn.relu),
# TODO: Define the second max pooling layer
tf.keras.layers.MaxPool2D(pool_size=(2,2)),
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(128, activation=tf.nn.relu),
# TODO: Define the last Dense layer to output the classification
        # probabilities. Pay attention to the activation needed for a
        # probability output.
tf.keras.layers.Dense(10, activation=tf.nn.softmax)
])
return cnn_model
cnn_model = build_cnn_model()
# Initialize the model by passing some data through
cnn_model.predict(train_images[[0]])
# Print the summary of the layers in the model.
print(cnn_model.summary())
###Output
Model: "sequential_2"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
conv2d (Conv2D) multiple 240
_________________________________________________________________
max_pooling2d (MaxPooling2D) multiple 0
_________________________________________________________________
conv2d_1 (Conv2D) multiple 7812
_________________________________________________________________
max_pooling2d_1 (MaxPooling2 multiple 0
_________________________________________________________________
flatten_2 (Flatten) multiple 0
_________________________________________________________________
dense_4 (Dense) multiple 115328
_________________________________________________________________
dense_5 (Dense) multiple 1290
=================================================================
Total params: 124,670
Trainable params: 124,670
Non-trainable params: 0
_________________________________________________________________
None
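###Markdown
As a sanity check on the summary above, each layer's parameter count can be reproduced by hand: a convolutional layer with `f` filters of size `k×k` over `c` input channels has `k·k·c·f` weights plus `f` biases, and a dense layer with `n` inputs and `m` units has `n·m + m` parameters. Below is a minimal sketch of that arithmetic for this architecture (assuming 28×28×1 inputs with 'valid' padding, so the flattened feature map is 5×5×36 = 900 features); the helper names here are ours, not part of the lab.
###Code
# Reproduce the parameter counts reported by model.summary() by hand.
# Assumes 28x28x1 inputs, 'valid' padding, and stride-1 convolutions.
def conv_params(k, c_in, f):
    return k * k * c_in * f + f # k*k*c_in weights per filter, plus one bias each
def dense_params(n_in, n_out):
    return n_in * n_out + n_out # weight matrix plus one bias per unit
conv1 = conv_params(3, 1, 24) # 240; output 26x26x24, pooled to 13x13x24
conv2 = conv_params(3, 24, 36) # 7812; output 11x11x36, pooled to 5x5x36
dense1 = dense_params(5 * 5 * 36, 128) # 115328; flatten yields 900 features
dense2 = dense_params(128, 10) # 1290; one unit per digit class
print(conv1, conv2, dense1, dense2, conv1 + conv2 + dense1 + dense2)
###Output
240 7812 115328 1290 124670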
###Markdown
Train and test the CNN model
Now, as before, we can define the loss function, optimizer, and metrics through the `compile` method. Compile the CNN model with an optimizer and learning rate of your choice:
###Code
'''TODO: Define the compile operation with your optimizer and learning rate of choice'''
cnn_model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy']) # TODO
###Output
_____no_output_____
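###Markdown
If you want explicit control over the learning rate rather than the `'adam'` string shorthand (which uses Keras's default), you can pass an optimizer instance instead. Here is a minimal equivalent sketch; the `1e-3` value is an illustrative choice, not one prescribed by the lab:
###Code
# Equivalent compile call with an explicit optimizer object and learning rate.
cnn_model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3), # explicit LR
                  loss='sparse_categorical_crossentropy', # labels are integer class indices
                  metrics=['accuracy'])
###Output
_____no_output_____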
###Markdown
As was the case with the fully connected model, we can train our CNN using the `fit` method via the Keras API.
###Code
'''TODO: Use model.fit to train the CNN model, with the same batch_size and number of epochs previously used.'''
cnn_model.fit(train_images, train_labels, batch_size=BATCH_SIZE, epochs=EPOCHS)
###Output
Train on 60000 samples
Epoch 1/5
60000/60000 [==============================] - 34s 572us/sample - loss: 0.1722 - accuracy: 0.9489
Epoch 2/5
60000/60000 [==============================] - 36s 605us/sample - loss: 0.0537 - accuracy: 0.9834
Epoch 3/5
60000/60000 [==============================] - 37s 621us/sample - loss: 0.0386 - accuracy: 0.9880
Epoch 4/5
60000/60000 [==============================] - 34s 567us/sample - loss: 0.0294 - accuracy: 0.9908
Epoch 5/5
60000/60000 [==============================] - 29s 476us/sample - loss: 0.0224 - accuracy: 0.9924
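###Markdown
Because the gap between training and test accuracy was our earlier signal of overfitting, it can help to watch held-out performance during training rather than only at the end. Below is a minimal sketch using Keras's built-in `validation_split`; the 0.1 fraction is an illustrative choice (note that calling `fit` again continues training the already-trained model):
###Code
# Optionally hold out 10% of the training data to monitor overfitting per epoch.
history = cnn_model.fit(train_images, train_labels,
                        batch_size=BATCH_SIZE, epochs=EPOCHS,
                        validation_split=0.1) # reports val_loss / val_accuracy each epoch
# history.history stores per-epoch 'accuracy' and 'val_accuracy' for comparison.
###Output
_____no_output_____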
###Markdown
Great! Now that we've trained the model, let's evaluate it on the test dataset using the [`evaluate`](https://www.tensorflow.org/api_docs/python/tf/keras/models/Sequential#evaluate) method:
###Code
'''TODO: Use the evaluate method to test the model!'''
test_loss, test_acc = cnn_model.evaluate(test_images, test_labels)
print('Test accuracy:', test_acc)
###Output
10000/1 [==============================]
===================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
===================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
===================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
===================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
===================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
===================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
===================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
===================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
===================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
===================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
===================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
===================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
===================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
===================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
===================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
===================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
===================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
===================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
===================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
===================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
===================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
===================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
===================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
===================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
===================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
===================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
===================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
===================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
===================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
===================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
===================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
===================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
===================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
===================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
===================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
===================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
===================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
===================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
===================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
===================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================
=============================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================================] - 2s 191us/sample - loss: 0.0169 - accuracy: 0.9899
###Markdown
What is the highest accuracy you're able to achieve using the CNN model, and how does the accuracy of the CNN model compare to the accuracy of the simple fully connected network? What optimizers and learning rates seem to be optimal for training the CNN model?
Make predictions with the CNN model
With the model trained, we can use it to make predictions about some images. The [`predict`](https://www.tensorflow.org/api_docs/python/tf/keras/models/Sequential#predict) function call generates the output predictions given a set of input samples.
###Code
predictions = cnn_model.predict(test_images)
###Output
_____no_output_____
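###Markdown
As a quick sanity check (a minimal sketch, assuming `predictions`, `test_images`, and `np` are defined as above), we can confirm that the model produced one 10-way probability vector per test image:
###Code
# Hedged sketch: one row of 10 class probabilities per test image is expected.
print(predictions.shape)       # expected: (number of test images, 10)
print(test_images.shape[0])    # number of test images
###Output
_____no_output_____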
###Markdown
With this function call, the model has predicted the label for each image in the testing set. Let's take a look at the prediction for the first image in the test dataset:
###Code
predictions[0]
###Output
_____no_output_____
###Markdown
As you can see, a prediction is an array of 10 numbers. Recall that the output of our model is a probability distribution over the 10 digit classes. Thus, these numbers describe the model's "confidence" that the image corresponds to each of the 10 different digits. Let's look at the digit that has the highest confidence for the first image in the test dataset:
###Code
'''TODO: identify the digit with the highest confidence prediction for the first
image in the test dataset. '''
prediction = np.argmax(predictions[0])
print(prediction)
###Output
7
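###Markdown
Alongside the predicted class, we can also read off how confident the model is in that choice. Here is a minimal sketch using `np.max` on the same probability vector:
###Code
# Hedged sketch: the probability the model assigns to its top prediction.
confidence = np.max(predictions[0])
print("Predicted digit:", np.argmax(predictions[0]), "with confidence:", confidence)
###Output
_____no_output_____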
###Markdown
So, the model is most confident that this image is a "???". We can check the test label (remember, this is the true identity of the digit) to see if this prediction is correct:
###Code
print("Label of this digit is:", test_labels[0])
plt.imshow(test_images[0,:,:,0], cmap=plt.cm.binary)
###Output
Label of this digit is: 7
###Markdown
It is! Let's visualize the classification results on the MNIST dataset. We will plot images from the test dataset along with their predicted label, as well as a histogram that provides the prediction probabilities for each of the digits:
###Code
#@title Change the slider to look at the model's predictions! { run: "auto" }
image_index = 79 #@param {type:"slider", min:0, max:100, step:1}
plt.subplot(1,2,1)
mdl.lab2.plot_image_prediction(image_index, predictions, test_labels, test_images)
plt.subplot(1,2,2)
mdl.lab2.plot_value_prediction(image_index, predictions, test_labels)
###Output
_____no_output_____
###Markdown
We can also plot several images along with their predictions, where correct prediction labels are blue and incorrect prediction labels are red. The number gives the percent confidence (out of 100) for the predicted label. Note the model can be very confident in an incorrect prediction!
###Code
# Plots the first X test images, their predicted label, and the true label
# Color correct predictions in blue, incorrect predictions in red
num_rows = 5
num_cols = 4
num_images = num_rows*num_cols
plt.figure(figsize=(2*2*num_cols, 2*num_rows))
for i in range(num_images):
plt.subplot(num_rows, 2*num_cols, 2*i+1)
mdl.lab2.plot_image_prediction(i, predictions, test_labels, test_images)
plt.subplot(num_rows, 2*num_cols, 2*i+2)
mdl.lab2.plot_value_prediction(i, predictions, test_labels)
###Output
_____no_output_____
###Markdown
1.4 Training the model 2.0
Earlier in the lab, we used the [`fit`](https://www.tensorflow.org/api_docs/python/tf/keras/models/Sequential#fit) function call to train the model. This function is quite high-level and intuitive, which is really useful for simpler models. As you may be able to tell, this function abstracts away many details of the training call, so we have less control over training the model; that extra control can be useful in other contexts. As an alternative, we can use the [`tf.GradientTape`](https://www.tensorflow.org/api_docs/python/tf/GradientTape) class to record differentiation operations during training, and then call the [`tf.GradientTape.gradient`](https://www.tensorflow.org/api_docs/python/tf/GradientTape#gradient) function to actually compute the gradients. You may recall seeing this in Lab 1 Part 1, but let's take another look at it here. We'll use this framework to train our `cnn_model` using stochastic gradient descent.
###Code
# Rebuild the CNN model
cnn_model = build_cnn_model()
batch_size = 12
loss_history = mdl.util.LossHistory(smoothing_factor=0.95) # to record the evolution of the loss
plotter = mdl.util.PeriodicPlotter(sec=2, xlabel='Iterations', ylabel='Loss', scale='semilogy')
optimizer = tf.keras.optimizers.SGD(learning_rate=1e-2) # define our optimizer
if hasattr(tqdm, '_instances'): tqdm._instances.clear() # clear if it exists
for idx in tqdm(range(0, train_images.shape[0], batch_size)):
  # First grab a batch of training data and convert the input images to tensors
  (images, labels) = (train_images[idx:idx+batch_size], train_labels[idx:idx+batch_size])
  images = tf.convert_to_tensor(images, dtype=tf.float32)
  # GradientTape to record differentiation operations
  with tf.GradientTape() as tape:
    #'''TODO: feed the images into the model and obtain the predictions'''
    logits = cnn_model(images)
    #'''TODO: compute the categorical cross entropy loss'''
    loss_value = tf.keras.backend.sparse_categorical_crossentropy(labels, logits)
  loss_history.append(loss_value.numpy().mean()) # append the loss to the loss_history record
  plotter.plot(loss_history.get())
  # Backpropagation
  '''TODO: Use the tape to compute the gradient against all parameters in the CNN model.
  Use cnn_model.trainable_variables to access these parameters.'''
  grads = tape.gradient(loss_value, cnn_model.trainable_variables)
  optimizer.apply_gradients(zip(grads, cnn_model.trainable_variables))
###Output
_____no_output_____
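###Markdown
After this custom training loop, it is worth sanity-checking the result on the held-out test set. Here is a minimal sketch, assuming `cnn_model`, `test_images`, `test_labels`, and `np` are defined as above:
###Code
# Hedged sketch: manually compute test accuracy for the custom-trained model.
test_probs = cnn_model.predict(test_images)
test_preds = np.argmax(test_probs, axis=1)
print("Test accuracy:", np.mean(test_preds == test_labels))
###Output
_____no_output_____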
###Markdown
Visit MIT Deep Learning | Run in Google Colab | View Source on GitHub
Copyright Information
###Code
# Copyright 2021 MIT 6.S191 Introduction to Deep Learning. All Rights Reserved.
#
# Licensed under the MIT License. You may not use this file except in compliance
# with the License. Use and/or modification of this code outside of 6.S191 must
# reference:
#
# © MIT 6.S191: Introduction to Deep Learning
# http://introtodeeplearning.com
#
###Output
_____no_output_____
###Markdown
Laboratory 2: Computer Vision
Part 1: MNIST Digit Classification
In the first portion of this lab, we will build and train a convolutional neural network (CNN) for classification of handwritten digits from the famous [MNIST](http://yann.lecun.com/exdb/mnist/) dataset. The MNIST dataset consists of 60,000 training images and 10,000 test images. Our classes are the digits 0-9.
First, let's download the course repository, install dependencies, and import the relevant packages we'll need for this lab.
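###Markdown
For reference, the MNIST data itself ships with Keras. The following is a minimal, self-contained sketch (independent of this lab's own setup and data-loading code) showing how it can be fetched directly:
###Code
# Hedged sketch: load MNIST directly from tf.keras.datasets.
import tensorflow as tf
(x_train, y_train), (x_test, y_test) = tf.keras.datasets.mnist.load_data()
print(x_train.shape, y_train.shape)  # (60000, 28, 28) (60000,)
print(x_test.shape, y_test.shape)    # (10000, 28, 28) (10000,)
###Output
_____no_output_____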
###Code
# Import Tensorflow 2.0
%tensorflow_version 2.x
import tensorflow as tf
!pip install mitdeeplearning
import mitdeeplearning as mdl
import matplotlib.pyplot as plt
import numpy as np
import random
from tqdm import tqdm
# Check that we are using a GPU, if not switch runtimes
# using Runtime > Change Runtime Type > GPU
assert len(tf.config.list_physical_devices('GPU')) > 0
###Output
Collecting mitdeeplearning
Downloading https://files.pythonhosted.org/packages/9d/ad/650eb53c0d9d1213536fe94bc150f89b564ff5ee784bd662272584bb091b/mitdeeplearning-0.2.0.tar.gz (2.1MB)
Requirement already satisfied: numpy in /usr/local/lib/python3.7/dist-packages (from mitdeeplearning) (1.19.5)
Requirement already satisfied: regex in /usr/local/lib/python3.7/dist-packages (from mitdeeplearning) (2019.12.20)
Requirement already satisfied: tqdm in /usr/local/lib/python3.7/dist-packages (from mitdeeplearning) (4.41.1)
Requirement already satisfied: gym in /usr/local/lib/python3.7/dist-packages (from mitdeeplearning) (0.17.3)
Requirement already satisfied: cloudpickle<1.7.0,>=1.2.0 in /usr/local/lib/python3.7/dist-packages (from gym->mitdeeplearning) (1.3.0)
Requirement already satisfied: pyglet<=1.5.0,>=1.4.0 in /usr/local/lib/python3.7/dist-packages (from gym->mitdeeplearning) (1.5.0)
Requirement already satisfied: scipy in /usr/local/lib/python3.7/dist-packages (from gym->mitdeeplearning) (1.4.1)
Requirement already satisfied: future in /usr/local/lib/python3.7/dist-packages (from pyglet<=1.5.0,>=1.4.0->gym->mitdeeplearning) (0.16.0)
Building wheels for collected packages: mitdeeplearning
Building wheel for mitdeeplearning (setup.py) ... done
Created wheel for mitdeeplearning: filename=mitdeeplearning-0.2.0-cp37-none-any.whl size=2115442 sha256=b6c91ccafe91a17b330997976cd9d04434694bbc005924d4d4f1d2d72a8b0447
Stored in directory: /root/.cache/pip/wheels/af/dc/2a/5c3633135e7e4ef4fd31463cfa1942cb1bae7486ab94e7a2ad
Successfully built mitdeeplearning
Installing collected packages: mitdeeplearning
Successfully installed mitdeeplearning-0.2.0
###Markdown
1.1 MNIST dataset Let's download and load the dataset and display a few random samples from it:
###Code
mnist = tf.keras.datasets.mnist
(train_images, train_labels), (test_images, test_labels) = mnist.load_data()
train_images = (np.expand_dims(train_images, axis=-1)/255.).astype(np.float32)
train_labels = (train_labels).astype(np.int64)
test_images = (np.expand_dims(test_images, axis=-1)/255.).astype(np.float32)
test_labels = (test_labels).astype(np.int64)
###Output
Downloading data from https://storage.googleapis.com/tensorflow/tf-keras-datasets/mnist.npz
11493376/11490434 [==============================] - 0s 0us/step
###Markdown
Our training set is made up of 28x28 grayscale images of handwritten digits. Let's visualize what some of these images and their corresponding training labels look like.
###Code
plt.figure(figsize=(10,10))
random_inds = np.random.choice(60000,36)
for i in range(36):
plt.subplot(6,6,i+1)
plt.xticks([])
plt.yticks([])
plt.grid(False)
image_ind = random_inds[i]
plt.imshow(np.squeeze(train_images[image_ind]), cmap=plt.cm.binary)
plt.xlabel(train_labels[image_ind])
###Output
_____no_output_____
###Markdown
1.2 Neural Network for Handwritten Digit ClassificationWe'll first build a simple neural network consisting of two fully connected layers and apply this to the digit classification task. Our network will ultimately output a probability distribution over the 10 digit classes (0-9). This first architecture we will be building is depicted below:![alt_text](https://raw.githubusercontent.com/aamini/introtodeeplearning/master/lab2/img/mnist_2layers_arch.png "CNN Architecture for MNIST Classification") Fully connected neural network architectureTo define the architecture of this first fully connected neural network, we'll once again use the Keras API and define the model using the [`Sequential`](https://www.tensorflow.org/api_docs/python/tf/keras/models/Sequential) class. Note how we first use a [`Flatten`](https://www.tensorflow.org/api_docs/python/tf/keras/layers/Flatten) layer, which flattens the input so that it can be fed into the model. In this next block, you'll define the fully connected layers of this simple network.
###Code
def build_fc_model():
fc_model = tf.keras.Sequential([
# First define a Flatten layer
tf.keras.layers.Flatten(),
# '''TODO: Define the activation function for the first fully connected (Dense) layer.'''
tf.keras.layers.Dense(128, activation= tf.nn.relu),
# '''TODO: Define the second Dense layer to output the classification probabilities'''
tf.keras.layers.Dense(10, activation=tf.nn.softmax)
])
return fc_model
model = build_fc_model()
###Output
_____no_output_____
###Markdown
As we progress through this next portion, you may find that you'll want to make changes to the architecture defined above. **Note that in order to update the model later on, you'll need to re-run the above cell to re-initialize the model.** Let's take a step back and think about the network we've just created. The first layer in this network, `tf.keras.layers.Flatten`, transforms the format of the images from a 2d-array (28 x 28 pixels), to a 1d-array of 28 * 28 = 784 pixels. You can think of this layer as unstacking rows of pixels in the image and lining them up. There are no learned parameters in this layer; it only reformats the data.After the pixels are flattened, the network consists of a sequence of two `tf.keras.layers.Dense` layers. These are fully-connected neural layers. The first `Dense` layer has 128 nodes (or neurons). The second (and last) layer (which you've defined!) should return an array of probability scores that sum to 1. Each node contains a score that indicates the probability that the current image belongs to one of the handwritten digit classes.That defines our fully connected model! Compile the modelBefore training the model, we need to define a few more settings. These are added during the model's [`compile`](https://www.tensorflow.org/api_docs/python/tf/keras/models/Sequential#compile) step:* *Loss function* — This defines how we measure how accurate the model is during training. As was covered in lecture, during training we want to minimize this function, which will "steer" the model in the right direction.* *Optimizer* — This defines how the model is updated based on the data it sees and its loss function.* *Metrics* — Here we can define metrics used to monitor the training and testing steps. In this example, we'll look at the *accuracy*, the fraction of the images that are correctly classified.We'll start out by using a stochastic gradient descent (SGD) optimizer initialized with a learning rate of 0.1. Since we are performing a categorical classification task, we'll want to use the [cross entropy loss](https://www.tensorflow.org/api_docs/python/tf/keras/metrics/sparse_categorical_crossentropy).You'll want to experiment with both the choice of optimizer and learning rate and evaluate how these affect the accuracy of the trained model.
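To make the flattening step concrete, here is a minimal sketch (assuming `train_images` has already been loaded and normalized as above; the variable names are just for illustration) confirming that a single 28 x 28 image becomes a length-784 vector:
###Code
# A (28, 28, 1) image flattens to a length-784 vector: 28 * 28 = 784
single_image = train_images[0]
flat = single_image.reshape(-1)
print(single_image.shape, "->", flat.shape)  # (28, 28, 1) -> (784,)
###Output
_____no_output_____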
###Code
'''TODO: Experiment with different optimizers and learning rates. How do these affect
the accuracy of the trained model? Which optimizers and/or learning rates yield
the best performance?'''
model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=1e-1),
loss='sparse_categorical_crossentropy',
metrics=['accuracy'])
###Output
_____no_output_____
###Markdown
Train the modelWe're now ready to train our model, which will involve feeding the training data (`train_images` and `train_labels`) into the model, and then asking it to learn the associations between images and labels. We'll also need to define the batch size and the number of epochs, or iterations over the MNIST dataset, to use during training. In Lab 1, we saw how we can use `GradientTape` to optimize losses and train models with stochastic gradient descent. After defining the model settings in the `compile` step, we can also accomplish training by calling the [`fit`](https://www.tensorflow.org/api_docs/python/tf/keras/models/Sequential#fit) method on an instance of the `Model` class. We will use this to train our fully connected model.
###Code
# Define the batch size and the number of epochs to use during training
BATCH_SIZE = 64
EPOCHS = 5
model.fit(train_images, train_labels, batch_size=BATCH_SIZE, epochs=EPOCHS)
###Output
Epoch 1/5
938/938 [==============================] - 2s 2ms/step - loss: 0.5787 - accuracy: 0.8419
Epoch 2/5
938/938 [==============================] - 2s 2ms/step - loss: 0.2138 - accuracy: 0.9404
Epoch 3/5
938/938 [==============================] - 2s 2ms/step - loss: 0.1586 - accuracy: 0.9549
Epoch 4/5
938/938 [==============================] - 2s 2ms/step - loss: 0.1215 - accuracy: 0.9663
Epoch 5/5
938/938 [==============================] - 2s 2ms/step - loss: 0.1039 - accuracy: 0.9716
###Markdown
As the model trains, the loss and accuracy metrics are displayed. With five epochs and a learning rate of 0.1, this fully connected model should achieve an accuracy of approximately 0.97 (or 97%) on the training data. Evaluate accuracy on the test datasetNow that we've trained the model, we can ask it to make predictions about a test set that it hasn't seen before. In this example, the `test_images` array comprises our test dataset. To evaluate accuracy, we can check to see if the model's predictions match the labels from the `test_labels` array. Use the [`evaluate`](https://www.tensorflow.org/api_docs/python/tf/keras/models/Sequential#evaluate) method to evaluate the model on the test dataset!
###Code
'''TODO: Use the evaluate method to test the model!'''
test_loss, test_acc = model.evaluate(test_images, test_labels)
print('Test accuracy:', test_acc)
###Output
313/313 [==============================] - 1s 2ms/step - loss: 0.1016 - accuracy: 0.9700
Test accuracy: 0.9700000286102295
###Markdown
You may observe that the accuracy on the test dataset is a little lower than the accuracy on the training dataset. **This gap between training accuracy and test accuracy is an example of *overfitting***, when a machine learning model performs worse on new data than on its training data. What is the highest accuracy you can achieve with this first fully connected model? Since the handwritten digit classification task is pretty straightforward, you may be wondering how we can do better...![Deeper...](https://i.kym-cdn.com/photos/images/newsfeed/000/534/153/f87.jpg) 1.3 Convolutional Neural Network (CNN) for handwritten digit classification As we saw in lecture, convolutional neural networks (CNNs) are particularly well-suited for a variety of tasks in computer vision, and have achieved near-perfect accuracies on the MNIST dataset. We will now build a CNN composed of two convolutional layers and pooling layers, followed by two fully connected layers, and ultimately output a probability distribution over the 10 digit classes (0-9). The CNN we will be building is depicted below:![alt_text](https://raw.githubusercontent.com/aamini/introtodeeplearning/master/lab2/img/convnet_fig.png "CNN Architecture for MNIST Classification") Define the CNN modelWe'll use the same training and test datasets as before, and proceed similarly as our fully connected network to define and train our new CNN model. To do this we will explore two layers we have not encountered before: you can use [`keras.layers.Conv2D` ](https://www.tensorflow.org/api_docs/python/tf/keras/layers/Conv2D) to define convolutional layers and [`keras.layers.MaxPool2D`](https://www.tensorflow.org/api_docs/python/tf/keras/layers/MaxPool2D) to define the pooling layers. Use the parameters shown in the network architecture above to define these layers and build the CNN model.
###Code
def build_cnn_model():
cnn_model = tf.keras.Sequential([
# TODO: Define the first convolutional layer
tf.keras.layers.Conv2D(filters=24, kernel_size=(3,3), activation=tf.nn.relu),
# TODO: Define the first max pooling layer
tf.keras.layers.MaxPool2D(pool_size=(2,2)),
# TODO: Define the second convolutional layer
tf.keras.layers.Conv2D(filters=24, kernel_size=(3,3), activation=tf.nn.relu),
# TODO: Define the second max pooling layer
tf.keras.layers.MaxPool2D(pool_size=(2,2)),
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(128, activation=tf.nn.relu), #Notice the 128 is the output size
# TODO: Define the last Dense layer to output the classification
# probabilities. Pay attention to the activation needed for a probability
# output
tf.keras.layers.Dense(10, activation=tf.nn.softmax) #Notice the output size must match the number of classes
])
return cnn_model
cnn_model = build_cnn_model()
# Initialize the model by passing some data through
cnn_model.predict(train_images[[0]])
# Print the summary of the layers in the model.
print(cnn_model.summary())
###Output
Model: "sequential_3"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
conv2d (Conv2D) (None, 26, 26, 24) 240
_________________________________________________________________
max_pooling2d (MaxPooling2D) (None, 13, 13, 24) 0
_________________________________________________________________
conv2d_1 (Conv2D) (None, 11, 11, 24) 5208
_________________________________________________________________
max_pooling2d_1 (MaxPooling2 (None, 5, 5, 24) 0
_________________________________________________________________
flatten_5 (Flatten) (None, 600) 0
_________________________________________________________________
dense_6 (Dense) (None, 128) 76928
_________________________________________________________________
dense_7 (Dense) (None, 10) 1290
=================================================================
Total params: 83,666
Trainable params: 83,666
Non-trainable params: 0
_________________________________________________________________
None
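###Markdown
As a quick sanity check of the summary above (a sketch assuming 'valid' padding and stride 1, which are the Keras defaults used here), each 3x3 convolution shrinks the spatial size by 2 and each 2x2 max pool halves it, so we can recompute the parameter counts by hand; the variable names below are just for illustration:
###Code
# Spatial sizes: 28 -> conv 3x3 -> 26 -> pool 2x2 -> 13 -> conv 3x3 -> 11 -> pool 2x2 -> 5
conv1_params = 24 * (3 * 3 * 1) + 24      # 240 (weights + biases)
conv2_params = 24 * (3 * 3 * 24) + 24     # 5208
dense1_params = (5 * 5 * 24) * 128 + 128  # 76928
dense2_params = 128 * 10 + 10             # 1290
print(conv1_params + conv2_params + dense1_params + dense2_params)  # 83666, matching the summary
###Output
_____no_output_____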
###Markdown
Train and test the CNN modelNow, as before, we can define the loss function, optimizer, and metrics through the `compile` method. Compile the CNN model with an optimizer and learning rate of choice:
###Code
'''TODO: Define the compile operation with your optimizer and learning rate of choice'''
cnn_model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=1e-1), loss='sparse_categorical_crossentropy', metrics=['accuracy']) # TODO
###Output
_____no_output_____
###Markdown
As was the case with the fully connected model, we can train our CNN using the `fit` method via the Keras API.
###Code
'''TODO: Use model.fit to train the CNN model, with the same batch_size and number of epochs previously used.'''
BATCH_SIZE = 64
EPOCHS = 5
cnn_model.fit(train_images, train_labels, batch_size=BATCH_SIZE, epochs=EPOCHS)
###Output
Epoch 1/5
938/938 [==============================] - 3s 3ms/step - loss: 0.5692 - accuracy: 0.8222
Epoch 2/5
938/938 [==============================] - 2s 3ms/step - loss: 0.0788 - accuracy: 0.9764
Epoch 3/5
938/938 [==============================] - 3s 3ms/step - loss: 0.0530 - accuracy: 0.9835
Epoch 4/5
938/938 [==============================] - 3s 3ms/step - loss: 0.0413 - accuracy: 0.9876
Epoch 5/5
938/938 [==============================] - 3s 3ms/step - loss: 0.0357 - accuracy: 0.9894
###Markdown
Great! Now that we've trained the model, let's evaluate it on the test dataset using the [`evaluate`](https://www.tensorflow.org/api_docs/python/tf/keras/models/Sequential#evaluate) method:
###Code
'''TODO: Use the evaluate method to test the model!'''
test_loss, test_acc = cnn_model.evaluate(test_images, test_labels)
print('Test accuracy:', test_acc)
###Output
313/313 [==============================] - 1s 2ms/step - loss: 0.0388 - accuracy: 0.9864
Test accuracy: 0.9864000082015991
###Markdown
What is the highest accuracy you're able to achieve using the CNN model, and how does the accuracy of the CNN model compare to the accuracy of the simple fully connected network? What optimizers and learning rates seem to be optimal for training the CNN model? Make predictions with the CNN modelWith the model trained, we can use it to make predictions about some images. The [`predict`](https://www.tensorflow.org/api_docs/python/tf/keras/models/Sequential#predict) function call generates the output predictions given a set of input samples.
###Code
predictions = cnn_model.predict(test_images)
###Output
_____no_output_____
###Markdown
With this function call, the model has predicted the label for each image in the testing set. Let's take a look at the prediction for the first image in the test dataset:
###Code
predictions[0]
###Output
_____no_output_____
###Markdown
As you can see, a prediction is an array of 10 numbers. Recall that the output of our model is a probability distribution over the 10 digit classes. Thus, these numbers describe the model's "confidence" that the image corresponds to each of the 10 different digits. Let's look at the digit that has the highest confidence for the first image in the test dataset:
###Code
'''TODO: identify the digit with the highest confidence prediction for the first
image in the test dataset. '''
prediction = np.argmax(predictions[0])
print(prediction)
###Output
_____no_output_____
###Markdown
So, the model is most confident that this image is a "???". We can check the test label (remember, this is the true identity of the digit) to see if this prediction is correct:
###Code
print("Label of this digit is:", test_labels[0])
plt.imshow(test_images[0,:,:,0], cmap=plt.cm.binary)
###Output
_____no_output_____
###Markdown
It is! Let's visualize the classification results on the MNIST dataset. We will plot images from the test dataset along with their predicted label, as well as a histogram that provides the prediction probabilities for each of the digits:
###Code
#@title Change the slider to look at the model's predictions! { run: "auto" }
image_index = 79 #@param {type:"slider", min:0, max:100, step:1}
plt.subplot(1,2,1)
mdl.lab2.plot_image_prediction(image_index, predictions, test_labels, test_images)
plt.subplot(1,2,2)
mdl.lab2.plot_value_prediction(image_index, predictions, test_labels)
###Output
_____no_output_____
###Markdown
We can also plot several images along with their predictions, where correct prediction labels are blue and incorrect prediction labels are red. The number gives the percent confidence (out of 100) for the predicted label. Note the model can be very confident in an incorrect prediction!
###Code
# Plots the first X test images, their predicted label, and the true label
# Color correct predictions in blue, incorrect predictions in red
num_rows = 5
num_cols = 4
num_images = num_rows*num_cols
plt.figure(figsize=(2*2*num_cols, 2*num_rows))
for i in range(num_images):
plt.subplot(num_rows, 2*num_cols, 2*i+1)
mdl.lab2.plot_image_prediction(i, predictions, test_labels, test_images)
plt.subplot(num_rows, 2*num_cols, 2*i+2)
mdl.lab2.plot_value_prediction(i, predictions, test_labels)
###Output
_____no_output_____
###Markdown
1.4 Training the model 2.0Earlier in the lab, we used the [`fit`](https://www.tensorflow.org/api_docs/python/tf/keras/models/Sequential#fit) function call to train the model. This function is quite high-level and intuitive, which is really useful for simpler models. As you may be able to tell, this function abstracts away many details in the training call, and we have less control over the training process than we might want in other contexts. As an alternative to this, we can use the [`tf.GradientTape`](https://www.tensorflow.org/api_docs/python/tf/GradientTape) class to record differentiation operations during training, and then call the [`tf.GradientTape.gradient`](https://www.tensorflow.org/api_docs/python/tf/GradientTape#gradient) function to actually compute the gradients. You may recall seeing this in Lab 1 Part 1, but let's take another look at this here.We'll use this framework to train our `cnn_model` using stochastic gradient descent.
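Before jumping into the full loop, here is a minimal standalone sketch of the `GradientTape` pattern (an illustrative example, not part of the lab's required code; `x` and `demo_tape` are names chosen just for this demo): record a computation under the tape, then ask the tape for the gradient.
###Code
# Minimal GradientTape example: y = x^2, so dy/dx = 2x and the gradient at x = 3 is 6
x = tf.Variable(3.0)
with tf.GradientTape() as demo_tape:
    y = x * x
print(demo_tape.gradient(y, x))  # tf.Tensor(6.0, shape=(), dtype=float32)
###Output
_____no_output_____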
###Code
# Rebuild the CNN model
cnn_model = build_cnn_model()
batch_size = 12
loss_history = mdl.util.LossHistory(smoothing_factor=0.95) # to record the evolution of the loss
plotter = mdl.util.PeriodicPlotter(sec=2, xlabel='Iterations', ylabel='Loss', scale='semilogy')
optimizer = tf.keras.optimizers.SGD(learning_rate=1e-2) # define our optimizer
if hasattr(tqdm, '_instances'): tqdm._instances.clear() # clear if it exists
for idx in tqdm(range(0, train_images.shape[0], batch_size)):
# First grab a batch of training data and convert the input images to tensors
(images, labels) = (train_images[idx:idx+batch_size], train_labels[idx:idx+batch_size])
images = tf.convert_to_tensor(images, dtype=tf.float32)
# GradientTape to record differentiation operations
with tf.GradientTape() as tape:
#'''TODO: feed the images into the model and obtain the predictions'''
logits = cnn_model(images)
#'''TODO: compute the categorical cross entropy loss'''
loss_value = tf.keras.backend.sparse_categorical_crossentropy(labels, logits) # TODO
loss_history.append(loss_value.numpy().mean()) # append the loss to the loss_history record
plotter.plot(loss_history.get())
# Backpropagation
'''TODO: Use the tape to compute the gradient against all parameters in the CNN model.
Use cnn_model.trainable_variables to access these parameters.'''
grads = tape.gradient(loss_value, cnn_model.trainable_variables)
optimizer.apply_gradients(zip(grads, cnn_model.trainable_variables))
###Output
_____no_output_____
###Markdown
Copyright Information
###Code
# Copyright 2020 MIT 6.S191 Introduction to Deep Learning. All Rights Reserved.
#
# Licensed under the MIT License. You may not use this file except in compliance
# with the License. Use and/or modification of this code outside of 6.S191 must
# reference:
#
# © MIT 6.S191: Introduction to Deep Learning
# http://introtodeeplearning.com
#
###Output
_____no_output_____
###Markdown
Laboratory 2: Computer Vision Part 1: MNIST Digit ClassificationIn the first portion of this lab, we will build and train a convolutional neural network (CNN) for classification of handwritten digits from the famous [MNIST](http://yann.lecun.com/exdb/mnist/) dataset. The MNIST dataset consists of 60,000 training images and 10,000 test images. Our classes are the digits 0-9.First, let's download the course repository, install dependencies, and import the relevant packages we'll need for this lab.
###Code
# Import Tensorflow 2.0
%tensorflow_version 2.x
import tensorflow as tf
!pip install mitdeeplearning
import mitdeeplearning as mdl
import matplotlib.pyplot as plt
import numpy as np
import random
from tqdm import tqdm
# Check that we are using a GPU, if not switch runtimes
# using Runtime > Change Runtime Type > GPU
assert len(tf.config.list_physical_devices('GPU')) > 0
###Output
Collecting mitdeeplearning
Downloading https://files.pythonhosted.org/packages/8b/3b/b9174b68dc10832356d02a2d83a64b43a24f1762c172754407d22fc8f960/mitdeeplearning-0.1.2.tar.gz (2.1MB)
Requirement already satisfied: numpy in /usr/local/lib/python3.6/dist-packages (from mitdeeplearning) (1.18.5)
Requirement already satisfied: regex in /usr/local/lib/python3.6/dist-packages (from mitdeeplearning) (2019.12.20)
Requirement already satisfied: tqdm in /usr/local/lib/python3.6/dist-packages (from mitdeeplearning) (4.41.1)
Requirement already satisfied: gym in /usr/local/lib/python3.6/dist-packages (from mitdeeplearning) (0.17.2)
Requirement already satisfied: cloudpickle<1.4.0,>=1.2.0 in /usr/local/lib/python3.6/dist-packages (from gym->mitdeeplearning) (1.3.0)
Requirement already satisfied: pyglet<=1.5.0,>=1.4.0 in /usr/local/lib/python3.6/dist-packages (from gym->mitdeeplearning) (1.5.0)
Requirement already satisfied: scipy in /usr/local/lib/python3.6/dist-packages (from gym->mitdeeplearning) (1.4.1)
Requirement already satisfied: future in /usr/local/lib/python3.6/dist-packages (from pyglet<=1.5.0,>=1.4.0->gym->mitdeeplearning) (0.16.0)
Building wheels for collected packages: mitdeeplearning
Building wheel for mitdeeplearning (setup.py) ... done
Created wheel for mitdeeplearning: filename=mitdeeplearning-0.1.2-cp36-none-any.whl size=2114586 sha256=e29958887b55eb74fe74409d4491a0f73f2a91c0cb18b6b53ebd511b9afa3a80
Stored in directory: /root/.cache/pip/wheels/27/e1/73/5f01c787621d8a3c857f59876c79e304b9b64db9ff5bd61b74
Successfully built mitdeeplearning
Installing collected packages: mitdeeplearning
Successfully installed mitdeeplearning-0.1.2
###Markdown
1.1 MNIST dataset Let's download and load the dataset and display a few random samples from it:
###Code
mnist = tf.keras.datasets.mnist
(train_images, train_labels), (test_images, test_labels) = mnist.load_data()
train_images = (np.expand_dims(train_images, axis=-1)/255.).astype(np.float32)
train_labels = (train_labels).astype(np.int64)
test_images = (np.expand_dims(test_images, axis=-1)/255.).astype(np.float32)
test_labels = (test_labels).astype(np.int64)
###Output
Downloading data from https://storage.googleapis.com/tensorflow/tf-keras-datasets/mnist.npz
11493376/11490434 [==============================] - 0s 0us/step
###Markdown
Our training set is made up of 28x28 grayscale images of handwritten digits. Let's visualize what some of these images and their corresponding training labels look like.
###Code
plt.figure(figsize=(10,10))
random_inds = np.random.choice(60000,36)
for i in range(36):
plt.subplot(6,6,i+1)
plt.xticks([])
plt.yticks([])
plt.grid(False)
image_ind = random_inds[i]
plt.imshow(np.squeeze(train_images[image_ind]), cmap=plt.cm.binary)
plt.xlabel(train_labels[image_ind])
###Output
_____no_output_____
###Markdown
1.2 Neural Network for Handwritten Digit ClassificationWe'll first build a simple neural network consisting of two fully connected layers and apply this to the digit classification task. Our network will ultimately output a probability distribution over the 10 digit classes (0-9). This first architecture we will be building is depicted below:![alt_text](https://raw.githubusercontent.com/aamini/introtodeeplearning/master/lab2/img/mnist_2layers_arch.png "CNN Architecture for MNIST Classification") Fully connected neural network architectureTo define the architecture of this first fully connected neural network, we'll once again use the Keras API and define the model using the [`Sequential`](https://www.tensorflow.org/api_docs/python/tf/keras/models/Sequential) class. Note how we first use a [`Flatten`](https://www.tensorflow.org/api_docs/python/tf/keras/layers/Flatten) layer, which flattens the input so that it can be fed into the model. In this next block, you'll define the fully connected layers of this simple network.
###Code
def build_fc_model():
fc_model = tf.keras.Sequential([
# First define a Flatten layer
tf.keras.layers.Flatten(),
# '''TODO: Define the activation function for the first fully connected (Dense) layer.'''
tf.keras.layers.Dense(128, activation= 'relu'),
# '''TODO: Define the second Dense layer to output the classification probabilities'''
# '''TODO: Dense layer to output classification probabilities'''
tf.keras.layers.Dense(len(set(train_labels)), activation= 'softmax'),
])
return fc_model
model = build_fc_model()
###Output
_____no_output_____
###Markdown
As we progress through this next portion, you may find that you'll want to make changes to the architecture defined above. **Note that in order to update the model later on, you'll need to re-run the above cell to re-initialize the model.** Let's take a step back and think about the network we've just created. The first layer in this network, `tf.keras.layers.Flatten`, transforms the format of the images from a 2d-array (28 x 28 pixels), to a 1d-array of 28 * 28 = 784 pixels. You can think of this layer as unstacking rows of pixels in the image and lining them up. There are no learned parameters in this layer; it only reformats the data.After the pixels are flattened, the network consists of a sequence of two `tf.keras.layers.Dense` layers. These are fully-connected neural layers. The first `Dense` layer has 128 nodes (or neurons). The second (and last) layer (which you've defined!) should return an array of probability scores that sum to 1. Each node contains a score that indicates the probability that the current image belongs to one of the handwritten digit classes.That defines our fully connected model! Compile the modelBefore training the model, we need to define a few more settings. These are added during the model's [`compile`](https://www.tensorflow.org/api_docs/python/tf/keras/models/Sequential#compile) step:* *Loss function* — This defines how we measure how accurate the model is during training. As was covered in lecture, during training we want to minimize this function, which will "steer" the model in the right direction.* *Optimizer* — This defines how the model is updated based on the data it sees and its loss function.* *Metrics* — Here we can define metrics used to monitor the training and testing steps. In this example, we'll look at the *accuracy*, the fraction of the images that are correctly classified.We'll start out by using a stochastic gradient descent (SGD) optimizer initialized with a learning rate of 0.1. Since we are performing a categorical classification task, we'll want to use the [cross entropy loss](https://www.tensorflow.org/api_docs/python/tf/keras/metrics/sparse_categorical_crossentropy).You'll want to experiment with both the choice of optimizer and learning rate and evaluate how these affect the accuracy of the trained model.
###Code
'''TODO: Experiment with different optimizers and learning rates. How do these affect
the accuracy of the trained model? Which optimizers and/or learning rates yield
the best performance?'''
model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=1e-1),
loss='sparse_categorical_crossentropy',
metrics=['accuracy'])
###Output
_____no_output_____
###Markdown
Train the modelWe're now ready to train our model, which will involve feeding the training data (`train_images` and `train_labels`) into the model, and then asking it to learn the associations between images and labels. We'll also need to define the batch size and the number of epochs, or iterations over the MNIST dataset, to use during training. In Lab 1, we saw how we can use `GradientTape` to optimize losses and train models with stochastic gradient descent. After defining the model settings in the `compile` step, we can also accomplish training by calling the [`fit`](https://www.tensorflow.org/api_docs/python/tf/keras/models/Sequential#fit) method on an instance of the `Model` class. We will use this to train our fully connected model.
###Code
# Define the batch size and the number of epochs to use during training
BATCH_SIZE = 64
EPOCHS = 5
model.fit(train_images, train_labels, batch_size=BATCH_SIZE, epochs=EPOCHS)
###Output
Epoch 1/5
938/938 [==============================] - 2s 2ms/step - loss: 0.3630 - accuracy: 0.8982
Epoch 2/5
938/938 [==============================] - 2s 2ms/step - loss: 0.1938 - accuracy: 0.9449
Epoch 3/5
938/938 [==============================] - 2s 2ms/step - loss: 0.1475 - accuracy: 0.9576
Epoch 4/5
938/938 [==============================] - 2s 2ms/step - loss: 0.1206 - accuracy: 0.9654
Epoch 5/5
938/938 [==============================] - 2s 2ms/step - loss: 0.1015 - accuracy: 0.9715
###Markdown
As the model trains, the loss and accuracy metrics are displayed. With five epochs and a learning rate of 0.1, this fully connected model should achieve an accuracy of approximately 0.97 (or 97%) on the training data. Evaluate accuracy on the test datasetNow that we've trained the model, we can ask it to make predictions about a test set that it hasn't seen before. In this example, the `test_images` array comprises our test dataset. To evaluate accuracy, we can check to see if the model's predictions match the labels from the `test_labels` array. Use the [`evaluate`](https://www.tensorflow.org/api_docs/python/tf/keras/models/Sequential#evaluate) method to evaluate the model on the test dataset!
###Code
'''TODO: Use the evaluate method to test the model!'''
test_loss, test_acc = model.evaluate(test_images, test_labels)
print('Test accuracy:', test_acc)
###Output
313/313 [==============================] - 1s 2ms/step - loss: 0.1067 - accuracy: 0.9686
Test accuracy: 0.9685999751091003
###Markdown
You may observe that the accuracy on the test dataset is a little lower than the accuracy on the training dataset. This gap between training accuracy and test accuracy is an example of *overfitting*, when a machine learning model performs worse on new data than on its training data. What is the highest accuracy you can achieve with this first fully connected model? Since the handwritten digit classification task is pretty straightforward, you may be wondering how we can do better...![Deeper...](https://i.kym-cdn.com/photos/images/newsfeed/000/534/153/f87.jpg) 1.3 Convolutional Neural Network (CNN) for handwritten digit classification As we saw in lecture, convolutional neural networks (CNNs) are particularly well-suited for a variety of tasks in computer vision, and have achieved near-perfect accuracies on the MNIST dataset. We will now build a CNN composed of two convolutional layers and pooling layers, followed by two fully connected layers, and ultimately output a probability distribution over the 10 digit classes (0-9). The CNN we will be building is depicted below:![alt_text](https://raw.githubusercontent.com/aamini/introtodeeplearning/master/lab2/img/convnet_fig.png "CNN Architecture for MNIST Classification") Define the CNN modelWe'll use the same training and test datasets as before, and proceed similarly as our fully connected network to define and train our new CNN model. To do this we will explore two layers we have not encountered before: you can use [`keras.layers.Conv2D` ](https://www.tensorflow.org/api_docs/python/tf/keras/layers/Conv2D) to define convolutional layers and [`keras.layers.MaxPool2D`](https://www.tensorflow.org/api_docs/python/tf/keras/layers/MaxPool2D) to define the pooling layers. Use the parameters shown in the network architecture above to define these layers and build the CNN model.
###Code
def build_cnn_model():
cnn_model = tf.keras.Sequential([
# TODO: Define the first convolutional layer
tf.keras.layers.Conv2D(24, input_shape=(28, 28, 1), kernel_size=(4,4)),
# TODO: Define the first max pooling layer
tf.keras.layers.MaxPool2D(),
# TODO: Define the second convolutional layer
tf.keras.layers.Conv2D(12, kernel_size=(2, 2)),
# TODO: Define the second max pooling layer
tf.keras.layers.MaxPool2D(),
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(128, activation=tf.nn.relu),
# TODO: Define the last Dense layer to output the classification
# probabilities. Pay attention to the activation needed for a probability
# output
# '''TODO: Dense layer to output classification probabilities'''
tf.keras.layers.Dense(len(set(train_labels)), activation= 'softmax'),
])
return cnn_model
cnn_model = build_cnn_model()
# Initialize the model by passing some data through
cnn_model.predict(train_images[[0]])
# Print the summary of the layers in the model.
print(cnn_model.summary())
###Output
Model: "sequential_4"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
conv2d (Conv2D) (None, 25, 25, 24) 408
_________________________________________________________________
max_pooling2d (MaxPooling2D) (None, 12, 12, 24) 0
_________________________________________________________________
conv2d_1 (Conv2D) (None, 11, 11, 12) 1164
_________________________________________________________________
max_pooling2d_1 (MaxPooling2 (None, 5, 5, 12) 0
_________________________________________________________________
flatten_5 (Flatten) (None, 300) 0
_________________________________________________________________
dense_8 (Dense) (None, 128) 38528
_________________________________________________________________
dense_9 (Dense) (None, 10) 1290
=================================================================
Total params: 41,390
Trainable params: 41,390
Non-trainable params: 0
_________________________________________________________________
None
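###Markdown
As a quick sanity check of this summary (a sketch assuming 'valid' padding and stride 1; variable names are just for illustration), the 4x4 convolution shrinks 28 to 25, pooling halves it to 12, the 2x2 convolution gives 11, and pooling again gives 5, so the parameter counts follow by hand:
###Code
# Spatial sizes: 28 -> conv 4x4 -> 25 -> pool 2x2 -> 12 -> conv 2x2 -> 11 -> pool 2x2 -> 5
conv1_params = 24 * (4 * 4 * 1) + 24      # 408 (weights + biases)
conv2_params = 12 * (2 * 2 * 24) + 12     # 1164
dense1_params = (5 * 5 * 12) * 128 + 128  # 38528
dense2_params = 128 * 10 + 10             # 1290
print(conv1_params + conv2_params + dense1_params + dense2_params)  # 41390, matching the summary
###Output
_____no_output_____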
###Markdown
Train and test the CNN modelNow, as before, we can define the loss function, optimizer, and metrics through the `compile` method. Compile the CNN model with an optimizer and learning rate of choice:
###Code
'''TODO: Define the compile operation with your optimizer and learning rate of choice'''
cnn_model.compile(optimizer='adam', loss=tf.losses.sparse_categorical_crossentropy, metrics=['accuracy']) # TODO
###Output
_____no_output_____
###Markdown
As was the case with the fully connected model, we can train our CNN using the `fit` method via the Keras API.
###Code
'''TODO: Use model.fit to train the CNN model, with the same batch_size and number of epochs previously used.'''
cnn_model.fit(train_images, train_labels, epochs=EPOCHS, batch_size=BATCH_SIZE)
###Output
Epoch 1/5
938/938 [==============================] - 3s 3ms/step - loss: 0.2523 - accuracy: 0.9259
Epoch 2/5
938/938 [==============================] - 3s 3ms/step - loss: 0.0751 - accuracy: 0.9762
Epoch 3/5
938/938 [==============================] - 3s 3ms/step - loss: 0.0516 - accuracy: 0.9837
Epoch 4/5
938/938 [==============================] - 3s 3ms/step - loss: 0.0400 - accuracy: 0.9874
Epoch 5/5
938/938 [==============================] - 3s 3ms/step - loss: 0.0298 - accuracy: 0.9906
###Markdown
Great! Now that we've trained the model, let's evaluate it on the test dataset using the [`evaluate`](https://www.tensorflow.org/api_docs/python/tf/keras/models/Sequential#evaluate) method:
###Code
'''TODO: Use the evaluate method to test the model!'''
test_loss, test_acc = cnn_model.evaluate(test_images, test_labels)
print('Test accuracy:', test_acc)
###Output
313/313 [==============================] - 1s 2ms/step - loss: 0.0400 - accuracy: 0.9872
Test accuracy: 0.9872000217437744
###Markdown
What is the highest accuracy you're able to achieve using the CNN model, and how does the accuracy of the CNN model compare to the accuracy of the simple fully connected network? What optimizers and learning rates seem to be optimal for training the CNN model? Make predictions with the CNN modelWith the model trained, we can use it to make predictions about some images. The [`predict`](https://www.tensorflow.org/api_docs/python/tf/keras/models/Sequential#predict) function call generates the output predictions given a set of input samples.
###Code
predictions = cnn_model.predict(test_images)
###Output
_____no_output_____
###Markdown
With this function call, the model has predicted the label for each image in the testing set. Let's take a look at the prediction for the first image in the test dataset:
###Code
predictions[0]
###Output
_____no_output_____
###Markdown
As you can see, a prediction is an array of 10 numbers. Recall that the output of our model is a probability distribution over the 10 digit classes. Thus, these numbers describe the model's "confidence" that the image corresponds to each of the 10 different digits. Let's look at the digit that has the highest confidence for the first image in the test dataset:
###Code
'''TODO: identify the digit with the highest confidence prediction for the first
image in the test dataset. '''
prediction = np.argmax(predictions[0])  # index of the highest-confidence class for the first test image
print(prediction)
###Output
7
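###Markdown
We can also read off the score behind that choice (a quick look at the same `predictions` array; for a softmax output layer this is the model's confidence in the predicted class):
###Code
# Highest class score for the first test image
print(predictions[0].max())
###Output
_____no_output_____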
###Markdown
So, the model is most confident that this image is a "???". We can check the test label (remember, this is the true identity of the digit) to see if this prediction is correct:
###Code
print("Label of this digit is:", test_labels[0])
plt.imshow(test_images[0,:,:,0], cmap=plt.cm.binary)
###Output
Label of this digit is: 7
###Markdown
It is! Let's visualize the classification results on the MNIST dataset. We will plot images from the test dataset along with their predicted label, as well as a histogram that provides the prediction probabilities for each of the digits:
###Code
#@title Change the slider to look at the model's predictions! { run: "auto" }
image_index = 79 #@param {type:"slider", min:0, max:100, step:1}
plt.subplot(1,2,1)
mdl.lab2.plot_image_prediction(image_index, predictions, test_labels, test_images)
plt.subplot(1,2,2)
mdl.lab2.plot_value_prediction(image_index, predictions, test_labels)
###Output
_____no_output_____
###Markdown
We can also plot several images along with their predictions, where correct prediction labels are blue and incorrect prediction labels are red. The number gives the percent confidence (out of 100) for the predicted label. Note the model can be very confident in an incorrect prediction!
###Code
# Plots the first X test images, their predicted label, and the true label
# Color correct predictions in blue, incorrect predictions in red
num_rows = 5
num_cols = 4
num_images = num_rows*num_cols
plt.figure(figsize=(2*2*num_cols, 2*num_rows))
for i in range(num_images):
plt.subplot(num_rows, 2*num_cols, 2*i+1)
mdl.lab2.plot_image_prediction(i, predictions, test_labels, test_images)
plt.subplot(num_rows, 2*num_cols, 2*i+2)
mdl.lab2.plot_value_prediction(i, predictions, test_labels)
###Output
_____no_output_____
###Markdown
1.4 Training the model 2.0Earlier in the lab, we used the [`fit`](https://www.tensorflow.org/api_docs/python/tf/keras/models/Sequentialfit) function call to train the model. This function is quite high-level and intuitive, which is really useful for simpler models. As you may be able to tell, this function abstracts away many details in the training call, and we have less control over training the model; having that control could be useful in other contexts. As an alternative to this, we can use the [`tf.GradientTape`](https://www.tensorflow.org/api_docs/python/tf/GradientTape) class to record differentiation operations during training, and then call the [`tf.GradientTape.gradient`](https://www.tensorflow.org/api_docs/python/tf/GradientTapegradient) function to actually compute the gradients. You may recall seeing this in Lab 1 Part 1, but let's take another look at this here.We'll use this framework to train our `cnn_model` using stochastic gradient descent.
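Before the full loop, here is a minimal refresher of `tf.GradientTape` on a toy function (a hedged sketch, unrelated to the model below):
###Code
# GradientTape refresher: dy/dx for y = x^2 at x = 3.0 should be 2*3.0 = 6.0
x = tf.Variable(3.0)
with tf.GradientTape() as tape:
    y = x * x
dy_dx = tape.gradient(y, x)
print(dy_dx.numpy())  # 6.0
###Output
_____no_output_____
###Markdown
Now the same machinery, applied to the CNN: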
###Code
# Rebuild the CNN model
cnn_model = build_cnn_model()
batch_size = 12
loss_history = mdl.util.LossHistory(smoothing_factor=0.95) # to record the evolution of the loss
plotter = mdl.util.PeriodicPlotter(sec=2, xlabel='Iterations', ylabel='Loss', scale='semilogy')
optimizer = tf.keras.optimizers.Adam() # define our optimizer
if hasattr(tqdm, '_instances'): tqdm._instances.clear() # clear if it exists
for idx in tqdm(range(0, train_images.shape[0], batch_size)):
# First grab a batch of training data and convert the input images to tensors
(images, labels) = (train_images[idx:idx+batch_size], train_labels[idx:idx+batch_size])
images = tf.convert_to_tensor(images, dtype=tf.float32)
# GradientTape to record differentiation operations
with tf.GradientTape() as tape:
#'''TODO: feed the images into the model and obtain the predictions'''
logits = cnn_model(images)
#'''TODO: compute the categorical cross entropy loss
loss_value = tf.keras.backend.sparse_categorical_crossentropy(labels, logits, from_logits=True) # TODO
loss_history.append(loss_value.numpy().mean()) # append the loss to the loss_history record
plotter.plot(loss_history.get())
# Backpropagation
'''TODO: Use the tape to compute the gradient against all parameters in the CNN model.
Use cnn_model.trainable_variables to access these parameters.'''
grads = tape.gradient(loss_value, cnn_model.trainable_variables)
optimizer.apply_gradients(zip(grads, cnn_model.trainable_variables))
import os
# Save the weights learned by the custom training loop to a checkpoint on disk
checkpoint_dir = './training_checkpoints'
checkpoint_prefix = os.path.join(checkpoint_dir, "my_ckpt")
cnn_model.save_weights(checkpoint_prefix)
# Recompile, restore the latest checkpoint, and evaluate on the held-out test set
cnn_model.compile(optimizer=tf.keras.optimizers.Adam(), loss=tf.losses.sparse_categorical_crossentropy, metrics=['accuracy'])
cnn_model.load_weights(tf.train.latest_checkpoint(checkpoint_dir))
cnn_model.build(tf.TensorShape([1, 28, 28, 1]))  # MNIST input shape; [1, None] does not match a Conv2D input
test_loss, test_acc = cnn_model.evaluate(test_images, test_labels)
print('Test accuracy:', test_acc)
###Output
313/313 [==============================] - 1s 2ms/step - loss: 0.3213 - accuracy: 0.9572
Test accuracy: 0.9571999907493591
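###Markdown
Saving and restoring weights this way is handy when training is interrupted or finished elsewhere: `tf.train.latest_checkpoint` finds the most recent checkpoint in the directory, so evaluation (or further training) can pick up without starting from scratch.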
###Markdown
Visit MIT Deep Learning Run in Google Colab View Source on GitHub Copyright Information
###Code
# Copyright 2020 MIT 6.S191 Introduction to Deep Learning. All Rights Reserved.
#
# Licensed under the MIT License. You may not use this file except in compliance
# with the License. Use and/or modification of this code outside of 6.S191 must
# reference:
#
# © MIT 6.S191: Introduction to Deep Learning
# http://introtodeeplearning.com
#
###Output
_____no_output_____
###Markdown
Laboratory 2: Computer Vision Part 1: MNIST Digit ClassificationIn the first portion of this lab, we will build and train a convolutional neural network (CNN) for classification of handwritten digits from the famous [MNIST](http://yann.lecun.com/exdb/mnist/) dataset. The MNIST dataset consists of 60,000 training images and 10,000 test images. Our classes are the digits 0-9.First, let's download the course repository, install dependencies, and import the relevant packages we'll need for this lab.
###Code
# Import Tensorflow 2.0
%tensorflow_version 2.x
import tensorflow as tf
!pip install mitdeeplearning
import mitdeeplearning as mdl
import matplotlib.pyplot as plt
import numpy as np
import random
from tqdm import tqdm
# Check that we are using a GPU, if not switch runtimes
# using Runtime > Change Runtime Type > GPU
assert len(tf.config.list_physical_devices('GPU')) > 0
###Output
Collecting mitdeeplearning
[?25l Downloading https://files.pythonhosted.org/packages/8b/3b/b9174b68dc10832356d02a2d83a64b43a24f1762c172754407d22fc8f960/mitdeeplearning-0.1.2.tar.gz (2.1MB)
[K |████████████████████████████████| 2.1MB 9.3MB/s
[?25hRequirement already satisfied: numpy in /usr/local/lib/python3.6/dist-packages (from mitdeeplearning) (1.19.4)
Requirement already satisfied: regex in /usr/local/lib/python3.6/dist-packages (from mitdeeplearning) (2019.12.20)
Requirement already satisfied: tqdm in /usr/local/lib/python3.6/dist-packages (from mitdeeplearning) (4.41.1)
Requirement already satisfied: gym in /usr/local/lib/python3.6/dist-packages (from mitdeeplearning) (0.17.3)
Requirement already satisfied: cloudpickle<1.7.0,>=1.2.0 in /usr/local/lib/python3.6/dist-packages (from gym->mitdeeplearning) (1.3.0)
Requirement already satisfied: scipy in /usr/local/lib/python3.6/dist-packages (from gym->mitdeeplearning) (1.4.1)
Requirement already satisfied: pyglet<=1.5.0,>=1.4.0 in /usr/local/lib/python3.6/dist-packages (from gym->mitdeeplearning) (1.5.0)
Requirement already satisfied: future in /usr/local/lib/python3.6/dist-packages (from pyglet<=1.5.0,>=1.4.0->gym->mitdeeplearning) (0.16.0)
Building wheels for collected packages: mitdeeplearning
Building wheel for mitdeeplearning (setup.py) ... [?25l[?25hdone
Created wheel for mitdeeplearning: filename=mitdeeplearning-0.1.2-cp36-none-any.whl size=2114587 sha256=0f636484fce8708c131ccadb4381e078263ac625b9728df5b64f414020a22751
Stored in directory: /root/.cache/pip/wheels/27/e1/73/5f01c787621d8a3c857f59876c79e304b9b64db9ff5bd61b74
Successfully built mitdeeplearning
Installing collected packages: mitdeeplearning
Successfully installed mitdeeplearning-0.1.2
###Markdown
1.1 MNIST dataset Let's download and load the dataset and display a few random samples from it:
###Code
mnist = tf.keras.datasets.mnist
(train_images, train_labels), (test_images, test_labels) = mnist.load_data()
train_images = (np.expand_dims(train_images, axis=-1)/255.).astype(np.float32)
train_labels = (train_labels).astype(np.int64)
test_images = (np.expand_dims(test_images, axis=-1)/255.).astype(np.float32)
test_labels = (test_labels).astype(np.int64)
###Output
Downloading data from https://storage.googleapis.com/tensorflow/tf-keras-datasets/mnist.npz
11493376/11490434 [==============================] - 0s 0us/step
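###Markdown
It's worth verifying that the preprocessing did what we expect (a quick check on the arrays defined above):
###Code
# Shapes: 60,000 training and 10,000 test images, each 28x28 with one channel
print(train_images.shape, test_images.shape)
# Pixel range after dividing by 255: [0.0, 1.0]
print(train_images.min(), train_images.max())
# Labels are integer class indices 0-9
print(train_labels[:10])
###Output
_____no_output_____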
###Markdown
Our training set is made up of 28x28 grayscale images of handwritten digits. Let's visualize what some of these images and their corresponding training labels look like.
###Code
plt.figure(figsize=(10,10))
random_inds = np.random.choice(60000,36)
for i in range(36):
plt.subplot(6,6,i+1)
plt.xticks([])
plt.yticks([])
plt.grid(False)
image_ind = random_inds[i]
plt.imshow(np.squeeze(train_images[image_ind]), cmap=plt.cm.binary)
plt.xlabel(train_labels[image_ind])
###Output
_____no_output_____
###Markdown
1.2 Neural Network for Handwritten Digit ClassificationWe'll first build a simple neural network consisting of two fully connected layers and apply this to the digit classification task. Our network will ultimately output a probability distribution over the 10 digit classes (0-9). This first architecture we will be building is depicted below:![alt_text](https://raw.githubusercontent.com/aamini/introtodeeplearning/master/lab2/img/mnist_2layers_arch.png "CNN Architecture for MNIST Classification") Fully connected neural network architectureTo define the architecture of this first fully connected neural network, we'll once again use the Keras API and define the model using the [`Sequential`](https://www.tensorflow.org/api_docs/python/tf/keras/models/Sequential) class. Note how we first use a [`Flatten`](https://www.tensorflow.org/api_docs/python/tf/keras/layers/Flatten) layer, which flattens the input so that it can be fed into the model. In this next block, you'll define the fully connected layers of this simple network.
###Code
def build_fc_model():
fc_model = tf.keras.Sequential([
# First define a Flatten layer
tf.keras.layers.Flatten(),
# '''TODO: Define the activation function for the first fully connected (Dense) layer.'''
tf.keras.layers.Dense(128, activation='relu'),
tf.keras.layers.Dropout(0.5),
tf.keras.layers.Dense(128, activation='relu'),
tf.keras.layers.Dropout(0.3),
tf.keras.layers.Dense(128, activation='relu'),
tf.keras.layers.Dropout(0.1),
tf.keras.layers.Dense(128, activation='relu'),
# '''TODO: Define the second Dense layer to output the classification probabilities'''
tf.keras.layers.Dense(10, activation='softmax')
])
return fc_model
model = build_fc_model()
###Output
_____no_output_____
###Markdown
As we progress through this next portion, you may find that you'll want to make changes to the architecture defined above. **Note that in order to update the model later on, you'll need to re-run the above cell to re-initialize the model.** Let's take a step back and think about the network we've just created. The first layer in this network, `tf.keras.layers.Flatten`, transforms the format of the images from a 2d-array (28 x 28 pixels), to a 1d-array of 28 * 28 = 784 pixels. You can think of this layer as unstacking rows of pixels in the image and lining them up. There are no learned parameters in this layer; it only reformats the data.After the pixels are flattened, the network consists of a sequence of two `tf.keras.layers.Dense` layers. These are fully-connected neural layers. The first `Dense` layer has 128 nodes (or neurons). The second (and last) layer (which you've defined!) should return an array of probability scores that sum to 1. Each node contains a score that indicates the probability that the current image belongs to one of the handwritten digit classes.That defines our fully connected model! Compile the modelBefore training the model, we need to define a few more settings. These are added during the model's [`compile`](https://www.tensorflow.org/api_docs/python/tf/keras/models/Sequentialcompile) step:* *Loss function* — This defines how we measure how accurate the model is during training. As was covered in lecture, during training we want to minimize this function, which will "steer" the model in the right direction.* *Optimizer* — This defines how the model is updated based on the data it sees and its loss function.* *Metrics* — Here we can define metrics used to monitor the training and testing steps. In this example, we'll look at the *accuracy*, the fraction of the images that are correctly classified.We'll start out by using a stochastic gradient descent (SGD) optimizer initialized with a learning rate of 0.1. Since we are performing a categorical classification task, we'll want to use the [cross entropy loss](https://www.tensorflow.org/api_docs/python/tf/keras/metrics/sparse_categorical_crossentropy).You'll want to experiment with both the choice of optimizer and learning rate and evaluate how these affect the accuracy of the trained model.
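As a quick illustration of the loss we're about to choose (a minimal sketch with made-up numbers, separate from the model above): sparse categorical cross-entropy is simply the negative log of the probability the model assigns to the true class.
###Code
# Sparse categorical cross-entropy on a hypothetical softmax output (illustrative only)
example_probs = tf.constant([[0.05, 0.05, 0.70, 0.20]])  # made-up class probabilities
example_label = tf.constant([2])                         # true class index
loss = tf.keras.losses.sparse_categorical_crossentropy(example_label, example_probs)
print(loss.numpy())  # ~0.357, i.e. -log(0.70)
###Output
_____no_output_____
###Markdown
With that intuition in hand, compile the model: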
###Code
'''TODO: Experiment with different optimizers and learning rates. How do these affect
the accuracy of the trained model? Which optimizers and/or learning rates yield
the best performance?'''
model = build_fc_model()
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3),
loss='sparse_categorical_crossentropy',
metrics=['accuracy'])
###Output
_____no_output_____
###Markdown
Train the modelWe're now ready to train our model, which will involve feeding the training data (`train_images` and `train_labels`) into the model, and then asking it to learn the associations between images and labels. We'll also need to define the batch size and the number of epochs, or iterations over the MNIST dataset, to use during training. In Lab 1, we saw how we can use `GradientTape` to optimize losses and train models with stochastic gradient descent. After defining the model settings in the `compile` step, we can also accomplish training by calling the [`fit`](https://www.tensorflow.org/api_docs/python/tf/keras/models/Sequentialfit) method on an instance of the `Model` class. We will use this to train our fully connected model.
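One small sanity check before training (a side calculation, not part of the lab itself): with 60,000 training images and the batch size of 64 defined below, each epoch consists of ceil(60000/64) gradient updates, which is exactly the step counter you'll see in the logs.
###Code
# Steps per epoch = ceil(training set size / batch size)
import math
print(math.ceil(60000 / 64))  # 938 -> matches the "938/938" progress counter below
###Output
_____no_output_____
###Markdown
Now run the training: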
###Code
# Define the batch size and the number of epochs to use during training
BATCH_SIZE = 64
EPOCHS = 15
model.fit(train_images, train_labels, batch_size=BATCH_SIZE, epochs=EPOCHS)
###Output
Epoch 1/15
938/938 [==============================] - 3s 3ms/step - loss: 0.8517 - accuracy: 0.7174
Epoch 2/15
938/938 [==============================] - 2s 3ms/step - loss: 0.2946 - accuracy: 0.9132
Epoch 3/15
938/938 [==============================] - 2s 3ms/step - loss: 0.2372 - accuracy: 0.9304
Epoch 4/15
938/938 [==============================] - 2s 3ms/step - loss: 0.2157 - accuracy: 0.9363
Epoch 5/15
938/938 [==============================] - 2s 3ms/step - loss: 0.1911 - accuracy: 0.9428
Epoch 6/15
938/938 [==============================] - 2s 3ms/step - loss: 0.1859 - accuracy: 0.9455
Epoch 7/15
938/938 [==============================] - 2s 3ms/step - loss: 0.1700 - accuracy: 0.9496
Epoch 8/15
938/938 [==============================] - 2s 3ms/step - loss: 0.1556 - accuracy: 0.9528
Epoch 9/15
938/938 [==============================] - 3s 3ms/step - loss: 0.1518 - accuracy: 0.9544
Epoch 10/15
938/938 [==============================] - 3s 3ms/step - loss: 0.1517 - accuracy: 0.9541
Epoch 11/15
938/938 [==============================] - 2s 3ms/step - loss: 0.1409 - accuracy: 0.9575
Epoch 12/15
938/938 [==============================] - 3s 3ms/step - loss: 0.1406 - accuracy: 0.9589
Epoch 13/15
938/938 [==============================] - 3s 3ms/step - loss: 0.1334 - accuracy: 0.9612
Epoch 14/15
938/938 [==============================] - 3s 3ms/step - loss: 0.1279 - accuracy: 0.9628
Epoch 15/15
938/938 [==============================] - 2s 3ms/step - loss: 0.1315 - accuracy: 0.9615
###Markdown
As the model trains, the loss and accuracy metrics are displayed. With five epochs and a learning rate of 0.01, this fully connected model should achieve an accuracy of approximately 0.97 (or 97%) on the training data. Evaluate accuracy on the test datasetNow that we've trained the model, we can ask it to make predictions about a test set that it hasn't seen before. In this example, the `test_images` array comprises our test dataset. To evaluate accuracy, we can check to see if the model's predictions match the labels from the `test_labels` array. Use the [`evaluate`](https://www.tensorflow.org/api_docs/python/tf/keras/models/Sequentialevaluate) method to evaluate the model on the test dataset!
###Code
'''TODO: Use the evaluate method to test the model!'''
test_loss, test_acc = model.evaluate(test_images, test_labels)
print('Test accuracy:', test_acc)
###Output
313/313 [==============================] - 1s 2ms/step - loss: 0.0809 - accuracy: 0.9764
Test accuracy: 0.9764000177383423
###Markdown
You may observe that the accuracy on the test dataset is a little lower than the accuracy on the training dataset. This gap between training accuracy and test accuracy is an example of *overfitting*, when a machine learning model performs worse on new data than on its training data. What is the highest accuracy you can achieve with this first fully connected model? Since the handwritten digit classification task is pretty straightforward, you may be wondering how we can do better...![Deeper...](https://i.kym-cdn.com/photos/images/newsfeed/000/534/153/f87.jpg) 1.3 Convolutional Neural Network (CNN) for handwritten digit classification As we saw in lecture, convolutional neural networks (CNNs) are particularly well-suited for a variety of tasks in computer vision, and have achieved near-perfect accuracies on the MNIST dataset. We will now build a CNN composed of two convolutional layers and pooling layers, followed by two fully connected layers, and ultimately output a probability distribution over the 10 digit classes (0-9). The CNN we will be building is depicted below:![alt_text](https://raw.githubusercontent.com/aamini/introtodeeplearning/master/lab2/img/convnet_fig.png "CNN Architecture for MNIST Classification") Define the CNN modelWe'll use the same training and test datasets as before, and proceed similarly to our fully connected network to define and train our new CNN model. To do this we will explore two layers we have not encountered before: you can use [`keras.layers.Conv2D`](https://www.tensorflow.org/api_docs/python/tf/keras/layers/Conv2D) to define convolutional layers and [`keras.layers.MaxPool2D`](https://www.tensorflow.org/api_docs/python/tf/keras/layers/MaxPool2D) to define the pooling layers. Use the parameters shown in the network architecture above to define these layers and build the CNN model.
###Code
def build_cnn_model():
cnn_model = tf.keras.Sequential([
# TODO: Define the first convolutional layer
tf.keras.layers.Conv2D(filters=24, kernel_size=(3, 3), activation='relu'),
# TODO: Define the first max pooling layer
tf.keras.layers.MaxPool2D(pool_size=(2, 2)),
# TODO: Define the second convolutional layer
tf.keras.layers.Conv2D(filters=36, kernel_size=(3, 3), activation='relu'),  # ReLU, matching the architecture diagram
# TODO: Define the second max pooling layer
tf.keras.layers.MaxPool2D(pool_size=(2, 2)),
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(128, activation=tf.nn.relu),
# TODO: Define the last Dense layer to output the classification
# probabilities. Pay attention to the activation needed for a probability
# output
tf.keras.layers.Dense(10, activation=tf.nn.softmax)
])
return cnn_model
cnn_model = build_cnn_model()
# Initialize the model by passing some data through
cnn_model.predict(train_images[[0]])
# Print the summary of the layers in the model.
print(cnn_model.summary())
###Output
Model: "sequential_20"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
conv2d_2 (Conv2D) (None, 26, 26, 24) 240
_________________________________________________________________
max_pooling2d_2 (MaxPooling2 (None, 13, 13, 24) 0
_________________________________________________________________
conv2d_3 (Conv2D) (None, 11, 11, 36) 7812
_________________________________________________________________
max_pooling2d_3 (MaxPooling2 (None, 5, 5, 36) 0
_________________________________________________________________
flatten_21 (Flatten) (None, 900) 0
_________________________________________________________________
dense_61 (Dense) (None, 128) 115328
_________________________________________________________________
dense_62 (Dense) (None, 10) 1290
=================================================================
Total params: 124,670
Trainable params: 124,670
Non-trainable params: 0
_________________________________________________________________
None
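###Markdown
As a sanity check on the summary above, each parameter count follows directly from the layer shapes (a small hand calculation; the numbers should match the printed summary):
###Code
# Conv2D params = (kernel_h * kernel_w * input_channels + 1 bias) * filters
print((3*3*1 + 1) * 24)    # conv2d_2: 240
print((3*3*24 + 1) * 36)   # conv2d_3: 7812
# Dense params = (inputs + 1 bias) * units
print((900 + 1) * 128)     # dense_61: 115328
print((128 + 1) * 10)      # dense_62: 1290
###Output
_____no_output_____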
###Markdown
Train and test the CNN modelNow, as before, we can define the loss function, optimizer, and metrics through the `compile` method. Compile the CNN model with an optimizer and learning rate of choice:
###Code
'''TODO: Define the compile operation with your optimizer and learning rate of choice'''
cnn_model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3),
loss='sparse_categorical_crossentropy',
metrics=['accuracy']) # TODO
###Output
_____no_output_____
###Markdown
As was the case with the fully connected model, we can train our CNN using the `fit` method via the Keras API.
###Code
'''TODO: Use model.fit to train the CNN model, with the same batch_size and number of epochs previously used.'''
BATCH_SIZE = 64
EPOCHS = 5
cnn_model.fit(train_images, train_labels, batch_size=BATCH_SIZE, epochs=EPOCHS)
###Output
Epoch 1/5
938/938 [==============================] - 4s 3ms/step - loss: 0.0367 - accuracy: 0.9888
Epoch 2/5
938/938 [==============================] - 3s 3ms/step - loss: 0.0284 - accuracy: 0.9912
Epoch 3/5
938/938 [==============================] - 3s 4ms/step - loss: 0.0222 - accuracy: 0.9927
Epoch 4/5
938/938 [==============================] - 3s 4ms/step - loss: 0.0173 - accuracy: 0.9942
Epoch 5/5
938/938 [==============================] - 4s 4ms/step - loss: 0.0133 - accuracy: 0.9955
###Markdown
Great! Now that we've trained the model, let's evaluate it on the test dataset using the [`evaluate`](https://www.tensorflow.org/api_docs/python/tf/keras/models/Sequentialevaluate) method:
###Code
'''TODO: Use the evaluate method to test the model!'''
test_loss, test_acc = cnn_model.evaluate(test_images, test_labels)
print('Test accuracy:', test_acc)
###Output
313/313 [==============================] - 1s 2ms/step - loss: 0.0380 - accuracy: 0.9904
Test accuracy: 0.9904000163078308
###Markdown
What is the highest accuracy you're able to achieve using the CNN model, and how does the accuracy of the CNN model compare to the accuracy of the simple fully connected network? What optimizers and learning rates seem to be optimal for training the CNN model? Make predictions with the CNN modelWith the model trained, we can use it to make predictions about some images. The [`predict`](https://www.tensorflow.org/api_docs/python/tf/keras/models/Sequentialpredict) function call generates the output predictions given a set of input samples.
###Code
predictions = cnn_model.predict(test_images)
###Output
_____no_output_____
###Markdown
With this function call, the model has predicted the label for each image in the testing set. Let's take a look at the prediction for the first image in the test dataset:
###Code
predictions[0]
###Output
_____no_output_____
###Markdown
As you can see, a prediction is an array of 10 numbers. Recall that the output of our model is a probability distribution over the 10 digit classes. Thus, these numbers describe the model's "confidence" that the image corresponds to each of the 10 different digits. Let's look at the digit that has the highest confidence for the first image in the test dataset:
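Before taking the argmax, we can confirm that these ten numbers really do form a probability distribution (a quick check on the `predictions` array computed above):
###Code
# The final layer is a softmax, so the ten confidences should sum to ~1
print(predictions[0].sum())
###Output
_____no_output_____
###Markdown
Now pick out the most likely digit: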
###Code
'''TODO: identify the digit with the highest confidence prediction for the first
image in the test dataset. '''
prediction = np.argmax(predictions[0])
print(prediction)
###Output
7
###Markdown
So, the model is most confident that this image is a "???". We can check the test label (remember, this is the true identity of the digit) to see if this prediction is correct:
###Code
print("Label of this digit is:", test_labels[0])
plt.imshow(test_images[0,:,:,0], cmap=plt.cm.binary)
###Output
Label of this digit is: 7
###Markdown
It is! Let's visualize the classification results on the MNIST dataset. We will plot images from the test dataset along with their predicted label, as well as a histogram that provides the prediction probabilities for each of the digits:
###Code
#@title Change the slider to look at the model's predictions! { run: "auto" }
image_index = 52 #@param {type:"slider", min:0, max:100, step:1}
plt.subplot(1,2,1)
mdl.lab2.plot_image_prediction(image_index, predictions, test_labels, test_images)
plt.subplot(1,2,2)
mdl.lab2.plot_value_prediction(image_index, predictions, test_labels)
###Output
_____no_output_____
###Markdown
We can also plot several images along with their predictions, where correct prediction labels are blue and incorrect prediction labels are red. The number gives the percent confidence (out of 100) for the predicted label. Note the model can be very confident in an incorrect prediction!
###Code
# Plots the first X test images, their predicted label, and the true label
# Color correct predictions in blue, incorrect predictions in red
num_rows = 5
num_cols = 4
num_images = num_rows*num_cols
plt.figure(figsize=(2*2*num_cols, 2*num_rows))
for i in range(num_images):
plt.subplot(num_rows, 2*num_cols, 2*i+1)
mdl.lab2.plot_image_prediction(i, predictions, test_labels, test_images)
plt.subplot(num_rows, 2*num_cols, 2*i+2)
mdl.lab2.plot_value_prediction(i, predictions, test_labels)
###Output
_____no_output_____
###Markdown
1.4 Training the model 2.0Earlier in the lab, we used the [`fit`](https://www.tensorflow.org/api_docs/python/tf/keras/models/Sequentialfit) function call to train the model. This function is quite high-level and intuitive, which is really useful for simpler models. As you may be able to tell, this function abstracts away many details in the training call, and we have less control over training the model; having that control could be useful in other contexts. As an alternative to this, we can use the [`tf.GradientTape`](https://www.tensorflow.org/api_docs/python/tf/GradientTape) class to record differentiation operations during training, and then call the [`tf.GradientTape.gradient`](https://www.tensorflow.org/api_docs/python/tf/GradientTapegradient) function to actually compute the gradients. You may recall seeing this in Lab 1 Part 1, but let's take another look at this here.We'll use this framework to train our `cnn_model` using stochastic gradient descent.
###Code
# Rebuild the CNN model
cnn_model = build_cnn_model()
batch_size = 12
loss_history = mdl.util.LossHistory(smoothing_factor=0.95) # to record the evolution of the loss
plotter = mdl.util.PeriodicPlotter(sec=2, xlabel='Iterations', ylabel='Loss', scale='semilogy')
optimizer = tf.keras.optimizers.SGD(learning_rate=1e-2) # define our optimizer
if hasattr(tqdm, '_instances'): tqdm._instances.clear() # clear if it exists
for idx in tqdm(range(0, train_images.shape[0], batch_size)):
# First grab a batch of training data and convert the input images to tensors
(images, labels) = (train_images[idx:idx+batch_size], train_labels[idx:idx+batch_size])
images = tf.convert_to_tensor(images, dtype=tf.float32)
# GradientTape to record differentiation operations
with tf.GradientTape() as tape:
#'''TODO: feed the images into the model and obtain the predictions'''
logits = cnn_model(images)
#'''TODO: compute the categorical cross entropy loss
loss_value = tf.keras.backend.sparse_categorical_crossentropy(labels, logits) # TODO
loss_history.append(loss_value.numpy().mean()) # append the loss to the loss_history record
plotter.plot(loss_history.get())
# Backpropagation
'''TODO: Use the tape to compute the gradient against all parameters in the CNN model.
Use cnn_model.trainable_variables to access these parameters.'''
grads = tape.gradient(loss_value, cnn_model.trainable_variables)
optimizer.apply_gradients(zip(grads, cnn_model.trainable_variables))
###Output
_____no_output_____
###Markdown
Visit MIT Deep Learning Run in Google Colab View Source on GitHub Copyright Information
###Code
# Copyright 2020 MIT 6.S191 Introduction to Deep Learning. All Rights Reserved.
#
# Licensed under the MIT License. You may not use this file except in compliance
# with the License. Use and/or modification of this code outside of 6.S191 must
# reference:
#
# © MIT 6.S191: Introduction to Deep Learning
# http://introtodeeplearning.com
#
###Output
_____no_output_____
###Markdown
Laboratory 2: Computer Vision Part 1: MNIST Digit ClassificationIn the first portion of this lab, we will build and train a convolutional neural network (CNN) for classification of handwritten digits from the famous [MNIST](http://yann.lecun.com/exdb/mnist/) dataset. The MNIST dataset consists of 60,000 training images and 10,000 test images. Our classes are the digits 0-9.First, let's download the course repository, install dependencies, and import the relevant packages we'll need for this lab.
###Code
# Import Tensorflow 2.0
%tensorflow_version 2.x
import tensorflow as tf
!pip install mitdeeplearning
import mitdeeplearning as mdl
import matplotlib.pyplot as plt
import numpy as np
import random
from tqdm import tqdm
# Check that we are using a GPU, if not switch runtimes
# using Runtime > Change Runtime Type > GPU
assert len(tf.config.list_physical_devices('GPU')) > 0
###Output
Collecting mitdeeplearning
[?25l Downloading https://files.pythonhosted.org/packages/8b/3b/b9174b68dc10832356d02a2d83a64b43a24f1762c172754407d22fc8f960/mitdeeplearning-0.1.2.tar.gz (2.1MB)
[K |████████████████████████████████| 2.1MB 21.2MB/s
[?25hRequirement already satisfied: numpy in /usr/local/lib/python3.6/dist-packages (from mitdeeplearning) (1.18.5)
Requirement already satisfied: regex in /usr/local/lib/python3.6/dist-packages (from mitdeeplearning) (2019.12.20)
Requirement already satisfied: tqdm in /usr/local/lib/python3.6/dist-packages (from mitdeeplearning) (4.41.1)
Requirement already satisfied: gym in /usr/local/lib/python3.6/dist-packages (from mitdeeplearning) (0.17.3)
Requirement already satisfied: cloudpickle<1.7.0,>=1.2.0 in /usr/local/lib/python3.6/dist-packages (from gym->mitdeeplearning) (1.3.0)
Requirement already satisfied: scipy in /usr/local/lib/python3.6/dist-packages (from gym->mitdeeplearning) (1.4.1)
Requirement already satisfied: pyglet<=1.5.0,>=1.4.0 in /usr/local/lib/python3.6/dist-packages (from gym->mitdeeplearning) (1.5.0)
Requirement already satisfied: future in /usr/local/lib/python3.6/dist-packages (from pyglet<=1.5.0,>=1.4.0->gym->mitdeeplearning) (0.16.0)
Building wheels for collected packages: mitdeeplearning
Building wheel for mitdeeplearning (setup.py) ... [?25l[?25hdone
Created wheel for mitdeeplearning: filename=mitdeeplearning-0.1.2-cp36-none-any.whl size=2114585 sha256=6bd64435b2218856a1e3c9e45c8fe5e5485636d49af4c630853abebe3da55f50
Stored in directory: /root/.cache/pip/wheels/27/e1/73/5f01c787621d8a3c857f59876c79e304b9b64db9ff5bd61b74
Successfully built mitdeeplearning
Installing collected packages: mitdeeplearning
Successfully installed mitdeeplearning-0.1.2
###Markdown
1.1 MNIST dataset Let's download and load the dataset and display a few random samples from it:
###Code
mnist = tf.keras.datasets.mnist
(train_images, train_labels), (test_images, test_labels) = mnist.load_data()
train_images = (np.expand_dims(train_images, axis=-1)/255.).astype(np.float32)
train_labels = (train_labels).astype(np.int64)
test_images = (np.expand_dims(test_images, axis=-1)/255.).astype(np.float32)
test_labels = (test_labels).astype(np.int64)
###Output
_____no_output_____
###Markdown
Our training set is made up of 28x28 grayscale images of handwritten digits. Let's visualize what some of these images and their corresponding training labels look like.
###Code
plt.figure(figsize=(10,10))
random_inds = np.random.choice(60000,36)
for i in range(36):
plt.subplot(6,6,i+1)
plt.xticks([])
plt.yticks([])
plt.grid(False)
image_ind = random_inds[i]
plt.imshow(np.squeeze(train_images[image_ind]), cmap=plt.cm.binary)
plt.xlabel(train_labels[image_ind])
###Output
_____no_output_____
###Markdown
1.2 Neural Network for Handwritten Digit ClassificationWe'll first build a simple neural network consisting of two fully connected layers and apply this to the digit classification task. Our network will ultimately output a probability distribution over the 10 digit classes (0-9). This first architecture we will be building is depicted below:![alt_text](https://raw.githubusercontent.com/aamini/introtodeeplearning/master/lab2/img/mnist_2layers_arch.png "CNN Architecture for MNIST Classification") Fully connected neural network architectureTo define the architecture of this first fully connected neural network, we'll once again use the Keras API and define the model using the [`Sequential`](https://www.tensorflow.org/api_docs/python/tf/keras/models/Sequential) class. Note how we first use a [`Flatten`](https://www.tensorflow.org/api_docs/python/tf/keras/layers/Flatten) layer, which flattens the input so that it can be fed into the model. In this next block, you'll define the fully connected layers of this simple network.
###Code
def build_fc_model():
fc_model = tf.keras.Sequential([
# First define a Flatten layer
tf.keras.layers.Flatten(),
# '''TODO: Define the activation function for the first fully connected (Dense) layer.'''
tf.keras.layers.Dense(128, activation=tf.nn.relu),
# '''TODO: Define the second Dense layer to output the classification probabilities'''
tf.keras.layers.Dense(10, activation=tf.nn.softmax)
])
return fc_model
model = build_fc_model()
###Output
_____no_output_____
###Markdown
As we progress through this next portion, you may find that you'll want to make changes to the architecture defined above. **Note that in order to update the model later on, you'll need to re-run the above cell to re-initialize the model.** Let's take a step back and think about the network we've just created. The first layer in this network, `tf.keras.layers.Flatten`, transforms the format of the images from a 2d-array (28 x 28 pixels), to a 1d-array of 28 * 28 = 784 pixels. You can think of this layer as unstacking rows of pixels in the image and lining them up. There are no learned parameters in this layer; it only reformats the data.After the pixels are flattened, the network consists of a sequence of two `tf.keras.layers.Dense` layers. These are fully-connected neural layers. The first `Dense` layer has 128 nodes (or neurons). The second (and last) layer (which you've defined!) should return an array of probability scores that sum to 1. Each node contains a score that indicates the probability that the current image belongs to one of the handwritten digit classes.That defines our fully connected model! Compile the modelBefore training the model, we need to define a few more settings. These are added during the model's [`compile`](https://www.tensorflow.org/api_docs/python/tf/keras/models/Sequentialcompile) step:* *Loss function* — This defines how we measure how accurate the model is during training. As was covered in lecture, during training we want to minimize this function, which will "steer" the model in the right direction.* *Optimizer* — This defines how the model is updated based on the data it sees and its loss function.* *Metrics* — Here we can define metrics used to monitor the training and testing steps. In this example, we'll look at the *accuracy*, the fraction of the images that are correctly classified.We'll start out by using a stochastic gradient descent (SGD) optimizer initialized with a learning rate of 0.1. Since we are performing a categorical classification task, we'll want to use the [cross entropy loss](https://www.tensorflow.org/api_docs/python/tf/keras/metrics/sparse_categorical_crossentropy).You'll want to experiment with both the choice of optimizer and learning rate and evaluate how these affect the accuracy of the trained model.
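To make "probability scores that sum to 1" concrete (an illustrative sketch with made-up logits, separate from the model above): softmax exponentiates the scores and normalizes them.
###Code
# Softmax turns arbitrary scores into a probability distribution (illustrative values)
example_logits = tf.constant([[2.0, 1.0, 0.1]])
probs = tf.nn.softmax(example_logits)
print(probs.numpy())                 # approx [[0.659 0.242 0.099]]
print(tf.reduce_sum(probs).numpy())  # 1.0
###Output
_____no_output_____
###Markdown
Now compile the model: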
###Code
'''TODO: Experiment with different optimizers and learning rates. How do these affect
the accuracy of the trained model? Which optimizers and/or learning rates yield
the best performance?'''
model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=1e-1),
loss='sparse_categorical_crossentropy',
metrics=['accuracy'])
###Output
_____no_output_____
###Markdown
Train the modelWe're now ready to train our model, which will involve feeding the training data (`train_images` and `train_labels`) into the model, and then asking it to learn the associations between images and labels. We'll also need to define the batch size and the number of epochs, or iterations over the MNIST dataset, to use during training. In Lab 1, we saw how we can use `GradientTape` to optimize losses and train models with stochastic gradient descent. After defining the model settings in the `compile` step, we can also accomplish training by calling the [`fit`](https://www.tensorflow.org/api_docs/python/tf/keras/models/Sequentialfit) method on an instance of the `Model` class. We will use this to train our fully connected model.
###Code
# Define the batch size and the number of epochs to use during training
BATCH_SIZE = 64
EPOCHS = 5
model.fit(train_images, train_labels, batch_size=BATCH_SIZE, epochs=EPOCHS)
###Output
Epoch 1/5
938/938 [==============================] - 2s 2ms/step - loss: 0.3630 - accuracy: 0.8982
Epoch 2/5
938/938 [==============================] - 2s 2ms/step - loss: 0.1964 - accuracy: 0.9437
Epoch 3/5
938/938 [==============================] - 2s 2ms/step - loss: 0.1493 - accuracy: 0.9572
Epoch 4/5
938/938 [==============================] - 2s 2ms/step - loss: 0.1204 - accuracy: 0.9656
Epoch 5/5
938/938 [==============================] - 2s 2ms/step - loss: 0.1019 - accuracy: 0.9707
###Markdown
As the model trains, the loss and accuracy metrics are displayed. With five epochs and a learning rate of 0.01, this fully connected model should achieve an accuracy of approximately 0.97 (or 97%) on the training data. Evaluate accuracy on the test datasetNow that we've trained the model, we can ask it to make predictions about a test set that it hasn't seen before. In this example, the `test_images` array comprises our test dataset. To evaluate accuracy, we can check to see if the model's predictions match the labels from the `test_labels` array. Use the [`evaluate`](https://www.tensorflow.org/api_docs/python/tf/keras/models/Sequentialevaluate) method to evaluate the model on the test dataset!
###Code
'''TODO: Use the evaluate method to test the model!'''
test_loss, test_acc = model.evaluate(test_images, test_labels)
print('Test accuracy:', test_acc)
###Output
313/313 [==============================] - 0s 1ms/step - loss: 0.1052 - accuracy: 0.9682
Test accuracy: 0.9682000279426575
###Markdown
You may observe that the accuracy on the test dataset is a little lower than the accuracy on the training dataset. This gap between training accuracy and test accuracy is an example of *overfitting*, when a machine learning model performs worse on new data than on its training data. What is the highest accuracy you can achieve with this first fully connected model? Since the handwritten digit classification task is pretty straightforward, you may be wondering how we can do better...![Deeper...](https://i.kym-cdn.com/photos/images/newsfeed/000/534/153/f87.jpg) 1.3 Convolutional Neural Network (CNN) for handwritten digit classification As we saw in lecture, convolutional neural networks (CNNs) are particularly well-suited for a variety of tasks in computer vision, and have achieved near-perfect accuracies on the MNIST dataset. We will now build a CNN composed of two convolutional layers and pooling layers, followed by two fully connected layers, and ultimately output a probability distribution over the 10 digit classes (0-9). The CNN we will be building is depicted below:![alt_text](https://raw.githubusercontent.com/aamini/introtodeeplearning/master/lab2/img/convnet_fig.png "CNN Architecture for MNIST Classification") Define the CNN modelWe'll use the same training and test datasets as before, and proceed similarly to our fully connected network to define and train our new CNN model. To do this we will explore two layers we have not encountered before: you can use [`keras.layers.Conv2D`](https://www.tensorflow.org/api_docs/python/tf/keras/layers/Conv2D) to define convolutional layers and [`keras.layers.MaxPool2D`](https://www.tensorflow.org/api_docs/python/tf/keras/layers/MaxPool2D) to define the pooling layers. Use the parameters shown in the network architecture above to define these layers and build the CNN model.
###Code
def build_cnn_model():
cnn_model = tf.keras.Sequential([
# TODO: Define the first convolutional layer
tf.keras.layers.Conv2D(filters=24, kernel_size=(3,3), activation=tf.nn.relu),
# TODO: Define the first max pooling layer
tf.keras.layers.MaxPool2D(pool_size=(2,2)),
# TODO: Define the second convolutional layer
tf.keras.layers.Conv2D(filters=36, kernel_size=(3,3), activation=tf.nn.relu),
# TODO: Define the second max pooling layer
tf.keras.layers.MaxPool2D(pool_size=(2,2)),
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(128, activation=tf.nn.relu),
# TODO: Define the last Dense layer to output the classification
# probabilities. Pay attention to the activation needed for a probability
# output
tf.keras.layers.Dense(10, activation=tf.nn.softmax),
])
return cnn_model
cnn_model = build_cnn_model()
# Initialize the model by passing some data through
cnn_model.predict(train_images[[0]])
# Print the summary of the layers in the model.
print(cnn_model.summary())
###Output
Model: "sequential_2"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
conv2d (Conv2D) (None, 26, 26, 24) 240
_________________________________________________________________
max_pooling2d (MaxPooling2D) (None, 13, 13, 24) 0
_________________________________________________________________
conv2d_1 (Conv2D) (None, 11, 11, 36) 7812
_________________________________________________________________
max_pooling2d_1 (MaxPooling2 (None, 5, 5, 36) 0
_________________________________________________________________
flatten_2 (Flatten) (None, 900) 0
_________________________________________________________________
dense_4 (Dense) (None, 128) 115328
_________________________________________________________________
dense_5 (Dense) (None, 10) 1290
=================================================================
Total params: 124,670
Trainable params: 124,670
Non-trainable params: 0
_________________________________________________________________
None
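###Markdown
The output shapes above follow the usual 'valid' convolution and pooling arithmetic (a quick hand check against the printed summary):
###Code
# 'valid' conv with stride 1: out = in - kernel + 1; 2x2 max pool: out = in // 2
print(28 - 3 + 1)   # conv1   -> 26
print(26 // 2)      # pool1   -> 13
print(13 - 3 + 1)   # conv2   -> 11
print(11 // 2)      # pool2   -> 5
print(5 * 5 * 36)   # flatten -> 900
###Output
_____no_output_____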
###Markdown
Train and test the CNN modelNow, as before, we can define the loss function, optimizer, and metrics through the `compile` method. Compile the CNN model with an optimizer and learning rate of choice:
###Code
'''TODO: Define the compile operation with your optimizer and learning rate of choice'''
cnn_model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=1e-3), loss='sparse_categorical_crossentropy', metrics=['accuracy']) # TODO
###Output
_____no_output_____
###Markdown
As was the case with the fully connected model, we can train our CNN using the `fit` method via the Keras API.
###Code
'''TODO: Use model.fit to train the CNN model, with the same batch_size and number of epochs previously used.'''
cnn_model.fit(x=train_images, y=train_labels, batch_size=BATCH_SIZE, epochs=EPOCHS)
###Output
Epoch 1/5
938/938 [==============================] - 2s 3ms/step - loss: 0.0138 - accuracy: 0.9958
Epoch 2/5
938/938 [==============================] - 3s 3ms/step - loss: 0.0122 - accuracy: 0.9966
Epoch 3/5
938/938 [==============================] - 2s 3ms/step - loss: 0.0114 - accuracy: 0.9968
Epoch 4/5
938/938 [==============================] - 2s 3ms/step - loss: 0.0110 - accuracy: 0.9970
Epoch 5/5
938/938 [==============================] - 2s 3ms/step - loss: 0.0107 - accuracy: 0.9971
###Markdown
Great! Now that we've trained the model, let's evaluate it on the test dataset using the [`evaluate`](https://www.tensorflow.org/api_docs/python/tf/keras/models/Sequentialevaluate) method:
###Code
'''TODO: Use the evaluate method to test the model!'''
test_loss, test_acc = cnn_model.evaluate(x=test_images, y=test_labels)  # evaluate the CNN, not the earlier fully connected model
print('Test accuracy:', test_acc)
###Output
313/313 [==============================] - 1s 2ms/step - loss: 0.1052 - accuracy: 0.9682
Test accuracy: 0.9682000279426575
###Markdown
What is the highest accuracy you're able to achieve using the CNN model, and how does the accuracy of the CNN model compare to the accuracy of the simple fully connected network? What optimizers and learning rates seem to be optimal for training the CNN model? Make predictions with the CNN modelWith the model trained, we can use it to make predictions about some images. The [`predict`](https://www.tensorflow.org/api_docs/python/tf/keras/models/Sequentialpredict) function call generates the output predictions given a set of input samples.
###Code
predictions = cnn_model.predict(test_images)
###Output
_____no_output_____
###Markdown
With this function call, the model has predicted the label for each image in the testing set. Let's take a look at the prediction for the first image in the test dataset:
###Code
predictions[0]
###Output
_____no_output_____
###Markdown
As you can see, a prediction is an array of 10 numbers. Recall that the output of our model is a probability distribution over the 10 digit classes. Thus, these numbers describe the model's "confidence" that the image corresponds to each of the 10 different digits. Let's look at the digit that has the highest confidence for the first image in the test dataset:
###Code
'''TODO: identify the digit with the highest confidence prediction for the first
image in the test dataset. '''
prediction = np.argmax(predictions[0])
print(prediction)
###Output
7
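###Markdown
Since the prediction printed above is one entry of a softmax output, here is a minimal sketch (an addition, not part of the lab) verifying that each prediction really is a probability distribution:
###Code
# Sanity check: 10 non-negative class scores that sum to ~1 (softmax output)
print(predictions[0].shape)  # (10,)
print(predictions[0].sum())  # ~1.0
###Output
_____no_output_____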
###Markdown
So, the model is most confident that this image is a "7". We can check the test label (remember, this is the true identity of the digit) to see if this prediction is correct:
###Code
print("Label of this digit is:", test_labels[0])
plt.imshow(test_images[0,:,:,0], cmap=plt.cm.binary)
###Output
Label of this digit is: 7
###Markdown
It is! Let's visualize the classification results on the MNIST dataset. We will plot images from the test dataset along with their predicted label, as well as a histogram that provides the prediction probabilities for each of the digits:
###Code
#@title Change the slider to look at the model's predictions! { run: "auto" }
image_index = 42 #@param {type:"slider", min:0, max:100, step:1}
plt.subplot(1,2,1)
mdl.lab2.plot_image_prediction(image_index, predictions, test_labels, test_images)
plt.subplot(1,2,2)
mdl.lab2.plot_value_prediction(image_index, predictions, test_labels)
###Output
_____no_output_____
###Markdown
We can also plot several images along with their predictions, where correct prediction labels are blue and incorrect prediction labels are red. The number gives the percent confidence (out of 100) for the predicted label. Note the model can be very confident in an incorrect prediction!
###Code
# Plots the first X test images, their predicted label, and the true label
# Color correct predictions in blue, incorrect predictions in red
num_rows = 5
num_cols = 4
num_images = num_rows*num_cols
plt.figure(figsize=(2*2*num_cols, 2*num_rows))
for i in range(num_images):
plt.subplot(num_rows, 2*num_cols, 2*i+1)
mdl.lab2.plot_image_prediction(i, predictions, test_labels, test_images)
plt.subplot(num_rows, 2*num_cols, 2*i+2)
mdl.lab2.plot_value_prediction(i, predictions, test_labels)
###Output
_____no_output_____
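###Markdown
To find the confidently-wrong cases mentioned above directly, a minimal sketch (an addition, not part of the lab) that lists the test indices the model misclassified:
###Code
# Indices where the predicted class disagrees with the true label
predicted_classes = np.argmax(predictions, axis=1)
misclassified = np.where(predicted_classes != test_labels)[0]
print(len(misclassified), "misclassified test images; first few indices:", misclassified[:10])
###Output
_____no_output_____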
###Markdown
1.4 Training the model 2.0Earlier in the lab, we used the [`fit`](https://www.tensorflow.org/api_docs/python/tf/keras/models/Sequential#fit) function call to train the model. This function is quite high-level and intuitive, which is really useful for simpler models. As you may be able to tell, this function abstracts away many details of the training call, giving us less control over training the model; that finer control can be useful in other contexts. As an alternative, we can use the [`tf.GradientTape`](https://www.tensorflow.org/api_docs/python/tf/GradientTape) class to record differentiation operations during training, and then call the [`tf.GradientTape.gradient`](https://www.tensorflow.org/api_docs/python/tf/GradientTape#gradient) function to actually compute the gradients. You may recall seeing this in Lab 1 Part 1, but let's take another look at this here.We'll use this framework to train our `cnn_model` using stochastic gradient descent.
###Code
# Rebuild the CNN model
cnn_model = build_cnn_model()
batch_size = 12
loss_history = mdl.util.LossHistory(smoothing_factor=0.95) # to record the evolution of the loss
plotter = mdl.util.PeriodicPlotter(sec=2, xlabel='Iterations', ylabel='Loss', scale='semilogy')
optimizer = tf.keras.optimizers.SGD(learning_rate=1e-2) # define our optimizer
if hasattr(tqdm, '_instances'): tqdm._instances.clear() # clear if it exists
for idx in tqdm(range(0, train_images.shape[0], batch_size)):
# First grab a batch of training data and convert the input images to tensors
(images, labels) = (train_images[idx:idx+batch_size], train_labels[idx:idx+batch_size])
images = tf.convert_to_tensor(images, dtype=tf.float32)
# GradientTape to record differentiation operations
with tf.GradientTape() as tape:
#'''TODO: feed the images into the model and obtain the predictions'''
logits = cnn_model(images)  # call the model directly rather than .call() so Keras bookkeeping runs
#'''TODO: compute the categorical cross entropy loss
loss_value = tf.keras.backend.sparse_categorical_crossentropy(labels, logits) # TODO
loss_history.append(loss_value.numpy().mean()) # append the loss to the loss_history record
plotter.plot(loss_history.get())
# Backpropagation
'''TODO: Use the tape to compute the gradient against all parameters in the CNN model.
Use cnn_model.trainable_variables to access these parameters.'''
grads = tape.gradient(loss_value, cnn_model.trainable_variables)
optimizer.apply_gradients(zip(grads, cnn_model.trainable_variables))
###Output
_____no_output_____
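###Markdown
To see the tape mechanics above in isolation, here is a minimal toy sketch (an addition, not part of the lab) of recording a computation and reading back a gradient:
###Code
# Toy example: d(x^2)/dx evaluated at x = 3.0 is 6.0
x = tf.Variable(3.0)
with tf.GradientTape() as tape:
    y = x * x
print(tape.gradient(y, x).numpy())  # 6.0
###Output
_____no_output_____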
###Markdown
Visit MIT Deep Learning Run in Google Colab View Source on GitHub Copyright Information
###Code
# Copyright 2020 MIT 6.S191 Introduction to Deep Learning. All Rights Reserved.
#
# Licensed under the MIT License. You may not use this file except in compliance
# with the License. Use and/or modification of this code outside of 6.S191 must
# reference:
#
# © MIT 6.S191: Introduction to Deep Learning
# http://introtodeeplearning.com
#
###Output
_____no_output_____
###Markdown
Laboratory 2: Computer Vision Part 1: MNIST Digit ClassificationIn the first portion of this lab, we will build and train a convolutional neural network (CNN) for classification of handwritten digits from the famous [MNIST](http://yann.lecun.com/exdb/mnist/) dataset. The MNIST dataset consists of 60,000 training images and 10,000 test images. Our classes are the digits 0-9.First, let's download the course repository, install dependencies, and import the relevant packages we'll need for this lab.
###Code
# Import Tensorflow 2.0
%tensorflow_version 2.x
import tensorflow as tf
!pip install mitdeeplearning
import mitdeeplearning as mdl
import matplotlib.pyplot as plt
import numpy as np
import random
from tqdm import tqdm
# Check that we are using a GPU, if not switch runtimes
# using Runtime > Change Runtime Type > GPU
assert len(tf.config.list_physical_devices('GPU')) > 0
###Output
TensorFlow 2.x selected.
Collecting mitdeeplearning
  Downloading https://files.pythonhosted.org/packages/8b/3b/b9174b68dc10832356d02a2d83a64b43a24f1762c172754407d22fc8f960/mitdeeplearning-0.1.2.tar.gz (2.1MB)
     |████████████████████████████████| 2.1MB 2.8MB/s
Requirement already satisfied: numpy in /tensorflow-2.1.0/python3.6 (from mitdeeplearning) (1.18.1)
Requirement already satisfied: regex in /usr/local/lib/python3.6/dist-packages (from mitdeeplearning) (2019.12.20)
Requirement already satisfied: tqdm in /usr/local/lib/python3.6/dist-packages (from mitdeeplearning) (4.28.1)
Requirement already satisfied: gym in /usr/local/lib/python3.6/dist-packages (from mitdeeplearning) (0.15.6)
Requirement already satisfied: pyglet<=1.5.0,>=1.4.0 in /usr/local/lib/python3.6/dist-packages (from gym->mitdeeplearning) (1.4.10)
Requirement already satisfied: scipy in /tensorflow-2.1.0/python3.6 (from gym->mitdeeplearning) (1.4.1)
Requirement already satisfied: cloudpickle~=1.2.0 in /usr/local/lib/python3.6/dist-packages (from gym->mitdeeplearning) (1.2.2)
Requirement already satisfied: six in /tensorflow-2.1.0/python3.6 (from gym->mitdeeplearning) (1.14.0)
Requirement already satisfied: future in /usr/local/lib/python3.6/dist-packages (from pyglet<=1.5.0,>=1.4.0->gym->mitdeeplearning) (0.16.0)
Building wheels for collected packages: mitdeeplearning
Building wheel for mitdeeplearning (setup.py) ... done
Created wheel for mitdeeplearning: filename=mitdeeplearning-0.1.2-cp36-none-any.whl size=2114586 sha256=1fc56845e3d197d62e97abd897898c7241bca00c86eb5129546c07e1acf27db3
Stored in directory: /root/.cache/pip/wheels/27/e1/73/5f01c787621d8a3c857f59876c79e304b9b64db9ff5bd61b74
Successfully built mitdeeplearning
Installing collected packages: mitdeeplearning
Successfully installed mitdeeplearning-0.1.2
###Markdown
1.1 MNIST dataset Let's download and load the dataset and display a few random samples from it:
###Code
mnist = tf.keras.datasets.mnist
(train_images, train_labels), (test_images, test_labels) = mnist.load_data()
train_images = (np.expand_dims(train_images, axis=-1)/255.).astype(np.float32)
train_labels = (train_labels).astype(np.int64)
test_images = (np.expand_dims(test_images, axis=-1)/255.).astype(np.float32)
test_labels = (test_labels).astype(np.int64)
###Output
Downloading data from https://storage.googleapis.com/tensorflow/tf-keras-datasets/mnist.npz
11493376/11490434 [==============================] - 0s 0us/step
###Markdown
Our training set is made up of 28x28 grayscale images of handwritten digits. Let's visualize what some of these images and their corresponding training labels look like.
###Code
plt.figure(figsize=(10,10))
random_inds = np.random.choice(60000,36)
for i in range(36):
plt.subplot(6,6,i+1)
plt.xticks([])
plt.yticks([])
plt.grid(False)
image_ind = random_inds[i]
plt.imshow(np.squeeze(train_images[image_ind]), cmap=plt.cm.binary)
plt.xlabel(train_labels[image_ind])
###Output
_____no_output_____
###Markdown
1.2 Neural Network for Handwritten Digit ClassificationWe'll first build a simple neural network consisting of two fully connected layers and apply this to the digit classification task. Our network will ultimately output a probability distribution over the 10 digit classes (0-9). This first architecture we will be building is depicted below:![alt_text](https://raw.githubusercontent.com/aamini/introtodeeplearning/master/lab2/img/mnist_2layers_arch.png "CNN Architecture for MNIST Classification") Fully connected neural network architectureTo define the architecture of this first fully connected neural network, we'll once again use the Keras API and define the model using the [`Sequential`](https://www.tensorflow.org/api_docs/python/tf/keras/models/Sequential) class. Note how we first use a [`Flatten`](https://www.tensorflow.org/api_docs/python/tf/keras/layers/Flatten) layer, which flattens the input so that it can be fed into the model. In this next block, you'll define the fully connected layers of this simple network.
###Code
def build_fc_model():
fc_model = tf.keras.Sequential([
# First define a Flatten layer
tf.keras.layers.Flatten(),
# '''TODO: Define the activation function for the first fully connected (Dense) layer.'''
tf.keras.layers.Dense(128, activation= 'relu'),
# '''TODO: Define the second Dense layer to output the classification probabilities'''
tf.keras.layers.Dense(10, activation= 'softmax')
])
return fc_model
model = build_fc_model()
###Output
_____no_output_____
###Markdown
As we progress through this next portion, you may find that you'll want to make changes to the architecture defined above. **Note that in order to update the model later on, you'll need to re-run the above cell to re-initialize the model.** Let's take a step back and think about the network we've just created. The first layer in this network, `tf.keras.layers.Flatten`, transforms the format of the images from a 2d-array (28 x 28 pixels), to a 1d-array of 28 * 28 = 784 pixels. You can think of this layer as unstacking rows of pixels in the image and lining them up. There are no learned parameters in this layer; it only reformats the data.After the pixels are flattened, the network consists of a sequence of two `tf.keras.layers.Dense` layers. These are fully-connected neural layers. The first `Dense` layer has 128 nodes (or neurons). The second (and last) layer (which you've defined!) should return an array of probability scores that sum to 1. Each node contains a score that indicates the probability that the current image belongs to one of the handwritten digit classes.That defines our fully connected model! Compile the modelBefore training the model, we need to define a few more settings. These are added during the model's [`compile`](https://www.tensorflow.org/api_docs/python/tf/keras/models/Sequential#compile) step:* *Loss function* — This defines how we measure how accurate the model is during training. As was covered in lecture, during training we want to minimize this function, which will "steer" the model in the right direction.* *Optimizer* — This defines how the model is updated based on the data it sees and its loss function.* *Metrics* — Here we can define metrics used to monitor the training and testing steps. In this example, we'll look at the *accuracy*, the fraction of the images that are correctly classified.We'll start out by using a stochastic gradient descent (SGD) optimizer initialized with a learning rate of 0.1. Since we are performing a categorical classification task, we'll want to use the [cross entropy loss](https://www.tensorflow.org/api_docs/python/tf/keras/metrics/sparse_categorical_crossentropy).You'll want to experiment with both the choice of optimizer and learning rate and evaluate how these affect the accuracy of the trained model.
###Code
'''TODO: Experiment with different optimizers and learning rates. How do these affect
the accuracy of the trained model? Which optimizers and/or learning rates yield
the best performance?'''
model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=1e-1),
loss='sparse_categorical_crossentropy',
metrics=['accuracy'])
###Output
_____no_output_____
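###Markdown
As one way to run the experiment suggested in the TODO above, here is a minimal sketch (an addition, not part of the lab) that rebuilds the network and recompiles it with the Adam optimizer instead of SGD; `model_adam` is a hypothetical name introduced here so the original `model` is left untouched:
###Code
# Hypothetical comparison run: Adam instead of SGD (rebuild first so weights start fresh)
model_adam = build_fc_model()
model_adam.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3),
                   loss='sparse_categorical_crossentropy',
                   metrics=['accuracy'])
###Output
_____no_output_____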
###Markdown
Train the modelWe're now ready to train our model, which will involve feeding the training data (`train_images` and `train_labels`) into the model, and then asking it to learn the associations between images and labels. We'll also need to define the batch size and the number of epochs, or iterations over the MNIST dataset, to use during training. In Lab 1, we saw how we can use `GradientTape` to optimize losses and train models with stochastic gradient descent. After defining the model settings in the `compile` step, we can also accomplish training by calling the [`fit`](https://www.tensorflow.org/api_docs/python/tf/keras/models/Sequential#fit) method on an instance of the `Model` class. We will use this to train our fully connected model.
###Code
# Define the batch size and the number of epochs to use during training
BATCH_SIZE = 64
EPOCHS = 5
model.fit(train_images, train_labels, batch_size=BATCH_SIZE, epochs=EPOCHS)
###Output
Train on 60000 samples
Epoch 1/5
60000/60000 [==============================] - 4s 70us/sample - loss: 0.3772 - accuracy: 0.8952
Epoch 2/5
60000/60000 [==============================] - 2s 36us/sample - loss: 0.2026 - accuracy: 0.9421
Epoch 3/5
60000/60000 [==============================] - 2s 37us/sample - loss: 0.1517 - accuracy: 0.9568
Epoch 4/5
60000/60000 [==============================] - 2s 37us/sample - loss: 0.1216 - accuracy: 0.9656
Epoch 5/5
60000/60000 [==============================] - 2s 37us/sample - loss: 0.1012 - accuracy: 0.9717
###Markdown
As the model trains, the loss and accuracy metrics are displayed. With five epochs and a learning rate of 0.1, this fully connected model should achieve an accuracy of approximately 0.97 (or 97%) on the training data. Evaluate accuracy on the test datasetNow that we've trained the model, we can ask it to make predictions about a test set that it hasn't seen before. In this example, the `test_images` array comprises our test dataset. To evaluate accuracy, we can check to see if the model's predictions match the labels from the `test_labels` array. Use the [`evaluate`](https://www.tensorflow.org/api_docs/python/tf/keras/models/Sequential#evaluate) method to evaluate the model on the test dataset!
###Code
'''TODO: Use the evaluate method to test the model!'''
test_loss, test_acc = model.evaluate(test_images, test_labels)
print('Test accuracy:', test_acc)
###Output
10000/10000 [==============================] - 1s 65us/sample - loss: 0.1032 - accuracy: 0.9691
Test accuracy: 0.9691
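###Markdown
One way to watch the train/test gap discussed below while training, rather than only measuring it afterwards, is to pass held-out data to `fit` so Keras reports validation metrics after every epoch. A minimal sketch (an addition, not part of the lab; in practice you would carve a validation split out of the training data rather than reusing the test set):
###Code
# Hypothetical variant of the fit call above, reporting validation metrics per epoch
model.fit(train_images, train_labels, batch_size=BATCH_SIZE, epochs=EPOCHS,
          validation_data=(test_images, test_labels))
###Output
_____no_output_____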
###Markdown
You may observe that the accuracy on the test dataset is a little lower than the accuracy on the training dataset. This gap between training accuracy and test accuracy is an example of *overfitting*, when a machine learning model performs worse on new data than on its training data. What is the highest accuracy you can achieve with this first fully connected model? Since the handwritten digit classification task is pretty straightforward, you may be wondering how we can do better...![Deeper...](https://i.kym-cdn.com/photos/images/newsfeed/000/534/153/f87.jpg) 1.3 Convolutional Neural Network (CNN) for handwritten digit classification As we saw in lecture, convolutional neural networks (CNNs) are particularly well-suited for a variety of tasks in computer vision, and have achieved near-perfect accuracies on the MNIST dataset. We will now build a CNN composed of two convolutional layers and pooling layers, followed by two fully connected layers, and ultimately output a probability distribution over the 10 digit classes (0-9). The CNN we will be building is depicted below:![alt_text](https://raw.githubusercontent.com/aamini/introtodeeplearning/master/lab2/img/convnet_fig.png "CNN Architecture for MNIST Classification") Define the CNN modelWe'll use the same training and test datasets as before, and proceed similarly to our fully connected network to define and train our new CNN model. To do this we will explore two layers we have not encountered before: you can use [`keras.layers.Conv2D`](https://www.tensorflow.org/api_docs/python/tf/keras/layers/Conv2D) to define convolutional layers and [`keras.layers.MaxPool2D`](https://www.tensorflow.org/api_docs/python/tf/keras/layers/MaxPool2D) to define the pooling layers. Use the parameters shown in the network architecture above to define these layers and build the CNN model.
###Code
def build_cnn_model():
cnn_model = tf.keras.Sequential([
# TODO: Define the first convolutional layer
tf.keras.layers.Conv2D(filters=24, kernel_size=(3,3), activation='relu'),
# TODO: Define the first max pooling layer
tf.keras.layers.MaxPool2D(pool_size=(2, 2)),
# TODO: Define the second convolutional layer
tf.keras.layers.Conv2D(filters=36, kernel_size=(3,3), activation='relu'),
# TODO: Define the second max pooling layer
tf.keras.layers.MaxPool2D(pool_size=(2, 2)),
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(128, activation=tf.nn.relu),
# TODO: Define the last Dense layer to output the classification
# probabilities. Pay attention to the activation needed for a probability
# output
tf.keras.layers.Dense(10, activation='softmax')
])
return cnn_model
cnn_model = build_cnn_model()
# Initialize the model by passing some data through
cnn_model.predict(train_images[[0]])
# Print the summary of the layers in the model.
print(cnn_model.summary())
###Output
Model: "sequential_1"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
conv2d (Conv2D) multiple 240
_________________________________________________________________
max_pooling2d (MaxPooling2D) multiple 0
_________________________________________________________________
conv2d_1 (Conv2D) multiple 7812
_________________________________________________________________
max_pooling2d_1 (MaxPooling2 multiple 0
_________________________________________________________________
flatten_1 (Flatten) multiple 0
_________________________________________________________________
dense_2 (Dense) multiple 115328
_________________________________________________________________
dense_3 (Dense) multiple 1290
=================================================================
Total params: 124,670
Trainable params: 124,670
Non-trainable params: 0
_________________________________________________________________
None
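###Markdown
As a quick check on the shapes in the summary above (an added note, not part of the lab): a 3x3 convolution with no padding maps a 28x28 input to 28 - 3 + 1 = 26x26, a 2x2 max pool halves that to 13x13, the second convolution gives 13 - 3 + 1 = 11x11, the second pool gives 5x5, and flattening the 5*5*36 feature maps yields the 900 units feeding the first Dense layer. The parameter counts follow the same logic: the first conv layer has (3*3*1 + 1)*24 = 240 parameters, the second has (3*3*24 + 1)*36 = 7812, and the first Dense layer has (900 + 1)*128 = 115328.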
###Markdown
Train and test the CNN modelNow, as before, we can define the loss function, optimizer, and metrics through the `compile` method. Compile the CNN model with an optimizer and learning rate of choice:
###Code
'''TODO: Define the compile operation with your optimizer and learning rate of choice'''
cnn_model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy']) # TODO
###Output
_____no_output_____
###Markdown
As was the case with the fully connected model, we can train our CNN using the `fit` method via the Keras API.
###Code
'''TODO: Use model.fit to train the CNN model, with the same batch_size and number of epochs previously used.'''
cnn_model.fit(train_images, train_labels, batch_size=BATCH_SIZE, epochs=EPOCHS)
###Output
Train on 60000 samples
Epoch 1/5
60000/60000 [==============================] - 4s 59us/sample - loss: 0.1761 - accuracy: 0.9474
Epoch 2/5
60000/60000 [==============================] - 3s 49us/sample - loss: 0.0532 - accuracy: 0.9832
Epoch 3/5
60000/60000 [==============================] - 3s 53us/sample - loss: 0.0359 - accuracy: 0.9891
Epoch 4/5
60000/60000 [==============================] - 3s 52us/sample - loss: 0.0278 - accuracy: 0.9912
Epoch 5/5
60000/60000 [==============================] - 3s 51us/sample - loss: 0.0208 - accuracy: 0.9933
###Markdown
Great! Now that we've trained the model, let's evaluate it on the test dataset using the [`evaluate`](https://www.tensorflow.org/api_docs/python/tf/keras/models/Sequential#evaluate) method:
###Code
'''TODO: Use the evaluate method to test the model!'''
test_loss, test_acc = cnn_model.evaluate(test_images, test_labels)
print('Test accuracy:', test_acc)
###Output
10000/10000 [==============================] - 1s 76us/sample - loss: 0.0324 - accuracy: 0.9900
Test accuracy: 0.99
###Markdown
What is the highest accuracy you're able to achieve using the CNN model, and how does the accuracy of the CNN model compare to the accuracy of the simple fully connected network? What optimizers and learning rates seem to be optimal for training the CNN model? Make predictions with the CNN modelWith the model trained, we can use it to make predictions about some images. The [`predict`](https://www.tensorflow.org/api_docs/python/tf/keras/models/Sequential#predict) function call generates the output predictions given a set of input samples.
###Code
predictions = cnn_model.predict(test_images)
###Output
_____no_output_____
###Markdown
With this function call, the model has predicted the label for each image in the testing set. Let's take a look at the prediction for the first image in the test dataset:
###Code
predictions[0]
###Output
_____no_output_____
###Markdown
As you can see, a prediction is an array of 10 numbers. Recall that the output of our model is a probability distribution over the 10 digit classes. Thus, these numbers describe the model's "confidence" that the image corresponds to each of the 10 different digits. Let's look at the digit that has the highest confidence for the first image in the test dataset:
###Code
'''TODO: identify the digit with the highest confidence prediction for the first
image in the test dataset. '''
prediction = np.argmax(predictions[0]) # TODO
print(prediction)
###Output
7
###Markdown
So, the model is most confident that this image is a "7". We can check the test label (remember, this is the true identity of the digit) to see if this prediction is correct:
###Code
print("Label of this digit is:", test_labels[0])
plt.imshow(test_images[0,:,:,0], cmap=plt.cm.binary)
###Output
Label of this digit is: 7
###Markdown
It is! Let's visualize the classification results on the MNIST dataset. We will plot images from the test dataset along with their predicted label, as well as a histogram that provides the prediction probabilities for each of the digits:
###Code
#@title Change the slider to look at the model's predictions! { run: "auto" }
image_index = 6 #@param {type:"slider", min:0, max:100, step:1}
plt.subplot(1,2,1)
mdl.lab2.plot_image_prediction(image_index, predictions, test_labels, test_images)
plt.subplot(1,2,2)
mdl.lab2.plot_value_prediction(image_index, predictions, test_labels)
###Output
_____no_output_____
###Markdown
We can also plot several images along with their predictions, where correct prediction labels are blue and incorrect prediction labels are red. The number gives the percent confidence (out of 100) for the predicted label. Note the model can be very confident in an incorrect prediction!
###Code
# Plots the first X test images, their predicted label, and the true label
# Color correct predictions in blue, incorrect predictions in red
num_rows = 5
num_cols = 4
num_images = num_rows*num_cols
plt.figure(figsize=(2*2*num_cols, 2*num_rows))
for i in range(num_images):
plt.subplot(num_rows, 2*num_cols, 2*i+1)
mdl.lab2.plot_image_prediction(i, predictions, test_labels, test_images)
plt.subplot(num_rows, 2*num_cols, 2*i+2)
mdl.lab2.plot_value_prediction(i, predictions, test_labels)
###Output
_____no_output_____
###Markdown
1.4 Training the model 2.0Earlier in the lab, we used the [`fit`](https://www.tensorflow.org/api_docs/python/tf/keras/models/Sequential#fit) function call to train the model. This function is quite high-level and intuitive, which is really useful for simpler models. As you may be able to tell, this function abstracts away many details of the training call, giving us less control over training the model; that finer control can be useful in other contexts. As an alternative, we can use the [`tf.GradientTape`](https://www.tensorflow.org/api_docs/python/tf/GradientTape) class to record differentiation operations during training, and then call the [`tf.GradientTape.gradient`](https://www.tensorflow.org/api_docs/python/tf/GradientTape#gradient) function to actually compute the gradients. You may recall seeing this in Lab 1 Part 1, but let's take another look at this here.We'll use this framework to train our `cnn_model` using stochastic gradient descent.
###Code
# Rebuild the CNN model
cnn_model = build_cnn_model()
batch_size = 12
loss_history = mdl.util.LossHistory(smoothing_factor=0.95) # to record the evolution of the loss
plotter = mdl.util.PeriodicPlotter(sec=2, xlabel='Iterations', ylabel='Loss', scale='semilogy')
optimizer = tf.keras.optimizers.SGD(learning_rate=1e-2) # define our optimizer
if hasattr(tqdm, '_instances'): tqdm._instances.clear() # clear if it exists
for idx in tqdm(range(0, train_images.shape[0], batch_size)):
# First grab a batch of training data and convert the input images to tensors
(images, labels) = (train_images[idx:idx+batch_size], train_labels[idx:idx+batch_size])
images = tf.convert_to_tensor(images, dtype=tf.float32)
# GradientTape to record differentiation operations
with tf.GradientTape() as tape:
#'''TODO: feed the images into the model and obtain the predictions'''
logits = cnn_model(images)  # TODO
#'''TODO: compute the categorical cross entropy loss
loss_value = tf.keras.backend.sparse_categorical_crossentropy(labels, logits) # TODO
loss_history.append(loss_value.numpy().mean()) # append the loss to the loss_history record
plotter.plot(loss_history.get())
# Backpropagation
'''TODO: Use the tape to compute the gradient against all parameters in the CNN model.
Use cnn_model.trainable_variables to access these parameters.'''
grads = tape.gradient(loss_value, cnn_model.trainable_variables) # TODO
optimizer.apply_gradients(zip(grads, cnn_model.trainable_variables))
###Output
_____no_output_____
###Markdown
Visit MIT Deep Learning Run in Google Colab View Source on GitHub Copyright Information
###Code
# Copyright 2022 MIT 6.S191 Introduction to Deep Learning. All Rights Reserved.
#
# Licensed under the MIT License. You may not use this file except in compliance
# with the License. Use and/or modification of this code outside of 6.S191 must
# reference:
#
# © MIT 6.S191: Introduction to Deep Learning
# http://introtodeeplearning.com
#
###Output
_____no_output_____
###Markdown
Laboratory 2: Computer Vision Part 1: MNIST Digit ClassificationIn the first portion of this lab, we will build and train a convolutional neural network (CNN) for classification of handwritten digits from the famous [MNIST](http://yann.lecun.com/exdb/mnist/) dataset. The MNIST dataset consists of 60,000 training images and 10,000 test images. Our classes are the digits 0-9.First, let's download the course repository, install dependencies, and import the relevant packages we'll need for this lab.
###Code
# Import Tensorflow 2.0
%tensorflow_version 2.x
import tensorflow as tf
!pip install mitdeeplearning
import mitdeeplearning as mdl
import matplotlib.pyplot as plt
import numpy as np
import random
from tqdm import tqdm
# Check that we are using a GPU, if not switch runtimes
# using Runtime > Change Runtime Type > GPU
assert len(tf.config.list_physical_devices('GPU')) > 0
###Output
Collecting mitdeeplearning
Downloading mitdeeplearning-0.2.0.tar.gz (2.1 MB)
     |████████████████████████████████| 2.1 MB 12.3 MB/s
Requirement already satisfied: numpy in /usr/local/lib/python3.7/dist-packages (from mitdeeplearning) (1.21.5)
Requirement already satisfied: regex in /usr/local/lib/python3.7/dist-packages (from mitdeeplearning) (2019.12.20)
Requirement already satisfied: tqdm in /usr/local/lib/python3.7/dist-packages (from mitdeeplearning) (4.63.0)
Requirement already satisfied: gym in /usr/local/lib/python3.7/dist-packages (from mitdeeplearning) (0.17.3)
Requirement already satisfied: pyglet<=1.5.0,>=1.4.0 in /usr/local/lib/python3.7/dist-packages (from gym->mitdeeplearning) (1.5.0)
Requirement already satisfied: scipy in /usr/local/lib/python3.7/dist-packages (from gym->mitdeeplearning) (1.4.1)
Requirement already satisfied: cloudpickle<1.7.0,>=1.2.0 in /usr/local/lib/python3.7/dist-packages (from gym->mitdeeplearning) (1.3.0)
Requirement already satisfied: future in /usr/local/lib/python3.7/dist-packages (from pyglet<=1.5.0,>=1.4.0->gym->mitdeeplearning) (0.16.0)
Building wheels for collected packages: mitdeeplearning
Building wheel for mitdeeplearning (setup.py) ... done
Created wheel for mitdeeplearning: filename=mitdeeplearning-0.2.0-py3-none-any.whl size=2115442 sha256=980379039749e11455610d29f5d0b5aaccb513212f3f30010f9f26b450df8c08
Stored in directory: /root/.cache/pip/wheels/9a/b9/4f/99b7c8c5c75355550b83e1fcfc02956fb40c35eb01e2262877
Successfully built mitdeeplearning
Installing collected packages: mitdeeplearning
Successfully installed mitdeeplearning-0.2.0
###Markdown
1.1 MNIST dataset Let's download and load the dataset and display a few random samples from it:
###Code
mnist = tf.keras.datasets.mnist
(train_images, train_labels), (test_images, test_labels) = mnist.load_data()
print(train_images.shape)
train_images = (np.expand_dims(train_images, axis=-1)/255.).astype(np.float32)
print(train_images.shape)
train_labels = (train_labels).astype(np.int64)
print(train_labels.shape)
test_images = (np.expand_dims(test_images, axis=-1)/255.).astype(np.float32)
test_labels = (test_labels).astype(np.int64)
###Output
Downloading data from https://storage.googleapis.com/tensorflow/tf-keras-datasets/mnist.npz
11493376/11490434 [==============================] - 1s 0us/step
11501568/11490434 [==============================] - 1s 0us/step
(60000, 28, 28)
(60000, 28, 28, 1)
(60000,)
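###Markdown
As a quick sanity check on the preprocessing above, a minimal sketch (an addition, not part of the lab) confirming the images are float32, scaled to [0, 1], and carry a trailing channel dimension:
###Code
# Sanity check on the preprocessed arrays
print(train_images.dtype, train_images.min(), train_images.max())  # float32 0.0 1.0
print(test_images.shape, test_labels.shape)  # (10000, 28, 28, 1) (10000,)
###Output
_____no_output_____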
###Markdown
Our training set is made up of 28x28 grayscale images of handwritten digits. Let's visualize what some of these images and their corresponding training labels look like.
###Code
plt.figure(figsize=(10,10))
random_inds = np.random.choice(60000,36)
for i in range(36):
plt.subplot(6,6,i+1)
plt.xticks([])
plt.yticks([])
plt.grid(False)
image_ind = random_inds[i]
plt.imshow(np.squeeze(train_images[image_ind]), cmap=plt.cm.binary)
plt.xlabel(train_labels[image_ind])
###Output
_____no_output_____
###Markdown
1.2 Neural Network for Handwritten Digit ClassificationWe'll first build a simple neural network consisting of two fully connected layers and apply this to the digit classification task. Our network will ultimately output a probability distribution over the 10 digit classes (0-9). This first architecture we will be building is depicted below:![alt_text](https://raw.githubusercontent.com/aamini/introtodeeplearning/master/lab2/img/mnist_2layers_arch.png "CNN Architecture for MNIST Classification") Fully connected neural network architectureTo define the architecture of this first fully connected neural network, we'll once again use the Keras API and define the model using the [`Sequential`](https://www.tensorflow.org/api_docs/python/tf/keras/models/Sequential) class. Note how we first use a [`Flatten`](https://www.tensorflow.org/api_docs/python/tf/keras/layers/Flatten) layer, which flattens the input so that it can be fed into the model. In this next block, you'll define the fully connected layers of this simple network.
###Code
def build_fc_model():
fc_model = tf.keras.Sequential([
# First define a Flatten layer
tf.keras.layers.Flatten(),
# '''TODO: Define the activation function for the first fully connected (Dense) layer.'''
tf.keras.layers.Dense(128, activation= 'relu'),
# '''TODO: Define the second Dense layer to output the classification probabilities'''
tf.keras.layers.Dense(10, activation='softmax')
])
return fc_model
model = build_fc_model()
###Output
_____no_output_____
###Markdown
As we progress through this next portion, you may find that you'll want to make changes to the architecture defined above. **Note that in order to update the model later on, you'll need to re-run the above cell to re-initialize the model.** Let's take a step back and think about the network we've just created. The first layer in this network, `tf.keras.layers.Flatten`, transforms the format of the images from a 2d-array (28 x 28 pixels), to a 1d-array of 28 * 28 = 784 pixels. You can think of this layer as unstacking rows of pixels in the image and lining them up. There are no learned parameters in this layer; it only reformats the data.After the pixels are flattened, the network consists of a sequence of two `tf.keras.layers.Dense` layers. These are fully-connected neural layers. The first `Dense` layer has 128 nodes (or neurons). The second (and last) layer (which you've defined!) should return an array of probability scores that sum to 1. Each node contains a score that indicates the probability that the current image belongs to one of the handwritten digit classes.That defines our fully connected model! Compile the modelBefore training the model, we need to define a few more settings. These are added during the model's [`compile`](https://www.tensorflow.org/api_docs/python/tf/keras/models/Sequential#compile) step:* *Loss function* — This defines how we measure how accurate the model is during training. As was covered in lecture, during training we want to minimize this function, which will "steer" the model in the right direction.* *Optimizer* — This defines how the model is updated based on the data it sees and its loss function.* *Metrics* — Here we can define metrics used to monitor the training and testing steps. In this example, we'll look at the *accuracy*, the fraction of the images that are correctly classified.We'll start out by using a stochastic gradient descent (SGD) optimizer initialized with a learning rate of 0.1. Since we are performing a categorical classification task, we'll want to use the [cross entropy loss](https://www.tensorflow.org/api_docs/python/tf/keras/metrics/sparse_categorical_crossentropy).You'll want to experiment with both the choice of optimizer and learning rate and evaluate how these affect the accuracy of the trained model.
###Code
'''TODO: Experiment with different optimizers and learning rates. How do these affect
the accuracy of the trained model? Which optimizers and/or learning rates yield
the best performance?'''
model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=1e-1),
loss='sparse_categorical_crossentropy',
metrics=['accuracy'])
###Output
_____no_output_____
###Markdown
Train the modelWe're now ready to train our model, which will involve feeding the training data (`train_images` and `train_labels`) into the model, and then asking it to learn the associations between images and labels. We'll also need to define the batch size and the number of epochs, or iterations over the MNIST dataset, to use during training. In Lab 1, we saw how we can use `GradientTape` to optimize losses and train models with stochastic gradient descent. After defining the model settings in the `compile` step, we can also accomplish training by calling the [`fit`](https://www.tensorflow.org/api_docs/python/tf/keras/models/Sequential#fit) method on an instance of the `Model` class. We will use this to train our fully connected model.
###Code
# Define the batch size and the number of epochs to use during training
BATCH_SIZE = 64
EPOCHS = 5
model.fit(train_images, train_labels, batch_size=BATCH_SIZE, epochs=EPOCHS)
###Output
Epoch 1/5
938/938 [==============================] - 7s 5ms/step - loss: 0.3707 - accuracy: 0.8971
Epoch 2/5
938/938 [==============================] - 5s 5ms/step - loss: 0.2018 - accuracy: 0.9415
Epoch 3/5
938/938 [==============================] - 6s 6ms/step - loss: 0.1520 - accuracy: 0.9560
Epoch 4/5
938/938 [==============================] - 5s 6ms/step - loss: 0.1237 - accuracy: 0.9641
Epoch 5/5
938/938 [==============================] - 3s 3ms/step - loss: 0.1041 - accuracy: 0.9699
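###Markdown
A brief aside on the progress bars above (an added note, not part of the lab): 938 is the number of gradient steps per epoch, i.e. ceil(60000 / 64) = 938 batches of size BATCH_SIZE over the 60,000 training images, with the final batch slightly smaller.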
###Markdown
As the model trains, the loss and accuracy metrics are displayed. With five epochs and a learning rate of 0.1, this fully connected model should achieve an accuracy of approximately 0.97 (or 97%) on the training data. Evaluate accuracy on the test datasetNow that we've trained the model, we can ask it to make predictions about a test set that it hasn't seen before. In this example, the `test_images` array comprises our test dataset. To evaluate accuracy, we can check to see if the model's predictions match the labels from the `test_labels` array. Use the [`evaluate`](https://www.tensorflow.org/api_docs/python/tf/keras/models/Sequential#evaluate) method to evaluate the model on the test dataset!
###Code
'''TODO: Use the evaluate method to test the model!'''
test_loss, test_acc = model.evaluate(test_images, test_labels)
print('Test accuracy:', test_acc)
###Output
313/313 [==============================] - 1s 3ms/step - loss: 0.1054 - accuracy: 0.9695
Test accuracy: 0.9695000052452087
###Markdown
You may observe that the accuracy on the test dataset is a little lower than the accuracy on the training dataset. This gap between training accuracy and test accuracy is an example of *overfitting*, when a machine learning model performs worse on new data than on its training data. What is the highest accuracy you can achieve with this first fully connected model? Since the handwritten digit classification task is pretty straightforward, you may be wondering how we can do better...![Deeper...](https://i.kym-cdn.com/photos/images/newsfeed/000/534/153/f87.jpg) 1.3 Convolutional Neural Network (CNN) for handwritten digit classification As we saw in lecture, convolutional neural networks (CNNs) are particularly well-suited for a variety of tasks in computer vision, and have achieved near-perfect accuracies on the MNIST dataset. We will now build a CNN composed of two convolutional layers and pooling layers, followed by two fully connected layers, and ultimately output a probability distribution over the 10 digit classes (0-9). The CNN we will be building is depicted below:![alt_text](https://raw.githubusercontent.com/aamini/introtodeeplearning/master/lab2/img/convnet_fig.png "CNN Architecture for MNIST Classification") Define the CNN modelWe'll use the same training and test datasets as before, and proceed similarly to our fully connected network to define and train our new CNN model. To do this we will explore two layers we have not encountered before: you can use [`keras.layers.Conv2D`](https://www.tensorflow.org/api_docs/python/tf/keras/layers/Conv2D) to define convolutional layers and [`keras.layers.MaxPool2D`](https://www.tensorflow.org/api_docs/python/tf/keras/layers/MaxPool2D) to define the pooling layers. Use the parameters shown in the network architecture above to define these layers and build the CNN model.
###Code
def build_cnn_model():
cnn_model = tf.keras.Sequential([
# TODO: Define the first convolutional layer
tf.keras.layers.Conv2D(24, 3, activation='relu'),  # ReLU per the architecture depicted above
# TODO: Define the first max pooling layer
tf.keras.layers.MaxPool2D(2, 2),
# TODO: Define the second convolutional layer
tf.keras.layers.Conv2D(36, 3, activation='relu'),  # ReLU per the architecture depicted above
# TODO: Define the second max pooling layer
tf.keras.layers.MaxPool2D(2, 2),
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(128, activation=tf.nn.relu),
# TODO: Define the last Dense layer to output the classification
# probabilities. Pay attention to the activation needed for a probability
# output
tf.keras.layers.Dense(10, activation='softmax')
])
return cnn_model
cnn_model = build_cnn_model()
# Initialize the model by passing some data through
cnn_model.predict(train_images[[0]])
# Print the summary of the layers in the model.
print(cnn_model.summary())
###Output
Model: "sequential_7"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
conv2d_12 (Conv2D) (None, 26, 26, 24) 240
max_pooling2d_12 (MaxPooling2D) (None, 13, 13, 24) 0
conv2d_13 (Conv2D) (None, 11, 11, 36) 7812
max_pooling2d_13 (MaxPooling2D) (None, 5, 5, 36) 0
flatten_7 (Flatten) (None, 900) 0
dense_14 (Dense) (None, 128) 115328
dense_15 (Dense) (None, 10) 1290
=================================================================
Total params: 124,670
Trainable params: 124,670
Non-trainable params: 0
_________________________________________________________________
None
###Markdown
Train and test the CNN modelNow, as before, we can define the loss function, optimizer, and metrics through the `compile` method. Compile the CNN model with an optimizer and learning rate of choice:
###Code
'''TODO: Define the compile operation with your optimizer and learning rate of choice'''
cnn_model.compile(optimizer=tf.keras.optimizers.Adam(), loss='sparse_categorical_crossentropy', metrics=['accuracy'])
###Output
_____no_output_____
###Markdown
As was the case with the fully connected model, we can train our CNN using the `fit` method via the Keras API.
###Code
'''TODO: Use model.fit to train the CNN model, with the same batch_size and number of epochs previously used.'''
cnn_model.fit(train_images, train_labels, batch_size=BATCH_SIZE, epochs=EPOCHS)
###Output
Epoch 1/5
938/938 [==============================] - 6s 6ms/step - loss: 0.1738 - accuracy: 0.9502
Epoch 2/5
938/938 [==============================] - 6s 6ms/step - loss: 0.0535 - accuracy: 0.9839
Epoch 3/5
938/938 [==============================] - 5s 6ms/step - loss: 0.0357 - accuracy: 0.9886
Epoch 4/5
938/938 [==============================] - 6s 6ms/step - loss: 0.0257 - accuracy: 0.9919
Epoch 5/5
938/938 [==============================] - 6s 6ms/step - loss: 0.0202 - accuracy: 0.9937
###Markdown
Great! Now that we've trained the model, let's evaluate it on the test dataset using the [`evaluate`](https://www.tensorflow.org/api_docs/python/tf/keras/models/Sequential#evaluate) method:
###Code
'''TODO: Use the evaluate method to test the model!'''
test_loss, test_acc = cnn_model.evaluate(test_images, test_labels)
print('Test accuracy:', test_acc)
###Output
313/313 [==============================] - 1s 3ms/step - loss: 0.0427 - accuracy: 0.9888
Test accuracy: 0.9887999892234802
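###Markdown
As a cross-check on `evaluate` (a minimal sketch, an addition not part of the lab), the same test accuracy can be recomputed by hand from the model's class predictions:
###Code
# Manual accuracy computation; should match evaluate's accuracy metric
manual_preds = np.argmax(cnn_model.predict(test_images), axis=1)
print('Manual test accuracy:', np.mean(manual_preds == test_labels))
###Output
_____no_output_____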
###Markdown
What is the highest accuracy you're able to achieve using the CNN model, and how does the accuracy of the CNN model compare to the accuracy of the simple fully connected network? What optimizers and learning rates seem to be optimal for training the CNN model? Make predictions with the CNN modelWith the model trained, we can use it to make predictions about some images. The [`predict`](https://www.tensorflow.org/api_docs/python/tf/keras/models/Sequential#predict) function call generates the output predictions given a set of input samples.
###Code
predictions = cnn_model.predict(test_images)
###Output
_____no_output_____
###Markdown
With this function call, the model has predicted the label for each image in the testing set. Let's take a look at the prediction for the first image in the test dataset:
###Code
predictions[0]
###Output
_____no_output_____
###Markdown
As you can see, a prediction is an array of 10 numbers. Recall that the output of our model is a probability distribution over the 10 digit classes. Thus, these numbers describe the model's "confidence" that the image corresponds to each of the 10 different digits. Let's look at the digit that has the highest confidence for the first image in the test dataset:
###Code
'''TODO: identify the digit with the highest confidence prediction for the first
image in the test dataset. '''
prediction = np.argmax(predictions[0])
print(prediction)
###Output
7
###Markdown
So, the model is most confident that this image is a "7". We can check the test label (remember, this is the true identity of the digit) to see if this prediction is correct:
###Code
print("Label of this digit is:", test_labels[0])
plt.imshow(test_images[0,:,:,0], cmap=plt.cm.binary)
###Output
Label of this digit is: 7
###Markdown
It is! Let's visualize the classification results on the MNIST dataset. We will plot images from the test dataset along with their predicted label, as well as a histogram that provides the prediction probabilities for each of the digits:
###Code
#@title Change the slider to look at the model's predictions! { run: "auto" }
image_index = 18 #@param {type:"slider", min:0, max:100, step:1}
plt.subplot(1,2,1)
mdl.lab2.plot_image_prediction(image_index, predictions, test_labels, test_images)
plt.subplot(1,2,2)
mdl.lab2.plot_value_prediction(image_index, predictions, test_labels)
###Output
_____no_output_____
###Markdown
We can also plot several images along with their predictions, where correct prediction labels are blue and incorrect prediction labels are red. The number gives the percent confidence (out of 100) for the predicted label. Note the model can be very confident in an incorrect prediction!
###Code
# Plots the first X test images, their predicted label, and the true label
# Color correct predictions in blue, incorrect predictions in red
num_rows = 5
num_cols = 4
num_images = num_rows*num_cols
plt.figure(figsize=(2*2*num_cols, 2*num_rows))
for i in range(num_images):
plt.subplot(num_rows, 2*num_cols, 2*i+1)
mdl.lab2.plot_image_prediction(i, predictions, test_labels, test_images)
plt.subplot(num_rows, 2*num_cols, 2*i+2)
mdl.lab2.plot_value_prediction(i, predictions, test_labels)
###Output
_____no_output_____
###Markdown
1.4 Training the model 2.0Earlier in the lab, we used the [`fit`](https://www.tensorflow.org/api_docs/python/tf/keras/models/Sequential#fit) function call to train the model. This function is quite high-level and intuitive, which is really useful for simpler models. As you may be able to tell, this function abstracts away many details of the training call, giving us less control over training the model; that finer control can be useful in other contexts. As an alternative, we can use the [`tf.GradientTape`](https://www.tensorflow.org/api_docs/python/tf/GradientTape) class to record differentiation operations during training, and then call the [`tf.GradientTape.gradient`](https://www.tensorflow.org/api_docs/python/tf/GradientTape#gradient) function to actually compute the gradients. You may recall seeing this in Lab 1 Part 1, but let's take another look at this here.We'll use this framework to train our `cnn_model` using stochastic gradient descent.
###Code
# Rebuild the CNN model
cnn_model = build_cnn_model()
batch_size = 12
loss_history = mdl.util.LossHistory(smoothing_factor=0.95) # to record the evolution of the loss
plotter = mdl.util.PeriodicPlotter(sec=2, xlabel='Iterations', ylabel='Loss', scale='semilogy')
optimizer = tf.keras.optimizers.SGD(learning_rate=1e-2) # define our optimizer
if hasattr(tqdm, '_instances'): tqdm._instances.clear() # clear if it exists
for idx in tqdm(range(0, train_images.shape[0], batch_size)):
# First grab a batch of training data and convert the input images to tensors
(images, labels) = (train_images[idx:idx+batch_size], train_labels[idx:idx+batch_size])
images = tf.convert_to_tensor(images, dtype=tf.float32)
# GradientTape to record differentiation operations
with tf.GradientTape() as tape:
#'''TODO: feed the images into the model and obtain the predictions'''
logits = cnn_model(images)
#'''TODO: compute the categorical cross entropy loss
loss_value = tf.keras.backend.sparse_categorical_crossentropy(labels, logits)
loss_history.append(loss_value.numpy().mean()) # append the loss to the loss_history record
plotter.plot(loss_history.get())
# Backpropagation
'''TODO: Use the tape to compute the gradient against all parameters in the CNN model.
Use cnn_model.trainable_variables to access these parameters.'''
grads = tape.gradient(loss_value, cnn_model.trainable_variables)
optimizer.apply_gradients(zip(grads, cnn_model.trainable_variables))
###Output
_____no_output_____
###Markdown
Visit MIT Deep Learning Run in Google Colab View Source on GitHub Copyright Information
###Code
# Copyright 2021 MIT 6.S191 Introduction to Deep Learning. All Rights Reserved.
#
# Licensed under the MIT License. You may not use this file except in compliance
# with the License. Use and/or modification of this code outside of 6.S191 must
# reference:
#
# © MIT 6.S191: Introduction to Deep Learning
# http://introtodeeplearning.com
#
###Output
_____no_output_____
###Markdown
Laboratory 2: Computer Vision Part 1: MNIST Digit ClassificationIn the first portion of this lab, we will build and train a convolutional neural network (CNN) for classification of handwritten digits from the famous [MNIST](http://yann.lecun.com/exdb/mnist/) dataset. The MNIST dataset consists of 60,000 training images and 10,000 test images. Our classes are the digits 0-9.First, let's download the course repository, install dependencies, and import the relevant packages we'll need for this lab.
###Code
# Import Tensorflow 2.0
%tensorflow_version 2.x
import tensorflow as tf
!pip install mitdeeplearning
import mitdeeplearning as mdl
import matplotlib.pyplot as plt
import numpy as np
import random
from tqdm import tqdm
# Check that we are using a GPU, if not switch runtimes
# using Runtime > Change Runtime Type > GPU
assert len(tf.config.list_physical_devices('GPU')) > 0
###Output
Collecting mitdeeplearning
Downloading mitdeeplearning-0.2.0.tar.gz (2.1 MB)
     |████████████████████████████████| 2.1 MB 5.2 MB/s
Requirement already satisfied: numpy in /usr/local/lib/python3.7/dist-packages (from mitdeeplearning) (1.19.5)
Requirement already satisfied: regex in /usr/local/lib/python3.7/dist-packages (from mitdeeplearning) (2019.12.20)
Requirement already satisfied: tqdm in /usr/local/lib/python3.7/dist-packages (from mitdeeplearning) (4.62.3)
Requirement already satisfied: gym in /usr/local/lib/python3.7/dist-packages (from mitdeeplearning) (0.17.3)
Requirement already satisfied: pyglet<=1.5.0,>=1.4.0 in /usr/local/lib/python3.7/dist-packages (from gym->mitdeeplearning) (1.5.0)
Requirement already satisfied: cloudpickle<1.7.0,>=1.2.0 in /usr/local/lib/python3.7/dist-packages (from gym->mitdeeplearning) (1.3.0)
Requirement already satisfied: scipy in /usr/local/lib/python3.7/dist-packages (from gym->mitdeeplearning) (1.4.1)
Requirement already satisfied: future in /usr/local/lib/python3.7/dist-packages (from pyglet<=1.5.0,>=1.4.0->gym->mitdeeplearning) (0.16.0)
Building wheels for collected packages: mitdeeplearning
Building wheel for mitdeeplearning (setup.py) ... done
Created wheel for mitdeeplearning: filename=mitdeeplearning-0.2.0-py3-none-any.whl size=2115442 sha256=0242532d948bbb2ea817cf517be28759fca47a7016919efdb9c9d80e97103022
Stored in directory: /root/.cache/pip/wheels/9a/b9/4f/99b7c8c5c75355550b83e1fcfc02956fb40c35eb01e2262877
Successfully built mitdeeplearning
Installing collected packages: mitdeeplearning
Successfully installed mitdeeplearning-0.2.0
###Markdown
1.1 MNIST dataset Let's download and load the dataset and display a few random samples from it:
###Code
mnist = tf.keras.datasets.mnist
(train_images, train_labels), (test_images, test_labels) = mnist.load_data()
train_images = (np.expand_dims(train_images, axis=-1)/255.).astype(np.float32)
train_labels = (train_labels).astype(np.int64)
test_images = (np.expand_dims(test_images, axis=-1)/255.).astype(np.float32)
test_labels = (test_labels).astype(np.int64)
###Output
Downloading data from https://storage.googleapis.com/tensorflow/tf-keras-datasets/mnist.npz
11493376/11490434 [==============================] - 0s 0us/step
11501568/11490434 [==============================] - 0s 0us/step
###Markdown
Our training set is made up of 28x28 grayscale images of handwritten digits. Let's visualize what some of these images and their corresponding training labels look like.
###Code
plt.figure(figsize=(10,10))
random_inds = np.random.choice(60000,36)
for i in range(36):
plt.subplot(6,6,i+1)
plt.xticks([])
plt.yticks([])
plt.grid(False)
image_ind = random_inds[i]
plt.imshow(np.squeeze(train_images[image_ind]), cmap=plt.cm.binary)
plt.xlabel(train_labels[image_ind])
###Output
_____no_output_____
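###Markdown
A brief aside on the plotting call above (an added note, not part of the lab): `np.squeeze` drops the trailing channel dimension that was added during preprocessing, turning each (28, 28, 1) array back into the (28, 28) shape that `plt.imshow` expects for a grayscale image.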
###Markdown
1.2 Neural Network for Handwritten Digit ClassificationWe'll first build a simple neural network consisting of two fully connected layers and apply this to the digit classification task. Our network will ultimately output a probability distribution over the 10 digit classes (0-9). This first architecture we will be building is depicted below:![alt_text](https://raw.githubusercontent.com/aamini/introtodeeplearning/master/lab2/img/mnist_2layers_arch.png "CNN Architecture for MNIST Classification") Fully connected neural network architectureTo define the architecture of this first fully connected neural network, we'll once again use the Keras API and define the model using the [`Sequential`](https://www.tensorflow.org/api_docs/python/tf/keras/models/Sequential) class. Note how we first use a [`Flatten`](https://www.tensorflow.org/api_docs/python/tf/keras/layers/Flatten) layer, which flattens the input so that it can be fed into the model. In this next block, you'll define the fully connected layers of this simple network.
###Code
def build_fc_model():
fc_model = tf.keras.Sequential([
# First define a Flatten layer
tf.keras.layers.Flatten(),
# '''TODO: Define the activation function for the first fully connected (Dense) layer.'''
tf.keras.layers.Dense(128, activation=tf.nn.relu),
# '''TODO: Define the second Dense layer to output the classification probabilities'''
tf.keras.layers.Dense(10, activation=tf.nn.softmax)
])
return fc_model
model = build_fc_model()
###Output
_____no_output_____
###Markdown
As we progress through this next portion, you may find that you'll want to make changes to the architecture defined above. **Note that in order to update the model later on, you'll need to re-run the above cell to re-initialize the model.** Let's take a step back and think about the network we've just created. The first layer in this network, `tf.keras.layers.Flatten`, transforms the format of the images from a 2d-array (28 x 28 pixels), to a 1d-array of 28 * 28 = 784 pixels. You can think of this layer as unstacking rows of pixels in the image and lining them up. There are no learned parameters in this layer; it only reformats the data.After the pixels are flattened, the network consists of a sequence of two `tf.keras.layers.Dense` layers. These are fully-connected neural layers. The first `Dense` layer has 128 nodes (or neurons). The second (and last) layer (which you've defined!) should return an array of probability scores that sum to 1. Each node contains a score that indicates the probability that the current image belongs to one of the handwritten digit classes.That defines our fully connected model! Compile the modelBefore training the model, we need to define a few more settings. These are added during the model's [`compile`](https://www.tensorflow.org/api_docs/python/tf/keras/models/Sequential#compile) step:* *Loss function* — This defines how we measure how accurate the model is during training. As was covered in lecture, during training we want to minimize this function, which will "steer" the model in the right direction.* *Optimizer* — This defines how the model is updated based on the data it sees and its loss function.* *Metrics* — Here we can define metrics used to monitor the training and testing steps. In this example, we'll look at the *accuracy*, the fraction of the images that are correctly classified.We'll start out by using a stochastic gradient descent (SGD) optimizer initialized with a learning rate of 0.1. Since we are performing a categorical classification task, we'll want to use the [cross entropy loss](https://www.tensorflow.org/api_docs/python/tf/keras/metrics/sparse_categorical_crossentropy).You'll want to experiment with both the choice of optimizer and learning rate and evaluate how these affect the accuracy of the trained model.
###Code
'''TODO: Experiment with different optimizers and learning rates. How do these affect
the accuracy of the trained model? Which optimizers and/or learning rates yield
the best performance?'''
model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=1e-1),
loss='sparse_categorical_crossentropy',
metrics=['accuracy'])
###Output
_____no_output_____
###Markdown
Train the modelWe're now ready to train our model, which will involve feeding the training data (`train_images` and `train_labels`) into the model, and then asking it to learn the associations between images and labels. We'll also need to define the batch size and the number of epochs, or iterations over the MNIST dataset, to use during training. In Lab 1, we saw how we can use `GradientTape` to optimize losses and train models with stochastic gradient descent. After defining the model settings in the `compile` step, we can also accomplish training by calling the [`fit`](https://www.tensorflow.org/api_docs/python/tf/keras/models/Sequential#fit) method on an instance of the `Model` class. We will use this to train our fully connected model.
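Before running `fit`, it helps to know how many gradient updates one epoch implies (a quick check based on the 60,000-image training set loaded earlier):
###Code
import math
# 60,000 training images split into batches of 64
print(math.ceil(60000 / 64)) # 938 steps per epoch, matching the progress bars below
###Output
_____no_output_____
###Markdown
Now run the training itself: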
###Code
# Define the batch size and the number of epochs to use during training
BATCH_SIZE = 64
EPOCHS = 5
model.fit(train_images, train_labels, batch_size=BATCH_SIZE, epochs=EPOCHS)
###Output
Epoch 1/5
938/938 [==============================] - 6s 3ms/step - loss: 0.3636 - accuracy: 0.8999
Epoch 2/5
938/938 [==============================] - 3s 3ms/step - loss: 0.1924 - accuracy: 0.9454
Epoch 3/5
938/938 [==============================] - 3s 3ms/step - loss: 0.1450 - accuracy: 0.9588
Epoch 4/5
938/938 [==============================] - 3s 3ms/step - loss: 0.1184 - accuracy: 0.9661
Epoch 5/5
938/938 [==============================] - 3s 3ms/step - loss: 0.1006 - accuracy: 0.9712
###Markdown
As the model trains, the loss and accuracy metrics are displayed. With five epochs and a learning rate of 0.1, this fully connected model should achieve an accuracy of approximately 0.97 (or 97%) on the training data. Evaluate accuracy on the test datasetNow that we've trained the model, we can ask it to make predictions about a test set that it hasn't seen before. In this example, the `test_images` array comprises our test dataset. To evaluate accuracy, we can check to see if the model's predictions match the labels from the `test_labels` array. Use the [`evaluate`](https://www.tensorflow.org/api_docs/python/tf/keras/models/Sequential#evaluate) method to evaluate the model on the test dataset!
###Code
'''TODO: Use the evaluate method to test the model!'''
test_loss, test_acc = model.evaluate(test_images, test_labels)
print('Test accuracy:', test_acc)
###Output
313/313 [==============================] - 1s 3ms/step - loss: 0.0996 - accuracy: 0.9688
Test accuracy: 0.9688000082969666
###Markdown
You may observe that the accuracy on the test dataset is a little lower than the accuracy on the training dataset. This gap between training accuracy and test accuracy is an example of *overfitting*, when a machine learning model performs worse on new data than on its training data. What is the highest accuracy you can achieve with this first fully connected model? Since the handwritten digit classification task is pretty straightforward, you may be wondering how we can do better...![Deeper...](https://i.kym-cdn.com/photos/images/newsfeed/000/534/153/f87.jpg) 1.3 Convolutional Neural Network (CNN) for handwritten digit classification As we saw in lecture, convolutional neural networks (CNNs) are particularly well-suited for a variety of tasks in computer vision, and have achieved near-perfect accuracies on the MNIST dataset. We will now build a CNN composed of two convolutional layers and pooling layers, followed by two fully connected layers, and ultimately output a probability distribution over the 10 digit classes (0-9). The CNN we will be building is depicted below:![alt_text](https://raw.githubusercontent.com/aamini/introtodeeplearning/master/lab2/img/convnet_fig.png "CNN Architecture for MNIST Classification") Define the CNN modelWe'll use the same training and test datasets as before, and proceed similarly to our fully connected network to define and train our new CNN model. To do this we will explore two layers we have not encountered before: you can use [`keras.layers.Conv2D`](https://www.tensorflow.org/api_docs/python/tf/keras/layers/Conv2D) to define convolutional layers and [`keras.layers.MaxPool2D`](https://www.tensorflow.org/api_docs/python/tf/keras/layers/MaxPool2D) to define the pooling layers. Use the parameters shown in the network architecture above to define these layers and build the CNN model.
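To build intuition for how these two layer types transform their input, here is a short shape walkthrough (a sketch that feeds a random tensor through freshly constructed layers, not part of the lab's required code):
###Code
x = tf.random.normal([1, 28, 28, 1]) # a dummy single-image batch
x = tf.keras.layers.Conv2D(filters=24, kernel_size=(3,3), activation=tf.nn.relu)(x)
print(x.shape) # (1, 26, 26, 24): a 'valid' 3x3 convolution shrinks each side by 2
x = tf.keras.layers.MaxPool2D(pool_size=(2,2))(x)
print(x.shape) # (1, 13, 13, 24): 2x2 max pooling halves the spatial dimensions
###Output
_____no_output_____
###Markdown
Now define the full CNN: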
###Code
def build_cnn_model():
cnn_model = tf.keras.Sequential([
# TODO: Define the first convolutional layer
tf.keras.layers.Conv2D(filters=24, kernel_size=(3,3), activation=tf.nn.relu),
# tf.keras.layers.Conv2D('''TODO''')
# TODO: Define the first max pooling layer
tf.keras.layers.MaxPool2D(pool_size=(2,2)),
# tf.keras.layers.MaxPool2D('''TODO''')
# TODO: Define the second convolutional layer
tf.keras.layers.Conv2D(filters=36, kernel_size=(3,3), activation=tf.nn.relu),
# tf.keras.layers.Conv2D('''TODO''')
# TODO: Define the second max pooling layer
tf.keras.layers.MaxPool2D(pool_size=(2,2)),
# tf.keras.layers.MaxPool2D('''TODO''')
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(128, activation=tf.nn.relu),
# TODO: Define the last Dense layer to output the classification
# probabilities. Pay attention to the activation needed for a probability
# output
tf.keras.layers.Dense(10, activation=tf.nn.softmax)
# [TODO Dense layer to output classification probabilities]
])
return cnn_model
cnn_model = build_cnn_model()
# Initialize the model by passing some data through
cnn_model.predict(train_images[[0]])
# Print the summary of the layers in the model.
print(cnn_model.summary())
###Output
Model: "sequential_1"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
conv2d (Conv2D) (None, 26, 26, 24) 240
max_pooling2d (MaxPooling2D) (None, 13, 13, 24) 0
conv2d_1 (Conv2D) (None, 11, 11, 36) 7812
max_pooling2d_1 (MaxPooling2D) (None, 5, 5, 36) 0
flatten_1 (Flatten) (None, 900) 0
dense_2 (Dense) (None, 128) 115328
dense_3 (Dense) (None, 10) 1290
=================================================================
Total params: 124,670
Trainable params: 124,670
Non-trainable params: 0
_________________________________________________________________
None
###Markdown
Train and test the CNN modelNow, as before, we can define the loss function, optimizer, and metrics through the `compile` method. Compile the CNN model with an optimizer and learning rate of choice:
###Code
'''TODO: Define the compile operation with your optimizer and learning rate of choice'''
cnn_model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=1e-1),
loss='sparse_categorical_crossentropy',
metrics=['accuracy']) # TODO
###Output
_____no_output_____
###Markdown
As was the case with the fully connected model, we can train our CNN using the `fit` method via the Keras API.
###Code
'''TODO: Use model.fit to train the CNN model, with the same batch_size and number of epochs previously used.'''
BATCH_SIZE = 64
EPOCHS = 5
cnn_model.fit(train_images, train_labels, batch_size=BATCH_SIZE, epochs=EPOCHS)
###Output
Epoch 1/5
938/938 [==============================] - 8s 6ms/step - loss: 0.2503 - accuracy: 0.9206
Epoch 2/5
938/938 [==============================] - 6s 6ms/step - loss: 0.0720 - accuracy: 0.9776
Epoch 3/5
938/938 [==============================] - 6s 6ms/step - loss: 0.0506 - accuracy: 0.9840
Epoch 4/5
938/938 [==============================] - 6s 6ms/step - loss: 0.0394 - accuracy: 0.9876
Epoch 5/5
938/938 [==============================] - 6s 6ms/step - loss: 0.0317 - accuracy: 0.9901
###Markdown
Great! Now that we've trained the model, let's evaluate it on the test dataset using the [`evaluate`](https://www.tensorflow.org/api_docs/python/tf/keras/models/Sequential#evaluate) method:
###Code
'''TODO: Use the evaluate method to test the model!'''
test_loss, test_acc = cnn_model.evaluate(test_images, test_labels)
print('Test accuracy:', test_acc)
###Output
313/313 [==============================] - 1s 4ms/step - loss: 0.0297 - accuracy: 0.9901
Test accuracy: 0.9901000261306763
###Markdown
What is the highest accuracy you're able to achieve using the CNN model, and how does the accuracy of the CNN model compare to the accuracy of the simple fully connected network? What optimizers and learning rates seem to be optimal for training the CNN model? Make predictions with the CNN modelWith the model trained, we can use it to make predictions about some images. The [`predict`](https://www.tensorflow.org/api_docs/python/tf/keras/models/Sequential#predict) function call generates the output predictions given a set of input samples.
###Code
predictions = cnn_model.predict(test_images)
###Output
_____no_output_____
###Markdown
With this function call, the model has predicted the label for each image in the testing set. Let's take a look at the prediction for the first image in the test dataset:
###Code
predictions[0]
###Output
_____no_output_____
###Markdown
As you can see, a prediction is an array of 10 numbers. Recall that the output of our model is a probability distribution over the 10 digit classes. Thus, these numbers describe the model's "confidence" that the image corresponds to each of the 10 different digits. Let's look at the digit that has the highest confidence for the first image in the test dataset:
###Code
'''TODO: identify the digit with the highest confidence prediction for the first
image in the test dataset. '''
prediction = np.argmax(predictions[0])
print(prediction)
###Output
7
###Markdown
So, the model is most confident that this image is a "???". We can check the test label (remember, this is the true identity of the digit) to see if this prediction is correct:
###Code
print("Label of this digit is:", test_labels[0])
plt.imshow(test_images[0,:,:,0], cmap=plt.cm.binary)
###Output
Label of this digit is: 7
###Markdown
It is! Let's visualize the classification results on the MNIST dataset. We will plot images from the test dataset along with their predicted label, as well as a histogram that provides the prediction probabilities for each of the digits:
###Code
#@title Change the slider to look at the model's predictions! { run: "auto" }
image_index = 80 #@param {type:"slider", min:0, max:100, step:1}
plt.subplot(1,2,1)
mdl.lab2.plot_image_prediction(image_index, predictions, test_labels, test_images)
plt.subplot(1,2,2)
mdl.lab2.plot_value_prediction(image_index, predictions, test_labels)
###Output
_____no_output_____
###Markdown
We can also plot several images along with their predictions, where correct prediction labels are blue and incorrect prediction labels are red. The number gives the percent confidence (out of 100) for the predicted label. Note the model can be very confident in an incorrect prediction!
###Code
# Plots the first X test images, their predicted label, and the true label
# Color correct predictions in blue, incorrect predictions in red
num_rows = 5
num_cols = 4
num_images = num_rows*num_cols
plt.figure(figsize=(2*2*num_cols, 2*num_rows))
for i in range(num_images):
plt.subplot(num_rows, 2*num_cols, 2*i+1)
mdl.lab2.plot_image_prediction(i, predictions, test_labels, test_images)
plt.subplot(num_rows, 2*num_cols, 2*i+2)
mdl.lab2.plot_value_prediction(i, predictions, test_labels)
###Output
_____no_output_____
###Markdown
1.4 Training the model 2.0Earlier in the lab, we used the [`fit`](https://www.tensorflow.org/api_docs/python/tf/keras/models/Sequential#fit) function call to train the model. This function is quite high-level and intuitive, which is really useful for simpler models. As you may be able to tell, this function abstracts away many details in the training call, and we have less control over the training process; that finer-grained control could be useful in other contexts. As an alternative to this, we can use the [`tf.GradientTape`](https://www.tensorflow.org/api_docs/python/tf/GradientTape) class to record differentiation operations during training, and then call the [`tf.GradientTape.gradient`](https://www.tensorflow.org/api_docs/python/tf/GradientTape#gradient) function to actually compute the gradients. You may recall seeing this in Lab 1 Part 1, but let's take another look at this here.We'll use this framework to train our `cnn_model` using stochastic gradient descent.
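Before the full training loop, here is the `GradientTape` mechanic in isolation (a minimal sketch on a toy scalar, unrelated to the CNN itself):
###Code
x = tf.Variable(3.0)
with tf.GradientTape() as tape:
    y = x * x # the forward computation is recorded on the tape
dy_dx = tape.gradient(y, x) # differentiate y with respect to x
print(dy_dx.numpy()) # 6.0, since d(x^2)/dx = 2x evaluated at x = 3
###Output
_____no_output_____
###Markdown
The same record-then-differentiate pattern drives the loop below: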
###Code
# Rebuild the CNN model
cnn_model = build_cnn_model()
batch_size = 12
loss_history = mdl.util.LossHistory(smoothing_factor=0.95) # to record the evolution of the loss
plotter = mdl.util.PeriodicPlotter(sec=2, xlabel='Iterations', ylabel='Loss', scale='semilogy')
optimizer = tf.keras.optimizers.SGD(learning_rate=1e-2) # define our optimizer
if hasattr(tqdm, '_instances'): tqdm._instances.clear() # clear if it exists
for idx in tqdm(range(0, train_images.shape[0], batch_size)):
# First grab a batch of training data and convert the input images to tensors
(images, labels) = (train_images[idx:idx+batch_size], train_labels[idx:idx+batch_size])
images = tf.convert_to_tensor(images, dtype=tf.float32)
# GradientTape to record differentiation operations
with tf.GradientTape() as tape:
#'''TODO: feed the images into the model and obtain the predictions'''
logits = cnn_model(images)
#'''TODO: compute the categorical cross entropy loss
loss_value = tf.keras.backend.sparse_categorical_crossentropy(labels, logits)
# loss_value = tf.keras.backend.sparse_categorical_crossentropy() # TODO
loss_history.append(loss_value.numpy().mean()) # append the loss to the loss_history record
plotter.plot(loss_history.get())
# Backpropagation
'''TODO: Use the tape to compute the gradient against all parameters in the CNN model.
Use cnn_model.trainable_variables to access these parameters.'''
grads = tape.gradient(loss_value, cnn_model.trainable_variables)
optimizer.apply_gradients(zip(grads, cnn_model.trainable_variables))
###Output
_____no_output_____
###Markdown
Copyright Information
###Code
# Copyright 2020 MIT 6.S191 Introduction to Deep Learning. All Rights Reserved.
#
# Licensed under the MIT License. You may not use this file except in compliance
# with the License. Use and/or modification of this code outside of 6.S191 must
# reference:
#
# © MIT 6.S191: Introduction to Deep Learning
# http://introtodeeplearning.com
#
###Output
_____no_output_____
###Markdown
Laboratory 2: Computer Vision Part 1: MNIST Digit ClassificationIn the first portion of this lab, we will build and train a convolutional neural network (CNN) for classification of handwritten digits from the famous [MNIST](http://yann.lecun.com/exdb/mnist/) dataset. The MNIST dataset consists of 60,000 training images and 10,000 test images. Our classes are the digits 0-9.First, let's download the course repository, install dependencies, and import the relevant packages we'll need for this lab.
###Code
# Import Tensorflow 2.0
%tensorflow_version 2.x
import tensorflow as tf
!pip install mitdeeplearning
import mitdeeplearning as mdl
import matplotlib.pyplot as plt
import numpy as np
import random
from tqdm import tqdm
# Check that we are using a GPU, if not switch runtimes
# using Runtime > Change Runtime Type > GPU
assert len(tf.config.list_physical_devices('GPU')) > 0
###Output
Collecting mitdeeplearning
  Downloading https://files.pythonhosted.org/packages/8b/3b/b9174b68dc10832356d02a2d83a64b43a24f1762c172754407d22fc8f960/mitdeeplearning-0.1.2.tar.gz (2.1MB)
Requirement already satisfied: numpy in /usr/local/lib/python3.6/dist-packages (from mitdeeplearning) (1.18.5)
Requirement already satisfied: regex in /usr/local/lib/python3.6/dist-packages (from mitdeeplearning) (2019.12.20)
Requirement already satisfied: tqdm in /usr/local/lib/python3.6/dist-packages (from mitdeeplearning) (4.41.1)
Requirement already satisfied: gym in /usr/local/lib/python3.6/dist-packages (from mitdeeplearning) (0.17.2)
Requirement already satisfied: cloudpickle<1.4.0,>=1.2.0 in /usr/local/lib/python3.6/dist-packages (from gym->mitdeeplearning) (1.3.0)
Requirement already satisfied: pyglet<=1.5.0,>=1.4.0 in /usr/local/lib/python3.6/dist-packages (from gym->mitdeeplearning) (1.5.0)
Requirement already satisfied: scipy in /usr/local/lib/python3.6/dist-packages (from gym->mitdeeplearning) (1.4.1)
Requirement already satisfied: future in /usr/local/lib/python3.6/dist-packages (from pyglet<=1.5.0,>=1.4.0->gym->mitdeeplearning) (0.16.0)
Building wheels for collected packages: mitdeeplearning
Building wheel for mitdeeplearning (setup.py) ... done
Created wheel for mitdeeplearning: filename=mitdeeplearning-0.1.2-cp36-none-any.whl size=2114585 sha256=3a496fbe8faa9c0fef74139862e4764241121153772347ca4fce873f9d81fe49
Stored in directory: /root/.cache/pip/wheels/27/e1/73/5f01c787621d8a3c857f59876c79e304b9b64db9ff5bd61b74
Successfully built mitdeeplearning
Installing collected packages: mitdeeplearning
Successfully installed mitdeeplearning-0.1.2
###Markdown
1.1 MNIST dataset Let's download and load the dataset and display a few random samples from it:
###Code
mnist = tf.keras.datasets.mnist
(train_images, train_labels), (test_images, test_labels) = mnist.load_data()
train_images = (np.expand_dims(train_images, axis=-1)/255.).astype(np.float32)
train_labels = (train_labels).astype(np.int64)
test_images = (np.expand_dims(test_images, axis=-1)/255.).astype(np.float32)
test_labels = (test_labels).astype(np.int64)
###Output
Downloading data from https://storage.googleapis.com/tensorflow/tf-keras-datasets/mnist.npz
11493376/11490434 [==============================] - 0s 0us/step
###Markdown
Our training set is made up of 28x28 grayscale images of handwritten digits. Let's visualize what some of these images and their corresponding training labels look like.
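Since the loading cell above rescaled pixel intensities by 1/255 and added a trailing channel axis, a quick sanity check on the resulting arrays is worthwhile (a minimal sketch):
###Code
print(train_images.shape) # (60000, 28, 28, 1): 60,000 grayscale images with one channel
print(train_images.min(), train_images.max()) # pixel values now lie in [0.0, 1.0]
###Output
_____no_output_____
###Markdown
With the shapes confirmed, on to the visualization: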
###Code
plt.figure(figsize=(10,10))
random_inds = np.random.choice(60000,36)
for i in range(36):
plt.subplot(6,6,i+1)
plt.xticks([])
plt.yticks([])
plt.grid(False)
image_ind = random_inds[i]
plt.imshow(np.squeeze(train_images[image_ind]), cmap=plt.cm.binary)
plt.xlabel(train_labels[image_ind])
###Output
_____no_output_____
###Markdown
1.2 Neural Network for Handwritten Digit ClassificationWe'll first build a simple neural network consisting of two fully connected layers and apply this to the digit classification task. Our network will ultimately output a probability distribution over the 10 digit classes (0-9). This first architecture we will be building is depicted below:![alt_text](https://raw.githubusercontent.com/aamini/introtodeeplearning/master/lab2/img/mnist_2layers_arch.png "CNN Architecture for MNIST Classification") Fully connected neural network architectureTo define the architecture of this first fully connected neural network, we'll once again use the Keras API and define the model using the [`Sequential`](https://www.tensorflow.org/api_docs/python/tf/keras/models/Sequential) class. Note how we first use a [`Flatten`](https://www.tensorflow.org/api_docs/python/tf/keras/layers/Flatten) layer, which flattens the input so that it can be fed into the model. In this next block, you'll define the fully connected layers of this simple network.
###Code
def build_fc_model():
fc_model = tf.keras.Sequential([
# First define a Flatten layer
tf.keras.layers.Flatten(),
# '''TODO: Define the activation function for the first fully connected (Dense) layer.'''
tf.keras.layers.Dense(128, activation='relu'),
# '''TODO: Define the second Dense layer to output the classification probabilities'''
tf.keras.layers.Dense(10,activation='softmax')
])
return fc_model
model = build_fc_model()
###Output
_____no_output_____
###Markdown
As we progress through this next portion, you may find that you'll want to make changes to the architecture defined above. **Note that in order to update the model later on, you'll need to re-run the above cell to re-initialize the model.** Let's take a step back and think about the network we've just created. The first layer in this network, `tf.keras.layers.Flatten`, transforms the format of the images from a 2d-array (28 x 28 pixels), to a 1d-array of 28 * 28 = 784 pixels. You can think of this layer as unstacking rows of pixels in the image and lining them up. There are no learned parameters in this layer; it only reformats the data.After the pixels are flattened, the network consists of a sequence of two `tf.keras.layers.Dense` layers. These are fully-connected neural layers. The first `Dense` layer has 128 nodes (or neurons). The second (and last) layer (which you've defined!) should return an array of probability scores that sum to 1. Each node contains a score that indicates the probability that the current image belongs to one of the handwritten digit classes.That defines our fully connected model! Compile the modelBefore training the model, we need to define a few more settings. These are added during the model's [`compile`](https://www.tensorflow.org/api_docs/python/tf/keras/models/Sequential#compile) step:* *Loss function* — This defines how we measure how accurate the model is during training. As was covered in lecture, during training we want to minimize this function, which will "steer" the model in the right direction.* *Optimizer* — This defines how the model is updated based on the data it sees and its loss function.* *Metrics* — Here we can define metrics used to monitor the training and testing steps. In this example, we'll look at the *accuracy*, the fraction of the images that are correctly classified.We'll start out by using a stochastic gradient descent (SGD) optimizer initialized with a learning rate of 0.1. Since we are performing a categorical classification task, we'll want to use the [cross entropy loss](https://www.tensorflow.org/api_docs/python/tf/keras/metrics/sparse_categorical_crossentropy).You'll want to experiment with both the choice of optimizer and learning rate and evaluate how these affect the accuracy of the trained model.
###Code
'''TODO: Experiment with different optimizers and learning rates. How do these affect
the accuracy of the trained model? Which optimizers and/or learning rates yield
the best performance?'''
model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=1e-1),
loss='sparse_categorical_crossentropy',
metrics=['accuracy'])
###Output
_____no_output_____
###Markdown
Train the modelWe're now ready to train our model, which will involve feeding the training data (`train_images` and `train_labels`) into the model, and then asking it to learn the associations between images and labels. We'll also need to define the batch size and the number of epochs, or iterations over the MNIST dataset, to use during training. In Lab 1, we saw how we can use `GradientTape` to optimize losses and train models with stochastic gradient descent. After defining the model settings in the `compile` step, we can also accomplish training by calling the [`fit`](https://www.tensorflow.org/api_docs/python/tf/keras/models/Sequential#fit) method on an instance of the `Model` class. We will use this to train our fully connected model.
###Code
# Define the batch size and the number of epochs to use during training
BATCH_SIZE = 64
EPOCHS = 5
model.fit(train_images, train_labels, batch_size=BATCH_SIZE, epochs=EPOCHS)
###Output
Epoch 1/5
938/938 [==============================] - 2s 2ms/step - loss: 0.3663 - accuracy: 0.8985
Epoch 2/5
938/938 [==============================] - 2s 2ms/step - loss: 0.1961 - accuracy: 0.9445
Epoch 3/5
938/938 [==============================] - 2s 2ms/step - loss: 0.1503 - accuracy: 0.9571
Epoch 4/5
938/938 [==============================] - 2s 2ms/step - loss: 0.1222 - accuracy: 0.9656
Epoch 5/5
938/938 [==============================] - 2s 2ms/step - loss: 0.1033 - accuracy: 0.9711
###Markdown
As the model trains, the loss and accuracy metrics are displayed. With five epochs and a learning rate of 0.1, this fully connected model should achieve an accuracy of approximately 0.97 (or 97%) on the training data. Evaluate accuracy on the test datasetNow that we've trained the model, we can ask it to make predictions about a test set that it hasn't seen before. In this example, the `test_images` array comprises our test dataset. To evaluate accuracy, we can check to see if the model's predictions match the labels from the `test_labels` array. Use the [`evaluate`](https://www.tensorflow.org/api_docs/python/tf/keras/models/Sequential#evaluate) method to evaluate the model on the test dataset!
###Code
'''TODO: Use the evaluate method to test the model!'''
test_loss, test_acc = model.evaluate(test_images, test_labels)
print('Test accuracy:', test_acc)
###Output
313/313 [==============================] - 1s 2ms/step - loss: 0.1076 - accuracy: 0.9687
Test accuracy: 0.9686999917030334
###Markdown
You may observe that the accuracy on the test dataset is a little lower than the accuracy on the training dataset. This gap between training accuracy and test accuracy is an example of *overfitting*, when a machine learning model performs worse on new data than on its training data. What is the highest accuracy you can achieve with this first fully connected model? Since the handwritten digit classification task is pretty straightforward, you may be wondering how we can do better...![Deeper...](https://i.kym-cdn.com/photos/images/newsfeed/000/534/153/f87.jpg) 1.3 Convolutional Neural Network (CNN) for handwritten digit classification As we saw in lecture, convolutional neural networks (CNNs) are particularly well-suited for a variety of tasks in computer vision, and have achieved near-perfect accuracies on the MNIST dataset. We will now build a CNN composed of two convolutional layers and pooling layers, followed by two fully connected layers, and ultimately output a probability distribution over the 10 digit classes (0-9). The CNN we will be building is depicted below:![alt_text](https://raw.githubusercontent.com/aamini/introtodeeplearning/master/lab2/img/convnet_fig.png "CNN Architecture for MNIST Classification") Define the CNN modelWe'll use the same training and test datasets as before, and proceed similarly to our fully connected network to define and train our new CNN model. To do this we will explore two layers we have not encountered before: you can use [`keras.layers.Conv2D`](https://www.tensorflow.org/api_docs/python/tf/keras/layers/Conv2D) to define convolutional layers and [`keras.layers.MaxPool2D`](https://www.tensorflow.org/api_docs/python/tf/keras/layers/MaxPool2D) to define the pooling layers. Use the parameters shown in the network architecture above to define these layers and build the CNN model.
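It is also worth working out the layer parameter counts by hand, since they should match the model summary printed after the next cell (a quick arithmetic check):
###Code
# A Conv2D layer learns kernel_h * kernel_w * in_channels * filters weights, plus one bias per filter
print(3 * 3 * 1 * 24 + 24) # 240 parameters in the first conv layer
print(3 * 3 * 24 * 36 + 36) # 7812 parameters in the second conv layer
# After flattening the (5, 5, 36) feature map, the first Dense layer sees 900 inputs
print(5 * 5 * 36 * 128 + 128) # 115328 parameters
print(128 * 10 + 10) # 1290 parameters in the output layer
###Output
_____no_output_____
###Markdown
Now build the model: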
###Code
def build_cnn_model():
cnn_model = tf.keras.Sequential([
# TODO: Define the first convolutional layer
tf.keras.layers.Conv2D(24, (3,3), activation='relu'),
# TODO: Define the first max pooling layer
tf.keras.layers.MaxPool2D(pool_size=(2, 2)),
# TODO: Define the second convolutional layer
tf.keras.layers.Conv2D(36, (3,3), activation='relu'),
# TODO: Define the second max pooling layer
tf.keras.layers.MaxPool2D(pool_size=(2, 2)),
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(128, activation=tf.nn.relu),
# TODO: Define the last Dense layer to output the classification
# probabilities. Pay attention to the activation needed for a probability
# output
tf.keras.layers.Dense(10,activation='softmax')
])
return cnn_model
cnn_model = build_cnn_model()
# Initialize the model by passing some data through
cnn_model.predict(train_images[[0]])
# Print the summary of the layers in the model.
print(cnn_model.summary())
###Output
Model: "sequential_1"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
conv2d (Conv2D) (None, 26, 26, 24) 240
_________________________________________________________________
max_pooling2d (MaxPooling2D) (None, 13, 13, 24) 0
_________________________________________________________________
conv2d_1 (Conv2D) (None, 11, 11, 36) 7812
_________________________________________________________________
max_pooling2d_1 (MaxPooling2 (None, 5, 5, 36) 0
_________________________________________________________________
flatten_1 (Flatten) (None, 900) 0
_________________________________________________________________
dense_2 (Dense) (None, 128) 115328
_________________________________________________________________
dense_3 (Dense) (None, 10) 1290
=================================================================
Total params: 124,670
Trainable params: 124,670
Non-trainable params: 0
_________________________________________________________________
None
###Markdown
Train and test the CNN modelNow, as before, we can define the loss function, optimizer, and metrics through the `compile` method. Compile the CNN model with an optimizer and learning rate of choice:
###Code
'''TODO: Define the compile operation with your optimizer and learning rate of choice'''
cnn_model.compile(optimizer='adam', loss='sparse_categorical_crossentropy', metrics=['accuracy']) # TODO
###Output
_____no_output_____
###Markdown
As was the case with the fully connected model, we can train our CNN using the `fit` method via the Keras API.
###Code
'''TODO: Use model.fit to train the CNN model, with the same batch_size and number of epochs previously used.'''
cnn_model.fit(train_images,train_labels)
###Output
1875/1875 [==============================] - 4s 2ms/step - loss: 0.1380 - accuracy: 0.9591
###Markdown
Great! Now that we've trained the model, let's evaluate it on the test dataset using the [`evaluate`](https://www.tensorflow.org/api_docs/python/tf/keras/models/Sequential#evaluate) method:
###Code
'''TODO: Use the evaluate method to test the model!'''
test_loss, test_acc = cnn_model.evaluate(test_images, test_labels)
print('Test accuracy:', test_acc)
###Output
313/313 [==============================] - 1s 2ms/step - loss: 0.1076 - accuracy: 0.9687
Test accuracy: 0.9686999917030334
###Markdown
What is the highest accuracy you're able to achieve using the CNN model, and how does the accuracy of the CNN model compare to the accuracy of the simple fully connected network? What optimizers and learning rates seem to be optimal for training the CNN model? Make predictions with the CNN modelWith the model trained, we can use it to make predictions about some images. The [`predict`](https://www.tensorflow.org/api_docs/python/tf/keras/models/Sequential#predict) function call generates the output predictions given a set of input samples.
###Code
predictions = cnn_model.predict(test_images)
###Output
_____no_output_____
###Markdown
With this function call, the model has predicted the label for each image in the testing set. Let's take a look at the prediction for the first image in the test dataset:
###Code
predictions[0]
###Output
_____no_output_____
###Markdown
As you can see, a prediction is an array of 10 numbers. Recall that the output of our model is a probability distribution over the 10 digit classes. Thus, these numbers describe the model's "confidence" that the image corresponds to each of the 10 different digits. Let's look at the digit that has the highest confidence for the first image in the test dataset:
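Because the final layer is a softmax, each prediction row should sum to 1; we can verify this directly (a quick check, assuming `predictions` from the cell above):
###Code
print(np.sum(predictions[0])) # approximately 1.0: a valid probability distribution over the 10 digits
###Output
_____no_output_____
###Markdown
Now extract the most likely digit: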
###Code
'''TODO: identify the digit with the highest confidence prediction for the first
image in the test dataset. '''
prediction = np.argmax(predictions[0])
print(prediction)
###Output
7
###Markdown
So, the model is most confident that this image is a "???". We can check the test label (remember, this is the true identity of the digit) to see if this prediction is correct:
###Code
print("Label of this digit is:", test_labels[0])
plt.imshow(test_images[0,:,:,0], cmap=plt.cm.binary)
###Output
Label of this digit is: 7
###Markdown
It is! Let's visualize the classification results on the MNIST dataset. We will plot images from the test dataset along with their predicted label, as well as a histogram that provides the prediction probabilities for each of the digits:
###Code
#@title Change the slider to look at the model's predictions! { run: "auto" }
image_index = 88 #@param {type:"slider", min:0, max:100, step:1}
plt.subplot(1,2,1)
mdl.lab2.plot_image_prediction(image_index, predictions, test_labels, test_images)
plt.subplot(1,2,2)
mdl.lab2.plot_value_prediction(image_index, predictions, test_labels)
###Output
_____no_output_____
###Markdown
We can also plot several images along with their predictions, where correct prediction labels are blue and incorrect prediction labels are red. The number gives the percent confidence (out of 100) for the predicted label. Note the model can be very confident in an incorrect prediction!
###Code
# Plots the first X test images, their predicted label, and the true label
# Color correct predictions in blue, incorrect predictions in red
num_rows = 5
num_cols = 4
num_images = num_rows*num_cols
plt.figure(figsize=(2*2*num_cols, 2*num_rows))
for i in range(num_images):
plt.subplot(num_rows, 2*num_cols, 2*i+1)
mdl.lab2.plot_image_prediction(i, predictions, test_labels, test_images)
plt.subplot(num_rows, 2*num_cols, 2*i+2)
mdl.lab2.plot_value_prediction(i, predictions, test_labels)
###Output
_____no_output_____
###Markdown
1.4 Training the model 2.0Earlier in the lab, we used the [`fit`](https://www.tensorflow.org/api_docs/python/tf/keras/models/Sequential#fit) function call to train the model. This function is quite high-level and intuitive, which is really useful for simpler models. As you may be able to tell, this function abstracts away many details in the training call, and we have less control over the training process; that finer-grained control could be useful in other contexts. As an alternative to this, we can use the [`tf.GradientTape`](https://www.tensorflow.org/api_docs/python/tf/GradientTape) class to record differentiation operations during training, and then call the [`tf.GradientTape.gradient`](https://www.tensorflow.org/api_docs/python/tf/GradientTape#gradient) function to actually compute the gradients. You may recall seeing this in Lab 1 Part 1, but let's take another look at this here.We'll use this framework to train our `cnn_model` using stochastic gradient descent.
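One detail worth pinning down first is what `sparse_categorical_crossentropy` computes: for probability outputs, it is simply the negative log of the probability assigned to the true class (a minimal sketch with made-up numbers):
###Code
probs = tf.constant([[0.1, 0.7, 0.2]]) # a toy 3-class probability output
labels = tf.constant([1]) # the true class index
loss = tf.keras.backend.sparse_categorical_crossentropy(labels, probs)
print(loss.numpy()) # approximately 0.3567
print(-np.log(0.7)) # the same value, computed by hand
###Output
_____no_output_____
###Markdown
With the loss pinned down, here is the full training loop: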
###Code
# Rebuild the CNN model
cnn_model = build_cnn_model()
batch_size = 12
loss_history = mdl.util.LossHistory(smoothing_factor=0.95) # to record the evolution of the loss
plotter = mdl.util.PeriodicPlotter(sec=2, xlabel='Iterations', ylabel='Loss', scale='semilogy')
optimizer = tf.keras.optimizers.SGD(learning_rate=1e-2) # define our optimizer
if hasattr(tqdm, '_instances'): tqdm._instances.clear() # clear if it exists
for idx in tqdm(range(0, train_images.shape[0], batch_size)):
# First grab a batch of training data and convert the input images to tensors
(images, labels) = (train_images[idx:idx+batch_size], train_labels[idx:idx+batch_size])
images = tf.convert_to_tensor(images, dtype=tf.float32)
# GradientTape to record differentiation operations
with tf.GradientTape() as tape:
#'''TODO: feed the images into the model and obtain the predictions'''
logits = cnn_model(images)
#'''TODO: compute the categorical cross entropy loss
loss_value = tf.keras.backend.sparse_categorical_crossentropy(labels,logits) # TODO
loss_history.append(loss_value.numpy().mean()) # append the loss to the loss_history record
plotter.plot(loss_history.get())
# Backpropagation
'''TODO: Use the tape to compute the gradient against all parameters in the CNN model.
Use cnn_model.trainable_variables to access these parameters.'''
grads = tape.gradient(loss_value,cnn_model.trainable_variables)
optimizer.apply_gradients(zip(grads, cnn_model.trainable_variables))
###Output
_____no_output_____
###Markdown
Copyright Information
###Code
# Copyright 2020 MIT 6.S191 Introduction to Deep Learning. All Rights Reserved.
#
# Licensed under the MIT License. You may not use this file except in compliance
# with the License. Use and/or modification of this code outside of 6.S191 must
# reference:
#
# © MIT 6.S191: Introduction to Deep Learning
# http://introtodeeplearning.com
#
###Output
_____no_output_____
###Markdown
Laboratory 2: Computer Vision Part 1: MNIST Digit ClassificationIn the first portion of this lab, we will build and train a convolutional neural network (CNN) for classification of handwritten digits from the famous [MNIST](http://yann.lecun.com/exdb/mnist/) dataset. The MNIST dataset consists of 60,000 training images and 10,000 test images. Our classes are the digits 0-9.First, let's download the course repository, install dependencies, and import the relevant packages we'll need for this lab.
###Code
# Import Tensorflow 2.0
%tensorflow_version 2.x
import tensorflow as tf
!pip install mitdeeplearning
import mitdeeplearning as mdl
import matplotlib.pyplot as plt
import numpy as np
import random
from tqdm import tqdm
# Check that we are using a GPU, if not switch runtimes
# using Runtime > Change Runtime Type > GPU
print(tf.config.list_physical_devices('GPU')) # list the visible GPU devices
assert len(tf.config.list_physical_devices('GPU')) > 0
###Output
_____no_output_____
###Markdown
1.1 MNIST dataset Let's download and load the dataset and display a few random samples from it:
###Code
mnist = tf.keras.datasets.mnist
(train_images, train_labels), (test_images, test_labels) = mnist.load_data()
train_images = (np.expand_dims(train_images, axis=-1)/255.).astype(np.float32)
train_labels = (train_labels).astype(np.int64)
test_images = (np.expand_dims(test_images, axis=-1)/255.).astype(np.float32)
test_labels = (test_labels).astype(np.int64)
###Output
_____no_output_____
###Markdown
Our training set is made up of 28x28 grayscale images of handwritten digits. Let's visualize what some of these images and their corresponding training labels look like.
###Code
plt.figure(figsize=(10,10))
random_inds = np.random.choice(60000,36)
for i in range(36):
plt.subplot(6,6,i+1)
plt.xticks([])
plt.yticks([])
plt.grid(False)
image_ind = random_inds[i]
plt.imshow(np.squeeze(train_images[image_ind]), cmap=plt.cm.binary)
plt.xlabel(train_labels[image_ind])
###Output
_____no_output_____
###Markdown
1.2 Neural Network for Handwritten Digit ClassificationWe'll first build a simple neural network consisting of two fully connected layers and apply this to the digit classification task. Our network will ultimately output a probability distribution over the 10 digit classes (0-9). This first architecture we will be building is depicted below:![alt_text](https://raw.githubusercontent.com/aamini/introtodeeplearning/master/lab2/img/mnist_2layers_arch.png "CNN Architecture for MNIST Classification") Fully connected neural network architectureTo define the architecture of this first fully connected neural network, we'll once again use the Keras API and define the model using the [`Sequential`](https://www.tensorflow.org/api_docs/python/tf/keras/models/Sequential) class. Note how we first use a [`Flatten`](https://www.tensorflow.org/api_docs/python/tf/keras/layers/Flatten) layer, which flattens the input so that it can be fed into the model. In this next block, you'll define the fully connected layers of this simple network.
###Code
def build_fc_model():
fc_model = tf.keras.Sequential([
# First define a Flatten layer
tf.keras.layers.Flatten(),
# '''TODO: Define the activation function for the first fully connected (Dense) layer.'''
tf.keras.layers.Dense(128, activation= 'relu'),
# '''TODO: Define the second Dense layer to output the classification probabilities'''
tf.keras.layers.Dense(10, activation='softmax')
])
return fc_model
model = build_fc_model()
###Output
_____no_output_____
###Markdown
As we progress through this next portion, you may find that you'll want to make changes to the architecture defined above. **Note that in order to update the model later on, you'll need to re-run the above cell to re-initialize the model.** Let's take a step back and think about the network we've just created. The first layer in this network, `tf.keras.layers.Flatten`, transforms the format of the images from a 2d-array (28 x 28 pixels), to a 1d-array of 28 * 28 = 784 pixels. You can think of this layer as unstacking rows of pixels in the image and lining them up. There are no learned parameters in this layer; it only reformats the data.After the pixels are flattened, the network consists of a sequence of two `tf.keras.layers.Dense` layers. These are fully-connected neural layers. The first `Dense` layer has 128 nodes (or neurons). The second (and last) layer (which you've defined!) should return an array of probability scores that sum to 1. Each node contains a score that indicates the probability that the current image belongs to one of the handwritten digit classes.That defines our fully connected model! Compile the modelBefore training the model, we need to define a few more settings. These are added during the model's [`compile`](https://www.tensorflow.org/api_docs/python/tf/keras/models/Sequential#compile) step:* *Loss function* — This defines how we measure how accurate the model is during training. As was covered in lecture, during training we want to minimize this function, which will "steer" the model in the right direction.* *Optimizer* — This defines how the model is updated based on the data it sees and its loss function.* *Metrics* — Here we can define metrics used to monitor the training and testing steps. In this example, we'll look at the *accuracy*, the fraction of the images that are correctly classified.We'll start out by using a stochastic gradient descent (SGD) optimizer initialized with a learning rate of 0.1. Since we are performing a categorical classification task, we'll want to use the [cross entropy loss](https://www.tensorflow.org/api_docs/python/tf/keras/metrics/sparse_categorical_crossentropy).You'll want to experiment with both the choice of optimizer and learning rate and evaluate how these affect the accuracy of the trained model.
###Code
'''TODO: Experiment with different optimizers and learning rates. How do these affect
the accuracy of the trained model? Which optimizers and/or learning rates yield
the best performance?'''
model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=1e-1),
loss='sparse_categorical_crossentropy',
metrics=['accuracy'])
###Output
_____no_output_____
###Markdown
Train the modelWe're now ready to train our model, which will involve feeding the training data (`train_images` and `train_labels`) into the model, and then asking it to learn the associations between images and labels. We'll also need to define the batch size and the number of epochs, or iterations over the MNIST dataset, to use during training. In Lab 1, we saw how we can use `GradientTape` to optimize losses and train models with stochastic gradient descent. After defining the model settings in the `compile` step, we can also accomplish training by calling the [`fit`](https://www.tensorflow.org/api_docs/python/tf/keras/models/Sequential#fit) method on an instance of the `Model` class. We will use this to train our fully connected model.
###Code
# Define the batch size and the number of epochs to use during training
BATCH_SIZE = 64
EPOCHS = 5
model.fit(train_images, train_labels, batch_size=BATCH_SIZE, epochs=EPOCHS)
###Output
_____no_output_____
###Markdown
As the model trains, the loss and accuracy metrics are displayed. With five epochs and a learning rate of 0.1, this fully connected model should achieve an accuracy of approximately 0.97 (or 97%) on the training data. Evaluate accuracy on the test datasetNow that we've trained the model, we can ask it to make predictions about a test set that it hasn't seen before. In this example, the `test_images` array comprises our test dataset. To evaluate accuracy, we can check to see if the model's predictions match the labels from the `test_labels` array. Use the [`evaluate`](https://www.tensorflow.org/api_docs/python/tf/keras/models/Sequential#evaluate) method to evaluate the model on the test dataset!
###Code
'''TODO: Use the evaluate method to test the model!'''
test_loss, test_acc = model.evaluate(test_images, test_labels)
print('Test accuracy:', test_acc)
###Output
_____no_output_____
###Markdown
You may observe that the accuracy on the test dataset is a little lower than the accuracy on the training dataset. This gap between training accuracy and test accuracy is an example of *overfitting*, when a machine learning model performs worse on new data than on its training data. What is the highest accuracy you can achieve with this first fully connected model? Since the handwritten digit classification task is pretty straightforward, you may be wondering how we can do better...![Deeper...](https://i.kym-cdn.com/photos/images/newsfeed/000/534/153/f87.jpg) 1.3 Convolutional Neural Network (CNN) for handwritten digit classification As we saw in lecture, convolutional neural networks (CNNs) are particularly well-suited for a variety of tasks in computer vision, and have achieved near-perfect accuracies on the MNIST dataset. We will now build a CNN composed of two convolutional layers and pooling layers, followed by two fully connected layers, and ultimately output a probability distribution over the 10 digit classes (0-9). The CNN we will be building is depicted below:![alt_text](https://raw.githubusercontent.com/aamini/introtodeeplearning/master/lab2/img/convnet_fig.png "CNN Architecture for MNIST Classification") Define the CNN modelWe'll use the same training and test datasets as before, and proceed similarly to our fully connected network to define and train our new CNN model. To do this we will explore two layers we have not encountered before: you can use [`keras.layers.Conv2D`](https://www.tensorflow.org/api_docs/python/tf/keras/layers/Conv2D) to define convolutional layers and [`keras.layers.MaxPool2D`](https://www.tensorflow.org/api_docs/python/tf/keras/layers/MaxPool2D) to define the pooling layers. Use the parameters shown in the network architecture above to define these layers and build the CNN model.
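The spatial size at each stage follows a simple rule for 'valid' convolutions and pooling: output = (input - window) // stride + 1. Tracing it through this architecture (a quick worked check, assuming 3x3 kernels and 2x2 stride-2 pooling as in the figure):
###Code
# 'valid' convolution/pooling output size: (input - window) // stride + 1
print((28 - 3) // 1 + 1) # 26 after the first 3x3 convolution
print((26 - 2) // 2 + 1) # 13 after 2x2 pooling with stride 2
print((13 - 3) // 1 + 1) # 11 after the second 3x3 convolution
print((11 - 2) // 2 + 1) # 5 after the second pooling: 5 * 5 * 36 = 900 flattened features
###Output
_____no_output_____
###Markdown
Now define the model accordingly: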
###Code
def build_cnn_model():
cnn_model = tf.keras.Sequential([
# TODO: Define the first convolutional layer
tf.keras.layers.Conv2D(24, kernel_size=3, activation='relu'),
# TODO: Define the first max pooling layer
tf.keras.layers.MaxPool2D(pool_size=(2,2), strides=2),
# TODO: Define the second convolutional layer
tf.keras.layers.Conv2D(36, kernel_size=3, activation='relu'),
# TODO: Define the second max pooling layer
tf.keras.layers.MaxPool2D(pool_size=(2,2), strides=2),
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(128, activation=tf.nn.relu),
# TODO: Define the last Dense layer to output the classification
# probabilities. Pay attention to the activation needed for a probability
# output
tf.keras.layers.Dense(10, activation='softmax')
])
return cnn_model
cnn_model = build_cnn_model()
# Initialize the model by passing some data through
cnn_model.predict(train_images[[0]])
# Print the summary of the layers in the model.
print(cnn_model.summary())
###Output
_____no_output_____
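###Markdown
As an optional sanity check (not part of the original lab), the parameter counts that `summary()` reports for the convolutional layers can be reproduced by hand: a `Conv2D` layer with a k x k kernel, c_in input channels, and c_out filters holds (k * k * c_in + 1) * c_out weights (the +1 is the per-filter bias), while pooling layers hold none.
###Code
# Recompute Conv2D parameter counts by hand; pooling layers add no parameters
def conv2d_params(kernel_size, in_channels, out_channels):
    # kernel weights per filter plus one bias per filter
    return (kernel_size * kernel_size * in_channels + 1) * out_channels

print(conv2d_params(3, 1, 24))   # first conv layer: 240 parameters
print(conv2d_params(3, 24, 36))  # second conv layer: 7812 parameters
###Output
_____no_output_____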
###Markdown
Train and test the CNN modelNow, as before, we can define the loss function, optimizer, and metrics through the `compile` method. Compile the CNN model with an optimizer and learning rate of choice:
###Code
'''TODO: Define the compile operation with your optimizer and learning rate of choice'''
cnn_model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=0.01), loss='sparse_categorical_crossentropy', metrics=['accuracy']) # TODO
###Output
_____no_output_____
###Markdown
As was the case with the fully connected model, we can train our CNN using the `fit` method via the Keras API.
###Code
'''TODO: Use model.fit to train the CNN model, with the same batch_size and number of epochs previously used.'''
cnn_model.fit(train_images, train_labels, batch_size=BATCH_SIZE, epochs=EPOCHS)
###Output
_____no_output_____
###Markdown
Great! Now that we've trained the model, let's evaluate it on the test dataset using the [`evaluate`](https://www.tensorflow.org/api_docs/python/tf/keras/models/Sequential#evaluate) method:
###Code
'''TODO: Use the evaluate method to test the model!'''
test_loss, test_acc = cnn_model.evaluate(test_images, test_labels)
print('Test accuracy:', test_acc)
###Output
_____no_output_____
###Markdown
What is the highest accuracy you're able to achieve using the CNN model, and how does the accuracy of the CNN model compare to the accuracy of the simple fully connected network? What optimizers and learning rates seem to be optimal for training the CNN model? Make predictions with the CNN modelWith the model trained, we can use it to make predictions about some images. The [`predict`](https://www.tensorflow.org/api_docs/python/tf/keras/models/Sequential#predict) function call generates the output predictions given a set of input samples.
###Code
predictions = cnn_model.predict(test_images)
###Output
_____no_output_____
###Markdown
With this function call, the model has predicted the label for each image in the testing set. Let's take a look at the prediction for the first image in the test dataset:
###Code
predictions[0]
###Output
_____no_output_____
###Markdown
As you can see, a prediction is an array of 10 numbers. Recall that the output of our model is a probability distribution over the 10 digit classes. Thus, these numbers describe the model's "confidence" that the image corresponds to each of the 10 different digits. Let's look at the digit that has the highest confidence for the first image in the test dataset:
###Code
'''TODO: identify the digit with the highest confidence prediction for the first
image in the test dataset. '''
prediction = np.argmax(predictions[0])
print(prediction)
###Output
_____no_output_____
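###Markdown
Because the final layer is a softmax, the 10 entries of each prediction should sum to (approximately) 1. A small optional check, assuming the `predictions` array from the cell above:
###Code
# Verify the softmax output is a probability distribution and inspect the top candidates
print("sum of probabilities:", np.sum(predictions[0]))  # should be ~1.0
top3 = np.argsort(predictions[0])[::-1][:3]              # indices of the 3 largest entries
print("top-3 digits:", top3, "with probabilities:", predictions[0][top3])
###Output
_____no_output_____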
###Markdown
So, the model is most confident that this image is a "???". We can check the test label (remember, this is the true identity of the digit) to see if this prediction is correct:
###Code
print("Label of this digit is:", test_labels[0])
plt.imshow(test_images[0,:,:,0], cmap=plt.cm.binary)
###Output
_____no_output_____
###Markdown
It is! Let's visualize the classification results on the MNIST dataset. We will plot images from the test dataset along with their predicted label, as well as a histogram that provides the prediction probabilities for each of the digits:
###Code
#@title Change the slider to look at the model's predictions! { run: "auto" }
image_index = 50 #@param {type:"slider", min:0, max:100, step:1}
plt.subplot(1,2,1)
mdl.lab2.plot_image_prediction(image_index, predictions, test_labels, test_images)
plt.subplot(1,2,2)
mdl.lab2.plot_value_prediction(image_index, predictions, test_labels)
###Output
_____no_output_____
###Markdown
We can also plot several images along with their predictions, where correct prediction labels are blue and incorrect prediction labels are red. The number gives the percent confidence (out of 100) for the predicted label. Note the model can be very confident in an incorrect prediction!
###Code
# Plots the first X test images, their predicted label, and the true label
# Color correct predictions in blue, incorrect predictions in red
num_rows = 5
num_cols = 4
num_images = num_rows*num_cols
plt.figure(figsize=(2*2*num_cols, 2*num_rows))
for i in range(num_images):
plt.subplot(num_rows, 2*num_cols, 2*i+1)
mdl.lab2.plot_image_prediction(i, predictions, test_labels, test_images)
plt.subplot(num_rows, 2*num_cols, 2*i+2)
mdl.lab2.plot_value_prediction(i, predictions, test_labels)
###Output
_____no_output_____
###Markdown
1.4 Training the model 2.0Earlier in the lab, we used the [`fit`](https://www.tensorflow.org/api_docs/python/tf/keras/models/Sequential#fit) function call to train the model. This function is quite high-level and intuitive, which is really useful for simpler models. As you may be able to tell, this function abstracts away many details of the training call, leaving us with less control over training, which we may want in other contexts. As an alternative, we can use the [`tf.GradientTape`](https://www.tensorflow.org/api_docs/python/tf/GradientTape) class to record differentiation operations during training, and then call the [`tf.GradientTape.gradient`](https://www.tensorflow.org/api_docs/python/tf/GradientTape#gradient) function to actually compute the gradients. You may recall seeing this in Lab 1 Part 1, but let's take another look at this here.We'll use this framework to train our `cnn_model` using stochastic gradient descent.
###Code
# Rebuild the CNN model
cnn_model = build_cnn_model()
batch_size = 12
loss_history = mdl.util.LossHistory(smoothing_factor=0.95) # to record the evolution of the loss
plotter = mdl.util.PeriodicPlotter(sec=2, xlabel='Iterations', ylabel='Loss', scale='semilogy')
optimizer = tf.keras.optimizers.SGD(learning_rate=1e-2) # define our optimizer
if hasattr(tqdm, '_instances'): tqdm._instances.clear() # clear if it exists
for idx in tqdm(range(0, train_images.shape[0], batch_size)):
# First grab a batch of training data and convert the input images to tensors
(images, labels) = (train_images[idx:idx+batch_size], train_labels[idx:idx+batch_size])
images = tf.convert_to_tensor(images, dtype=tf.float32)
# GradientTape to record differentiation operations
with tf.GradientTape() as tape:
#'''TODO: feed the images into the model and obtain the predictions'''
logits = cnn_model(images)
#'''TODO: compute the categorical cross entropy loss
loss_value = tf.keras.backend.sparse_categorical_crossentropy(labels, logits) # TODO
loss_history.append(loss_value.numpy().mean()) # append the loss to the loss_history record
plotter.plot(loss_history.get())
# Backpropagation
'''TODO: Use the tape to compute the gradient against all parameters in the CNN model.
Use cnn_model.trainable_variables to access these parameters.'''
grads = tape.gradient(loss_value, cnn_model.trainable_variables)
optimizer.apply_gradients(zip(grads, cnn_model.trainable_variables))
###Output
_____no_output_____
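###Markdown
Note that the loop above makes only a single pass over the training set. A minimal sketch of extending it to several epochs with per-epoch shuffling (an illustration, not part of the original lab; the epoch count here is arbitrary) could look like this:
###Code
# Sketch: multi-epoch custom training loop with shuffling.
# Assumes cnn_model, optimizer, train_images, train_labels, and batch_size
# from the cells above.
NUM_EPOCHS_CUSTOM = 3  # hypothetical epoch count for this sketch
for epoch in range(NUM_EPOCHS_CUSTOM):
    perm = np.random.permutation(train_images.shape[0])  # reshuffle every epoch
    for idx in range(0, train_images.shape[0], batch_size):
        batch = perm[idx:idx + batch_size]
        images = tf.convert_to_tensor(train_images[batch], dtype=tf.float32)
        labels = train_labels[batch]
        with tf.GradientTape() as tape:
            probs = cnn_model(images)
            loss_value = tf.keras.backend.sparse_categorical_crossentropy(labels, probs)
        grads = tape.gradient(loss_value, cnn_model.trainable_variables)
        optimizer.apply_gradients(zip(grads, cnn_model.trainable_variables))
    print("epoch", epoch + 1, "mean loss of last batch:", loss_value.numpy().mean())
###Output
_____no_output_____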
###Markdown
Copyright Information
###Code
# Copyright 2020 MIT 6.S191 Introduction to Deep Learning. All Rights Reserved.
#
# Licensed under the MIT License. You may not use this file except in compliance
# with the License. Use and/or modification of this code outside of 6.S191 must
# reference:
#
# © MIT 6.S191: Introduction to Deep Learning
# http://introtodeeplearning.com
#
###Output
_____no_output_____
###Markdown
Laboratory 2: Computer Vision Part 1: MNIST Digit ClassificationIn the first portion of this lab, we will build and train a convolutional neural network (CNN) for classification of handwritten digits from the famous [MNIST](http://yann.lecun.com/exdb/mnist/) dataset. The MNIST dataset consists of 60,000 training images and 10,000 test images. Our classes are the digits 0-9.First, let's download the course repository, install dependencies, and import the relevant packages we'll need for this lab.
###Code
# Import Tensorflow 2.0
%tensorflow_version 2.x
import tensorflow as tf
!pip install mitdeeplearning
import mitdeeplearning as mdl
import matplotlib.pyplot as plt
import numpy as np
import random
from tqdm import tqdm
# Check that we are using a GPU, if not switch runtimes
# using Runtime > Change Runtime Type > GPU
assert len(tf.config.list_physical_devices('GPU')) > 0
from tensorflow.python.client import device_lib
for device in device_lib.list_local_devices():
if device.device_type=="GPU":
print(device.physical_device_desc)
mnist = tf.keras.datasets.mnist
(train_images, train_labels), (test_images, test_labels) = mnist.load_data()
train_images = (np.expand_dims(train_images, axis=-1)/255.).astype(np.float32)
train_labels = (train_labels).astype(np.int64)
test_images = (np.expand_dims(test_images, axis=-1)/255.).astype(np.float32)
test_labels = (test_labels).astype(np.int64)
###Output
Downloading data from https://storage.googleapis.com/tensorflow/tf-keras-datasets/mnist.npz
11493376/11490434 [==============================] - 0s 0us/step
###Markdown
1.1 MNIST dataset Let's download and load the dataset and display a few random samples from it: Our training set is made up of 28x28 grayscale images of handwritten digits. Let's visualize what some of these images and their corresponding training labels look like.
###Code
plt.figure(figsize=(10,10))
random_inds = np.random.choice(60000,36)
for i in range(36):
plt.subplot(6,6,i+1)
plt.xticks([])
plt.yticks([])
plt.grid(False)
image_ind = random_inds[i]
plt.imshow(np.squeeze(train_images[image_ind]), cmap=plt.cm.binary)
plt.xlabel(train_labels[image_ind])
###Output
_____no_output_____
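###Markdown
As a quick optional check (an addition, not part of the original lab), we can confirm that the ten digit classes are roughly balanced in the training set:
###Code
# Count how many training examples exist for each digit class 0-9
counts = np.bincount(train_labels, minlength=10)
for digit, count in enumerate(counts):
    print("digit", digit, ":", count, "examples")
###Output
_____no_output_____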
###Markdown
1.2 Neural Network for Handwritten Digit ClassificationWe'll first build a simple neural network consisting of two fully connected layers and apply this to the digit classification task. Our network will ultimately output a probability distribution over the 10 digit classes (0-9). This first architecture we will be building is depicted below:![alt_text](https://raw.githubusercontent.com/aamini/introtodeeplearning/master/lab2/img/mnist_2layers_arch.png "CNN Architecture for MNIST Classification") Fully connected neural network architectureTo define the architecture of this first fully connected neural network, we'll once again use the Keras API and define the model using the [`Sequential`](https://www.tensorflow.org/api_docs/python/tf/keras/models/Sequential) class. Note how we first use a [`Flatten`](https://www.tensorflow.org/api_docs/python/tf/keras/layers/Flatten) layer, which flattens the input so that it can be fed into the model. In this next block, you'll define the fully connected layers of this simple network.
###Code
def build_fc_model():
fc_model = tf.keras.Sequential([
# First define a Flatten layer
tf.keras.layers.Flatten(),
# '''TODO: Define the activation function for the first fully connected (Dense) layer.'''
tf.keras.layers.Dense(128, activation= 'relu'),
# '''TODO: Define the second Dense layer to output the classification probabilities'''
tf.keras.layers.Dense(10, activation='softmax')
])
return fc_model
model = build_fc_model()
###Output
_____no_output_____
###Markdown
As we progress through this next portion, you may find that you'll want to make changes to the architecture defined above. **Note that in order to update the model later on, you'll need to re-run the above cell to re-initialize the model.** Let's take a step back and think about the network we've just created. The first layer in this network, `tf.keras.layers.Flatten`, transforms the format of the images from a 2d-array (28 x 28 pixels), to a 1d-array of 28 * 28 = 784 pixels. You can think of this layer as unstacking rows of pixels in the image and lining them up. There are no learned parameters in this layer; it only reformats the data.After the pixels are flattened, the network consists of a sequence of two `tf.keras.layers.Dense` layers. These are fully-connected neural layers. The first `Dense` layer has 128 nodes (or neurons). The second (and last) layer (which you've defined!) should return an array of probability scores that sum to 1. Each node contains a score that indicates the probability that the current image belongs to one of the handwritten digit classes.That defines our fully connected model! Compile the modelBefore training the model, we need to define a few more settings. These are added during the model's [`compile`](https://www.tensorflow.org/api_docs/python/tf/keras/models/Sequential#compile) step:* *Loss function* — This defines how we measure how accurate the model is during training. As was covered in lecture, during training we want to minimize this function, which will "steer" the model in the right direction.* *Optimizer* — This defines how the model is updated based on the data it sees and its loss function.* *Metrics* — Here we can define metrics used to monitor the training and testing steps. In this example, we'll look at the *accuracy*, the fraction of the images that are correctly classified.We'll start out by using a stochastic gradient descent (SGD) optimizer initialized with a learning rate of 0.1. Since we are performing a categorical classification task, we'll want to use the [cross entropy loss](https://www.tensorflow.org/api_docs/python/tf/keras/metrics/sparse_categorical_crossentropy).You'll want to experiment with both the choice of optimizer and learning rate and evaluate how these affect the accuracy of the trained model.
###Code
'''TODO: Experiment with different optimizers and learning rates. How do these affect
the accuracy of the trained model? Which optimizers and/or learning rates yield
the best performance?'''
model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=1e-1),
loss='sparse_categorical_crossentropy',
metrics=['accuracy'])
###Output
_____no_output_____
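###Markdown
The TODO above invites experimentation. One common alternative to SGD, sketched here as a suggestion rather than the lab's prescribed answer, is the Adam optimizer with a smaller learning rate (re-run `build_fc_model()` first so the comparison starts from a fresh initialization):
###Code
# Alternative compile step to experiment with (assumes model from the cells above)
model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3),
              loss='sparse_categorical_crossentropy',
              metrics=['accuracy'])
###Output
_____no_output_____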
###Markdown
Train the modelWe're now ready to train our model, which will involve feeding the training data (`train_images` and `train_labels`) into the model, and then asking it to learn the associations between images and labels. We'll also need to define the batch size and the number of epochs, or iterations over the MNIST dataset, to use during training. In Lab 1, we saw how we can use `GradientTape` to optimize losses and train models with stochastic gradient descent. After defining the model settings in the `compile` step, we can also accomplish training by calling the [`fit`](https://www.tensorflow.org/api_docs/python/tf/keras/models/Sequential#fit) method on an instance of the `Model` class. We will use this to train our fully connected model.
###Code
# Define the batch size and the number of epochs to use during training
BATCH_SIZE = 64
EPOCHS = 5
model.fit(train_images, train_labels, batch_size=BATCH_SIZE, epochs=EPOCHS)
###Output
Epoch 1/5
938/938 [==============================] - 4s 2ms/step - loss: 0.5771 - accuracy: 0.8427
Epoch 2/5
938/938 [==============================] - 2s 2ms/step - loss: 0.2159 - accuracy: 0.9371
Epoch 3/5
938/938 [==============================] - 2s 2ms/step - loss: 0.1586 - accuracy: 0.9544
Epoch 4/5
938/938 [==============================] - 2s 2ms/step - loss: 0.1292 - accuracy: 0.9634
Epoch 5/5
938/938 [==============================] - 2s 2ms/step - loss: 0.1082 - accuracy: 0.9691
###Markdown
As the model trains, the loss and accuracy metrics are displayed. With five epochs and a learning rate of 0.1, this fully connected model should achieve an accuracy of approximately 0.97 (or 97%) on the training data. Evaluate accuracy on the test datasetNow that we've trained the model, we can ask it to make predictions about a test set that it hasn't seen before. In this example, the `test_images` array comprises our test dataset. To evaluate accuracy, we can check to see if the model's predictions match the labels from the `test_labels` array. Use the [`evaluate`](https://www.tensorflow.org/api_docs/python/tf/keras/models/Sequential#evaluate) method to evaluate the model on the test dataset!
###Code
'''TODO: Use the evaluate method to test the model!'''
test_loss, test_acc = model.evaluate(test_images, test_labels)
print('Test accuracy:', test_acc)
###Output
313/313 [==============================] - 0s 2ms/step - loss: 0.1003 - accuracy: 0.9697
Test accuracy: 0.9696999788284302
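###Markdown
To watch the train/test gap discussed below develop during training, one optional technique (an addition, not part of the original lab) is to hold out a fraction of the training data as a validation set with the `validation_split` argument of `fit`:
###Code
# Hold out 10% of the training data; Keras then reports val_loss and
# val_accuracy after every epoch, making overfitting visible as it happens
model.fit(train_images, train_labels,
          batch_size=BATCH_SIZE, epochs=EPOCHS,
          validation_split=0.1)
###Output
_____no_output_____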
###Markdown
You may observe that the accuracy on the test dataset is a little lower than the accuracy on the training dataset. This gap between training accuracy and test accuracy is an example of *overfitting*, when a machine learning model performs worse on new data than on its training data. What is the highest accuracy you can achieve with this first fully connected model? Since the handwritten digit classification task is pretty straightforward, you may be wondering how we can do better...![Deeper...](https://i.kym-cdn.com/photos/images/newsfeed/000/534/153/f87.jpg) 1.3 Convolutional Neural Network (CNN) for handwritten digit classification As we saw in lecture, convolutional neural networks (CNNs) are particularly well-suited for a variety of tasks in computer vision, and have achieved near-perfect accuracies on the MNIST dataset. We will now build a CNN composed of two convolutional layers and pooling layers, followed by two fully connected layers, and ultimately output a probability distribution over the 10 digit classes (0-9). The CNN we will be building is depicted below:![alt_text](https://raw.githubusercontent.com/aamini/introtodeeplearning/master/lab2/img/convnet_fig.png "CNN Architecture for MNIST Classification") Define the CNN modelWe'll use the same training and test datasets as before, and proceed similarly to our fully connected network to define and train our new CNN model. To do this we will explore two layers we have not encountered before: you can use [`keras.layers.Conv2D`](https://www.tensorflow.org/api_docs/python/tf/keras/layers/Conv2D) to define convolutional layers and [`keras.layers.MaxPool2D`](https://www.tensorflow.org/api_docs/python/tf/keras/layers/MaxPool2D) to define the pooling layers. Use the parameters shown in the network architecture above to define these layers and build the CNN model.
###Code
def build_cnn_model():
cnn_model = tf.keras.Sequential([
# TODO: Define the first convolutional layer
tf.keras.layers.Conv2D(24, kernel_size=3),
# TODO: Define the first max pooling layer
tf.keras.layers.MaxPool2D(pool_size=(2, 2)),
# TODO: Define the second convolutional layer
tf.keras.layers.Conv2D(36, kernel_size=3),
# TODO: Define the second max pooling layer
tf.keras.layers.MaxPool2D(pool_size=(2, 2)),
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(128, activation=tf.nn.relu),
# TODO: Define the last Dense layer to output the classification
# probabilities. Pay attention to the activation needed for a probability
# output
tf.keras.layers.Dense(10, activation=tf.nn.softmax)
])
return cnn_model
cnn_model = build_cnn_model()
# Initialize the model by passing some data through
cnn_model.predict(train_images[[0]])
# Print the summary of the layers in the model.
print(cnn_model.summary())
###Output
Model: "sequential_1"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
conv2d (Conv2D) (None, 26, 26, 24) 240
_________________________________________________________________
max_pooling2d (MaxPooling2D) (None, 13, 13, 24) 0
_________________________________________________________________
conv2d_1 (Conv2D) (None, 11, 11, 36) 7812
_________________________________________________________________
max_pooling2d_1 (MaxPooling2 (None, 5, 5, 36) 0
_________________________________________________________________
flatten_1 (Flatten) (None, 900) 0
_________________________________________________________________
dense_2 (Dense) (None, 128) 115328
_________________________________________________________________
dense_3 (Dense) (None, 10) 1290
=================================================================
Total params: 124,670
Trainable params: 124,670
Non-trainable params: 0
_________________________________________________________________
None
###Markdown
Train and test the CNN modelNow, as before, we can define the loss function, optimizer, and metrics through the `compile` method. Compile the CNN model with an optimizer and learning rate of choice:
###Code
'''TODO: Define the compile operation with your optimizer and learning rate of choice'''
cnn_model.compile(optimizer=tf.keras.optimizers.Adagrad(learning_rate=1e-3),
loss='sparse_categorical_crossentropy',
metrics=['accuracy']) # Frank: pretty good
# cnn_model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3),
# loss='sparse_categorical_crossentropy',
# metrics=['accuracy'])
###Output
_____no_output_____
###Markdown
As was the case with the fully connected model, we can train our CNN using the `fit` method via the Keras API.
###Code
'''TODO: Use model.fit to train the CNN model, with the same batch_size and number of epochs previously used.'''
cnn_model.fit(train_images, train_labels, batch_size=BATCH_SIZE, epochs=EPOCHS)
###Output
Epoch 1/5
938/938 [==============================] - 3s 3ms/step - loss: 0.0048 - accuracy: 0.9986
Epoch 2/5
938/938 [==============================] - 2s 3ms/step - loss: 0.0048 - accuracy: 0.9986
Epoch 3/5
938/938 [==============================] - 2s 3ms/step - loss: 0.0044 - accuracy: 0.9987
Epoch 4/5
938/938 [==============================] - 2s 3ms/step - loss: 0.0039 - accuracy: 0.9988
Epoch 5/5
938/938 [==============================] - 2s 3ms/step - loss: 0.0046 - accuracy: 0.9987
###Markdown
Great! Now that we've trained the model, let's evaluate it on the test dataset using the [`evaluate`](https://www.tensorflow.org/api_docs/python/tf/keras/models/Sequential#evaluate) method:
###Code
'''TODO: Use the evaluate method to test the model!'''
test_loss, test_acc = cnn_model.evaluate(test_images, test_labels)
print('Test accuracy:', test_acc)
###Output
313/313 [==============================] - 1s 2ms/step - loss: 0.0386 - accuracy: 0.9900
Test accuracy: 0.9900000095367432
###Markdown
What is the highest accuracy you're able to achieve using the CNN model, and how does the accuracy of the CNN model compare to the accuracy of the simple fully connected network? What optimizers and learning rates seem to be optimal for training the CNN model? Make predictions with the CNN modelWith the model trained, we can use it to make predictions about some images. The [`predict`](https://www.tensorflow.org/api_docs/python/tf/keras/models/Sequential#predict) function call generates the output predictions given a set of input samples.
###Code
predictions = cnn_model.predict(test_images)
###Output
_____no_output_____
###Markdown
With this function call, the model has predicted the label for each image in the testing set. Let's take a look at the prediction for the first image in the test dataset:
###Code
predictions[0]
###Output
_____no_output_____
###Markdown
As you can see, a prediction is an array of 10 numbers. Recall that the output of our model is a probability distribution over the 10 digit classes. Thus, these numbers describe the model's "confidence" that the image corresponds to each of the 10 different digits. Let's look at the digit that has the highest confidence for the first image in the test dataset:
###Code
'''TODO: identify the digit with the highest confidence prediction for the first
image in the test dataset. '''
prediction = np.argmax(predictions[0])
print(prediction)
###Output
7
###Markdown
So, the model is most confident that this image is a "???". We can check the test label (remember, this is the true identity of the digit) to see if this prediction is correct:
###Code
print("Label of this digit is:", test_labels[0])
plt.imshow(test_images[0,:,:,0], cmap=plt.cm.binary)
###Output
Label of this digit is: 7
###Markdown
It is! Let's visualize the classification results on the MNIST dataset. We will plot images from the test dataset along with their predicted label, as well as a histogram that provides the prediction probabilities for each of the digits:
###Code
#@title Change the slider to look at the model's predictions! { run: "auto" }
image_index = 30 #@param {type:"slider", min:0, max:100, step:1}
plt.subplot(1,2,1)
mdl.lab2.plot_image_prediction(image_index, predictions, test_labels, test_images)
plt.subplot(1,2,2)
mdl.lab2.plot_value_prediction(image_index, predictions, test_labels)
###Output
_____no_output_____
###Markdown
We can also plot several images along with their predictions, where correct prediction labels are blue and incorrect prediction labels are red. The number gives the percent confidence (out of 100) for the predicted label. Note the model can be very confident in an incorrect prediction!
###Code
# Plots the first X test images, their predicted label, and the true label
# Color correct predictions in blue, incorrect predictions in red
num_rows = 5
num_cols = 4
num_images = num_rows*num_cols
plt.figure(figsize=(2*2*num_cols, 2*num_rows))
for i in range(num_images):
plt.subplot(num_rows, 2*num_cols, 2*i+1)
mdl.lab2.plot_image_prediction(i, predictions, test_labels, test_images)
plt.subplot(num_rows, 2*num_cols, 2*i+2)
mdl.lab2.plot_value_prediction(i, predictions, test_labels)
###Output
_____no_output_____
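###Markdown
Beyond per-image plots, a confusion matrix summarizes which digits get mistaken for which. A short optional sketch (not part of the original lab), assuming `predictions` and `test_labels` from the cells above:
###Code
# Rows are true digits, columns are predicted digits; off-diagonal entries are errors
predicted_labels = np.argmax(predictions, axis=1)
confusion = tf.math.confusion_matrix(test_labels, predicted_labels, num_classes=10)
print(confusion.numpy())
###Output
_____no_output_____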
###Markdown
1.4 Training the model 2.0Earlier in the lab, we used the [`fit`](https://www.tensorflow.org/api_docs/python/tf/keras/models/Sequential#fit) function call to train the model. This function is quite high-level and intuitive, which is really useful for simpler models. As you may be able to tell, this function abstracts away many details of the training call, leaving us with less control over training, which we may want in other contexts. As an alternative, we can use the [`tf.GradientTape`](https://www.tensorflow.org/api_docs/python/tf/GradientTape) class to record differentiation operations during training, and then call the [`tf.GradientTape.gradient`](https://www.tensorflow.org/api_docs/python/tf/GradientTape#gradient) function to actually compute the gradients. You may recall seeing this in Lab 1 Part 1, but let's take another look at this here.We'll use this framework to train our `cnn_model` using stochastic gradient descent.
###Code
# Rebuild the CNN model
cnn_model = build_cnn_model()
batch_size = 12
loss_history = mdl.util.LossHistory(smoothing_factor=0.95) # to record the evolution of the loss
plotter = mdl.util.PeriodicPlotter(sec=2, xlabel='Iterations', ylabel='Loss', scale='semilogy')
optimizer = tf.keras.optimizers.SGD(learning_rate=1e-2) # define our optimizer
if hasattr(tqdm, '_instances'): tqdm._instances.clear() # clear if it exists
for idx in tqdm(range(0, train_images.shape[0], batch_size)):
# First grab a batch of training data and convert the input images to tensors
(images, labels) = (train_images[idx:idx+batch_size], train_labels[idx:idx+batch_size])
images = tf.convert_to_tensor(images, dtype=tf.float32)
# GradientTape to record differentiation operations
with tf.GradientTape() as tape:
#'''TODO: feed the images into the model and obtain the predictions'''
logits = cnn_model(images)
#'''TODO: compute the categorical cross entropy loss
loss_value = tf.keras.backend.sparse_categorical_crossentropy(labels, logits)  # the model's final layer applies softmax, so the default from_logits=False is correct here
loss_history.append(loss_value.numpy().mean()) # append the loss to the loss_history record
plotter.plot(loss_history.get())
# Backpropagation
'''TODO: Use the tape to compute the gradient against all parameters in the CNN model.
Use cnn_model.trainable_variables to access these parameters.'''
grads = tape.gradient(loss_value, cnn_model.trainable_variables)
optimizer.apply_gradients(zip(grads, cnn_model.trainable_variables))
###Output
_____no_output_____
###Markdown
1.5 ConclusionIn this part of the lab, you had the chance to play with different MNIST classifiers with different architectures (fully-connected layers only, CNN), and experiment with how different hyperparameters affect accuracy (learning rate, etc.). The next part of the lab explores another application of CNNs, facial detection, and some drawbacks of AI systems in real world applications, like issues of bias.
###Markdown
Copyright Information
###Code
# Copyright 2021 MIT 6.S191 Introduction to Deep Learning. All Rights Reserved.
#
# Licensed under the MIT License. You may not use this file except in compliance
# with the License. Use and/or modification of this code outside of 6.S191 must
# reference:
#
# © MIT 6.S191: Introduction to Deep Learning
# http://introtodeeplearning.com
#
###Output
_____no_output_____
###Markdown
Laboratory 2: Computer Vision Part 1: MNIST Digit ClassificationIn the first portion of this lab, we will build and train a convolutional neural network (CNN) for classification of handwritten digits from the famous [MNIST](http://yann.lecun.com/exdb/mnist/) dataset. The MNIST dataset consists of 60,000 training images and 10,000 test images. Our classes are the digits 0-9.First, let's download the course repository, install dependencies, and import the relevant packages we'll need for this lab.
###Code
# Import Tensorflow 2.0
%tensorflow_version 2.x
import tensorflow as tf
!pip install mitdeeplearning
import mitdeeplearning as mdl
import matplotlib.pyplot as plt
import numpy as np
import random
from tqdm import tqdm
# Check that we are using a GPU, if not switch runtimes
# using Runtime > Change Runtime Type > GPU
assert len(tf.config.list_physical_devices('GPU')) > 0
###Output
Collecting mitdeeplearning
  Downloading https://files.pythonhosted.org/packages/9d/ad/650eb53c0d9d1213536fe94bc150f89b564ff5ee784bd662272584bb091b/mitdeeplearning-0.2.0.tar.gz (2.1MB)
     |████████████████████████████████| 2.1MB 17.8MB/s
[?25hRequirement already satisfied: numpy in /usr/local/lib/python3.7/dist-packages (from mitdeeplearning) (1.19.5)
Requirement already satisfied: regex in /usr/local/lib/python3.7/dist-packages (from mitdeeplearning) (2019.12.20)
Requirement already satisfied: tqdm in /usr/local/lib/python3.7/dist-packages (from mitdeeplearning) (4.41.1)
Requirement already satisfied: gym in /usr/local/lib/python3.7/dist-packages (from mitdeeplearning) (0.17.3)
Requirement already satisfied: pyglet<=1.5.0,>=1.4.0 in /usr/local/lib/python3.7/dist-packages (from gym->mitdeeplearning) (1.5.0)
Requirement already satisfied: scipy in /usr/local/lib/python3.7/dist-packages (from gym->mitdeeplearning) (1.4.1)
Requirement already satisfied: cloudpickle<1.7.0,>=1.2.0 in /usr/local/lib/python3.7/dist-packages (from gym->mitdeeplearning) (1.3.0)
Requirement already satisfied: future in /usr/local/lib/python3.7/dist-packages (from pyglet<=1.5.0,>=1.4.0->gym->mitdeeplearning) (0.16.0)
Building wheels for collected packages: mitdeeplearning
  Building wheel for mitdeeplearning (setup.py) ... done
Created wheel for mitdeeplearning: filename=mitdeeplearning-0.2.0-cp37-none-any.whl size=2115442 sha256=e711273d15c99ae99c63779fdc9edece376462a28462ed3b897bd64565fc50e1
Stored in directory: /root/.cache/pip/wheels/af/dc/2a/5c3633135e7e4ef4fd31463cfa1942cb1bae7486ab94e7a2ad
Successfully built mitdeeplearning
Installing collected packages: mitdeeplearning
Successfully installed mitdeeplearning-0.2.0
###Markdown
1.1 MNIST dataset Let's download and load the dataset and display a few random samples from it:
###Code
mnist = tf.keras.datasets.mnist
(train_images, train_labels), (test_images, test_labels) = mnist.load_data()
train_images = (np.expand_dims(train_images, axis=-1)/255.).astype(np.float32)
train_labels = (train_labels).astype(np.int64)
test_images = (np.expand_dims(test_images, axis=-1)/255.).astype(np.float32)
test_labels = (test_labels).astype(np.int64)
###Output
_____no_output_____
###Markdown
Our training set is made up of 28x28 grayscale images of handwritten digits. Let's visualize what some of these images and their corresponding training labels look like.
###Code
plt.figure(figsize=(10,10))
random_inds = np.random.choice(60000,36)
for i in range(36):
plt.subplot(6,6,i+1)
plt.xticks([])
plt.yticks([])
plt.grid(False)
image_ind = random_inds[i]
plt.imshow(np.squeeze(train_images[image_ind]), cmap=plt.cm.binary)
plt.xlabel(train_labels[image_ind])
###Output
_____no_output_____
###Markdown
1.2 Neural Network for Handwritten Digit ClassificationWe'll first build a simple neural network consisting of two fully connected layers and apply this to the digit classification task. Our network will ultimately output a probability distribution over the 10 digit classes (0-9). This first architecture we will be building is depicted below:![alt_text](https://raw.githubusercontent.com/aamini/introtodeeplearning/master/lab2/img/mnist_2layers_arch.png "CNN Architecture for MNIST Classification") Fully connected neural network architectureTo define the architecture of this first fully connected neural network, we'll once again use the Keras API and define the model using the [`Sequential`](https://www.tensorflow.org/api_docs/python/tf/keras/models/Sequential) class. Note how we first use a [`Flatten`](https://www.tensorflow.org/api_docs/python/tf/keras/layers/Flatten) layer, which flattens the input so that it can be fed into the model. In this next block, you'll define the fully connected layers of this simple network.
###Code
def build_fc_model():
fc_model = tf.keras.Sequential([
# First define a Flatten layer
tf.keras.layers.Flatten(),
# '''TODO: Define the activation function for the first fully connected (Dense) layer.'''
tf.keras.layers.Dense(128, activation= 'sigmoid'),
tf.keras.layers.Dense(128, activation= 'sigmoid'),
# '''TODO: Define the second Dense layer to output the classification probabilities'''
tf.keras.layers.Dense(10, activation='softmax')
])
return fc_model
model = build_fc_model()
###Output
_____no_output_____
###Markdown
As we progress through this next portion, you may find that you'll want to make changes to the architecture defined above. **Note that in order to update the model later on, you'll need to re-run the above cell to re-initialize the model.** Let's take a step back and think about the network we've just created. The first layer in this network, `tf.keras.layers.Flatten`, transforms the format of the images from a 2d-array (28 x 28 pixels), to a 1d-array of 28 * 28 = 784 pixels. You can think of this layer as unstacking rows of pixels in the image and lining them up. There are no learned parameters in this layer; it only reformats the data.After the pixels are flattened, the network consists of a sequence of two `tf.keras.layers.Dense` layers. These are fully-connected neural layers. The first `Dense` layer has 128 nodes (or neurons). The second (and last) layer (which you've defined!) should return an array of probability scores that sum to 1. Each node contains a score that indicates the probability that the current image belongs to one of the handwritten digit classes.That defines our fully connected model! Compile the modelBefore training the model, we need to define a few more settings. These are added during the model's [`compile`](https://www.tensorflow.org/api_docs/python/tf/keras/models/Sequential#compile) step:* *Loss function* — This defines how we measure how accurate the model is during training. As was covered in lecture, during training we want to minimize this function, which will "steer" the model in the right direction.* *Optimizer* — This defines how the model is updated based on the data it sees and its loss function.* *Metrics* — Here we can define metrics used to monitor the training and testing steps. In this example, we'll look at the *accuracy*, the fraction of the images that are correctly classified.We'll start out by using a stochastic gradient descent (SGD) optimizer initialized with a learning rate of 0.1. Since we are performing a categorical classification task, we'll want to use the [cross entropy loss](https://www.tensorflow.org/api_docs/python/tf/keras/metrics/sparse_categorical_crossentropy).You'll want to experiment with both the choice of optimizer and learning rate and evaluate how these affect the accuracy of the trained model.
###Code
'''TODO: Experiment with different optimizers and learning rates. How do these affect
the accuracy of the trained model? Which optimizers and/or learning rates yield
the best performance?'''
model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=1e-1),
loss='sparse_categorical_crossentropy',
metrics=['accuracy'])
###Output
_____no_output_____
###Markdown
Train the modelWe're now ready to train our model, which will involve feeding the training data (`train_images` and `train_labels`) into the model, and then asking it to learn the associations between images and labels. We'll also need to define the batch size and the number of epochs, or iterations over the MNIST dataset, to use during training. In Lab 1, we saw how we can use `GradientTape` to optimize losses and train models with stochastic gradient descent. After defining the model settings in the `compile` step, we can also accomplish training by calling the [`fit`](https://www.tensorflow.org/api_docs/python/tf/keras/models/Sequential#fit) method on an instance of the `Model` class. We will use this to train our fully connected model.
###Code
# Define the batch size and the number of epochs to use during training
BATCH_SIZE = 64
EPOCHS = 5
model.fit(train_images, train_labels, batch_size=BATCH_SIZE, epochs=EPOCHS)
###Output
Epoch 1/10
938/938 [==============================] - 2s 2ms/step - loss: 0.2536 - accuracy: 0.9255
Epoch 2/10
938/938 [==============================] - 2s 2ms/step - loss: 0.2346 - accuracy: 0.9314
Epoch 3/10
938/938 [==============================] - 2s 2ms/step - loss: 0.2181 - accuracy: 0.9364
Epoch 4/10
938/938 [==============================] - 2s 2ms/step - loss: 0.2034 - accuracy: 0.9417
Epoch 5/10
938/938 [==============================] - 2s 2ms/step - loss: 0.1905 - accuracy: 0.9452
Epoch 6/10
938/938 [==============================] - 2s 2ms/step - loss: 0.1788 - accuracy: 0.9480
Epoch 7/10
938/938 [==============================] - 2s 2ms/step - loss: 0.1682 - accuracy: 0.9508
Epoch 8/10
938/938 [==============================] - 2s 2ms/step - loss: 0.1591 - accuracy: 0.9537
Epoch 9/10
938/938 [==============================] - 2s 2ms/step - loss: 0.1502 - accuracy: 0.9564
Epoch 10/10
938/938 [==============================] - 2s 2ms/step - loss: 0.1423 - accuracy: 0.9586
###Markdown
As the model trains, the loss and accuracy metrics are displayed. With five epochs and a learning rate of 0.1, this fully connected model should achieve an accuracy of approximately 0.97 (or 97%) on the training data. Evaluate accuracy on the test datasetNow that we've trained the model, we can ask it to make predictions about a test set that it hasn't seen before. In this example, the `test_images` array comprises our test dataset. To evaluate accuracy, we can check to see if the model's predictions match the labels from the `test_labels` array. Use the [`evaluate`](https://www.tensorflow.org/api_docs/python/tf/keras/models/Sequential#evaluate) method to evaluate the model on the test dataset!
###Code
'''TODO: Use the evaluate method to test the model!'''
test_loss, test_acc = model.evaluate(test_images, test_labels)
print('Test accuracy:', test_acc)
###Output
313/313 [==============================] - 1s 2ms/step - loss: 0.1444 - accuracy: 0.9579
Test accuracy: 0.9578999876976013
###Markdown
You may observe that the accuracy on the test dataset is a little lower than the accuracy on the training dataset. This gap between training accuracy and test accuracy is an example of *overfitting*, when a machine learning model performs worse on new data than on its training data. What is the highest accuracy you can achieve with this first fully connected model? Since the handwritten digit classification task is pretty straightforward, you may be wondering how we can do better...![Deeper...](https://i.kym-cdn.com/photos/images/newsfeed/000/534/153/f87.jpg) 1.3 Convolutional Neural Network (CNN) for handwritten digit classification As we saw in lecture, convolutional neural networks (CNNs) are particularly well-suited for a variety of tasks in computer vision, and have achieved near-perfect accuracies on the MNIST dataset. We will now build a CNN composed of two convolutional layers and pooling layers, followed by two fully connected layers, and ultimately output a probability distribution over the 10 digit classes (0-9). The CNN we will be building is depicted below:![alt_text](https://raw.githubusercontent.com/aamini/introtodeeplearning/master/lab2/img/convnet_fig.png "CNN Architecture for MNIST Classification") Define the CNN modelWe'll use the same training and test datasets as before, and proceed similarly to our fully connected network to define and train our new CNN model. To do this we will explore two layers we have not encountered before: you can use [`keras.layers.Conv2D`](https://www.tensorflow.org/api_docs/python/tf/keras/layers/Conv2D) to define convolutional layers and [`keras.layers.MaxPool2D`](https://www.tensorflow.org/api_docs/python/tf/keras/layers/MaxPool2D) to define the pooling layers. Use the parameters shown in the network architecture above to define these layers and build the CNN model.
###Code
def build_cnn_model():
cnn_model = tf.keras.Sequential([
# TODO: Define the first convolutional layer
tf.keras.layers.Conv2D(filters=24, kernel_size=(3,3)),
# TODO: Define the first max pooling layer
tf.keras.layers.MaxPool2D(pool_size=(2, 2)),
# TODO: Define the second convolutional layer
tf.keras.layers.Conv2D(36, (3,3)),
# TODO: Define the second max pooling layer
tf.keras.layers.MaxPool2D(pool_size=(2, 2)),
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(128, activation=tf.nn.relu),
# TODO: Define the last Dense layer to output the classification
# probabilities. Pay attention to the activation needed for a probability
# output
tf.keras.layers.Dense(10, activation='softmax')
])
return cnn_model
cnn_model = build_cnn_model()
# Initialize the model by passing some data through
cnn_model.predict(train_images[[0]])
# Print the summary of the layers in the model.
print(cnn_model.summary())
###Output
Model: "sequential_5"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
conv2d_2 (Conv2D) (None, 26, 26, 24) 240
_________________________________________________________________
max_pooling2d_2 (MaxPooling2 (None, 13, 13, 24) 0
_________________________________________________________________
conv2d_3 (Conv2D) (None, 11, 11, 36) 7812
_________________________________________________________________
max_pooling2d_3 (MaxPooling2 (None, 5, 5, 36) 0
_________________________________________________________________
flatten_5 (Flatten) (None, 900) 0
_________________________________________________________________
dense_13 (Dense) (None, 128) 115328
_________________________________________________________________
dense_14 (Dense) (None, 10) 1290
=================================================================
Total params: 124,670
Trainable params: 124,670
Non-trainable params: 0
_________________________________________________________________
None
###Markdown
Train and test the CNN modelNow, as before, we can define the loss function, optimizer, and metrics through the `compile` method. Compile the CNN model with an optimizer and learning rate of choice:
###Code
'''TODO: Define the compile operation with your optimizer and learning rate of choice'''
cnn_model.compile(optimizer='adam', loss='sparse_categorical_crossentropy',
metrics=['accuracy']) # TODO
###Output
_____no_output_____
###Markdown
As was the case with the fully connected model, we can train our CNN using the `fit` method via the Keras API.
###Code
'''TODO: Use model.fit to train the CNN model, with the same batch_size and number of epochs previously used.'''
cnn_model.fit(train_images, train_labels, batch_size=BATCH_SIZE, epochs=EPOCHS)
###Output
1875/1875 [==============================] - 5s 2ms/step - loss: 0.1637 - accuracy: 0.9517
###Markdown
Great! Now that we've trained the model, let's evaluate it on the test dataset using the [`evaluate`](https://www.tensorflow.org/api_docs/python/tf/keras/models/Sequential#evaluate) method:
###Code
'''TODO: Use the evaluate method to test the model!'''
test_loss, test_acc = cnn_model.evaluate(test_images, test_labels)
print('Test accuracy:', test_acc)
###Output
313/313 [==============================] - 1s 2ms/step - loss: 0.0468 - accuracy: 0.9855
Test accuracy: 0.9854999780654907
###Markdown
What is the highest accuracy you're able to achieve using the CNN model, and how does the accuracy of the CNN model compare to the accuracy of the simple fully connected network? What optimizers and learning rates seem to be optimal for training the CNN model? Make predictions with the CNN modelWith the model trained, we can use it to make predictions about some images. The [`predict`](https://www.tensorflow.org/api_docs/python/tf/keras/models/Sequential#predict) function call generates the output predictions given a set of input samples.
###Code
predictions = cnn_model.predict(test_images)
###Output
_____no_output_____
###Markdown
With this function call, the model has predicted the label for each image in the testing set. Let's take a look at the prediction for the first image in the test dataset:
###Code
predictions[0]
###Output
_____no_output_____
###Markdown
As you can see, a prediction is an array of 10 numbers. Recall that the output of our model is a probability distribution over the 10 digit classes. Thus, these numbers describe the model's "confidence" that the image corresponds to each of the 10 different digits. Let's look at the digit that has the highest confidence for the first image in the test dataset:
###Code
'''TODO: identify the digit with the highest confidence prediction for the first
image in the test dataset. '''
prediction = np.argmax(predictions[0])
print(prediction)
###Output
7
###Markdown
So, the model is most confident that this image is a "???". We can check the test label (remember, this is the true identity of the digit) to see if this prediction is correct:
###Code
print("Label of this digit is:", test_labels[0])
plt.imshow(test_images[0,:,:,0], cmap=plt.cm.binary)
###Output
Label of this digit is: 7
###Markdown
It is! Let's visualize the classification results on the MNIST dataset. We will plot images from the test dataset along with their predicted label, as well as a histogram that provides the prediction probabilities for each of the digits:
###Code
#@title Change the slider to look at the model's predictions! { run: "auto" }
image_index = 100 #@param {type:"slider", min:0, max:100, step:1}
plt.subplot(1,2,1)
mdl.lab2.plot_image_prediction(image_index, predictions, test_labels, test_images)
plt.subplot(1,2,2)
mdl.lab2.plot_value_prediction(image_index, predictions, test_labels)
###Output
_____no_output_____
###Markdown
We can also plot several images along with their predictions, where correct prediction labels are blue and incorrect prediction labels are red. The number gives the percent confidence (out of 100) for the predicted label. Note the model can be very confident in an incorrect prediction!
###Code
# Plots the first X test images, their predicted label, and the true label
# Color correct predictions in blue, incorrect predictions in red
num_rows = 5
num_cols = 4
num_images = num_rows*num_cols
plt.figure(figsize=(2*2*num_cols, 2*num_rows))
for i in range(num_images):
plt.subplot(num_rows, 2*num_cols, 2*i+1)
mdl.lab2.plot_image_prediction(i, predictions, test_labels, test_images)
plt.subplot(num_rows, 2*num_cols, 2*i+2)
mdl.lab2.plot_value_prediction(i, predictions, test_labels)
###Output
_____no_output_____
###Markdown
1.4 Training the model 2.0Earlier in the lab, we used the [`fit`](https://www.tensorflow.org/api_docs/python/tf/keras/models/Sequential#fit) function call to train the model. This function is quite high-level and intuitive, which is really useful for simpler models. As you may be able to tell, this function abstracts away many details of the training call, leaving us with less control over training, which we may want in other contexts. As an alternative, we can use the [`tf.GradientTape`](https://www.tensorflow.org/api_docs/python/tf/GradientTape) class to record differentiation operations during training, and then call the [`tf.GradientTape.gradient`](https://www.tensorflow.org/api_docs/python/tf/GradientTape#gradient) function to actually compute the gradients. You may recall seeing this in Lab 1 Part 1, but let's take another look at this here.We'll use this framework to train our `cnn_model` using stochastic gradient descent.
###Code
# Rebuild the CNN model
cnn_model = build_cnn_model()
batch_size = 12
loss_history = mdl.util.LossHistory(smoothing_factor=0.95) # to record the evolution of the loss
plotter = mdl.util.PeriodicPlotter(sec=2, xlabel='Iterations', ylabel='Loss', scale='semilogy')
optimizer = tf.keras.optimizers.SGD(learning_rate=1e-2) # define our optimizer
if hasattr(tqdm, '_instances'): tqdm._instances.clear() # clear if it exists
for idx in tqdm(range(0, train_images.shape[0], batch_size)):
# First grab a batch of training data and convert the input images to tensors
(images, labels) = (train_images[idx:idx+batch_size], train_labels[idx:idx+batch_size])
images = tf.convert_to_tensor(images, dtype=tf.float32)
# GradientTape to record differentiation operations
with tf.GradientTape() as tape:
#'''TODO: feed the images into the model and obtain the predictions'''
logits = cnn_model(images)
#'''TODO: compute the categorical cross entropy loss
loss_value = tf.keras.backend.sparse_categorical_crossentropy(labels, logits) # TODO
loss_history.append(loss_value.numpy().mean()) # append the loss to the loss_history record
plotter.plot(loss_history.get())
# Backpropagation
'''TODO: Use the tape to compute the gradient against all parameters in the CNN model.
Use cnn_model.trainable_variables to access these parameters.'''
grads = tape.gradient(loss_value, cnn_model.trainable_variables)
optimizer.apply_gradients(zip(grads, cnn_model.trainable_variables))
###Output
_____no_output_____
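###Markdown
One optional refinement of the loop above (not required by the lab) is to wrap the per-batch step in `tf.function`, which traces the Python code into a TensorFlow graph and typically speeds up training. A minimal sketch, assuming `cnn_model` and `optimizer` from the cell above:
###Code
# Sketch: graph-compiled training step via tf.function
@tf.function
def train_step(images, labels):
    with tf.GradientTape() as tape:
        probs = cnn_model(images)
        loss_value = tf.keras.backend.sparse_categorical_crossentropy(labels, probs)
    grads = tape.gradient(loss_value, cnn_model.trainable_variables)
    optimizer.apply_gradients(zip(grads, cnn_model.trainable_variables))
    return tf.reduce_mean(loss_value)

# One pass over the training data using the compiled step
for idx in range(0, train_images.shape[0], batch_size):
    images = tf.convert_to_tensor(train_images[idx:idx + batch_size], dtype=tf.float32)
    loss = train_step(images, train_labels[idx:idx + batch_size])
###Output
_____no_output_____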
###Markdown
Copyright Information
###Code
# Copyright 2020 MIT 6.S191 Introduction to Deep Learning. All Rights Reserved.
#
# Licensed under the MIT License. You may not use this file except in compliance
# with the License. Use and/or modification of this code outside of 6.S191 must
# reference:
#
# © MIT 6.S191: Introduction to Deep Learning
# http://introtodeeplearning.com
#
###Output
_____no_output_____
###Markdown
Laboratory 2: Computer Vision Part 1: MNIST Digit ClassificationIn the first portion of this lab, we will build and train a convolutional neural network (CNN) for classification of handwritten digits from the famous [MNIST](http://yann.lecun.com/exdb/mnist/) dataset. The MNIST dataset consists of 60,000 training images and 10,000 test images. Our classes are the digits 0-9.First, let's download the course repository, install dependencies, and import the relevant packages we'll need for this lab.
###Code
# Import Tensorflow 2.0
%tensorflow_version 2.x
import tensorflow as tf
!pip install mitdeeplearning
import mitdeeplearning as mdl
import matplotlib.pyplot as plt
import numpy as np
import random
from tqdm import tqdm
# Check that we are using a GPU, if not switch runtimes
# using Runtime > Change Runtime Type > GPU
assert len(tf.config.list_physical_devices('GPU')) > 0
###Output
_____no_output_____
###Markdown
1.1 MNIST dataset Let's download and load the dataset and display a few random samples from it:
###Code
mnist = tf.keras.datasets.mnist
(train_images, train_labels), (test_images, test_labels) = mnist.load_data()
train_images = (np.expand_dims(train_images, axis=-1)/255.).astype(np.float32)
train_labels = (train_labels).astype(np.int64)
test_images = (np.expand_dims(test_images, axis=-1)/255.).astype(np.float32)
test_labels = (test_labels).astype(np.int64)
###Output
_____no_output_____
###Markdown
Our training set is made up of 28x28 grayscale images of handwritten digits. Let's visualize what some of these images and their corresponding training labels look like.
###Code
plt.figure(figsize=(10,10))
random_inds = np.random.choice(60000,36)
for i in range(36):
plt.subplot(6,6,i+1)
plt.xticks([])
plt.yticks([])
plt.grid(False)
image_ind = random_inds[i]
plt.imshow(np.squeeze(train_images[image_ind]), cmap=plt.cm.binary)
plt.xlabel(train_labels[image_ind])
###Output
_____no_output_____
###Markdown
1.2 Neural Network for Handwritten Digit ClassificationWe'll first build a simple neural network consisting of two fully connected layers and apply this to the digit classification task. Our network will ultimately output a probability distribution over the 10 digit classes (0-9). This first architecture we will be building is depicted below:![alt_text](https://raw.githubusercontent.com/aamini/introtodeeplearning/master/lab2/img/mnist_2layers_arch.png "CNN Architecture for MNIST Classification") Fully connected neural network architectureTo define the architecture of this first fully connected neural network, we'll once again use the Keras API and define the model using the [`Sequential`](https://www.tensorflow.org/api_docs/python/tf/keras/models/Sequential) class. Note how we first use a [`Flatten`](https://www.tensorflow.org/api_docs/python/tf/keras/layers/Flatten) layer, which flattens the input so that it can be fed into the model. In this next block, you'll define the fully connected layers of this simple network.
###Code
def build_fc_model():
fc_model = tf.keras.Sequential([
# First define a Flatten layer
tf.keras.layers.Flatten(),
# '''TODO: Define the activation function for the first fully connected (Dense) layer.'''
tf.keras.layers.Dense(128, activation='relu'),
# '''TODO: Define the second Dense layer to output the classification probabilities'''
tf.keras.layers.Dense(10, activation='softmax')
])
return fc_model
model = build_fc_model()
###Output
_____no_output_____
###Markdown
As we progress through this next portion, you may find that you'll want to make changes to the architecture defined above. **Note that in order to update the model later on, you'll need to re-run the above cell to re-initialize the model.** Let's take a step back and think about the network we've just created. The first layer in this network, `tf.keras.layers.Flatten`, transforms the format of the images from a 2d-array (28 x 28 pixels), to a 1d-array of 28 * 28 = 784 pixels. You can think of this layer as unstacking rows of pixels in the image and lining them up. There are no learned parameters in this layer; it only reformats the data.After the pixels are flattened, the network consists of a sequence of two `tf.keras.layers.Dense` layers. These are fully-connected neural layers. The first `Dense` layer has 128 nodes (or neurons). The second (and last) layer (which you've defined!) should return an array of probability scores that sum to 1. Each node contains a score that indicates the probability that the current image belongs to one of the handwritten digit classes.That defines our fully connected model! Compile the modelBefore training the model, we need to define a few more settings. These are added during the model's [`compile`](https://www.tensorflow.org/api_docs/python/tf/keras/models/Sequential#compile) step:* *Loss function* — This defines how we measure how accurate the model is during training. As was covered in lecture, during training we want to minimize this function, which will "steer" the model in the right direction.* *Optimizer* — This defines how the model is updated based on the data it sees and its loss function.* *Metrics* — Here we can define metrics used to monitor the training and testing steps. In this example, we'll look at the *accuracy*, the fraction of the images that are correctly classified.We'll start out by using a stochastic gradient descent (SGD) optimizer initialized with a learning rate of 0.1. Since we are performing a categorical classification task, we'll want to use the [cross entropy loss](https://www.tensorflow.org/api_docs/python/tf/keras/metrics/sparse_categorical_crossentropy).You'll want to experiment with both the choice of optimizer and learning rate and evaluate how these affect the accuracy of the trained model.
###Code
'''TODO: Experiment with different optimizers and learning rates. How do these affect
the accuracy of the trained model? Which optimizers and/or learning rates yield
the best performance?'''
model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=1e-1),
loss='sparse_categorical_crossentropy',
metrics=['accuracy'])
###Output
_____no_output_____
###Markdown
Train the modelWe're now ready to train our model, which will involve feeding the training data (`train_images` and `train_labels`) into the model, and then asking it to learn the associations between images and labels. We'll also need to define the batch size and the number of epochs, or iterations over the MNIST dataset, to use during training. In Lab 1, we saw how we can use `GradientTape` to optimize losses and train models with stochastic gradient descent. After defining the model settings in the `compile` step, we can also accomplish training by calling the [`fit`](https://www.tensorflow.org/api_docs/python/tf/keras/models/Sequentialfit) method on an instance of the `Model` class. We will use this to train our fully connected model
###Code
# Define the batch size and the number of epochs to use during training
BATCH_SIZE = 64
EPOCHS = 5
model.fit(train_images, train_labels, batch_size=BATCH_SIZE, epochs=EPOCHS)
###Output
_____no_output_____
###Markdown
As the model trains, the loss and accuracy metrics are displayed. With five epochs and a learning rate of 0.1, this fully connected model should achieve an accuracy of approximately 0.97 (or 97%) on the training data. Evaluate accuracy on the test dataset. Now that we've trained the model, we can ask it to make predictions about a test set that it hasn't seen before. In this example, the `test_images` array comprises our test dataset. To evaluate accuracy, we can check to see if the model's predictions match the labels from the `test_labels` array. Use the [`evaluate`](https://www.tensorflow.org/api_docs/python/tf/keras/models/Sequentialevaluate) method to evaluate the model on the test dataset!
###Code
'''TODO: Use the evaluate method to test the model!'''
test_loss, test_acc = model.evaluate(test_images, test_labels) # TODO
print('Test accuracy:', test_acc)
###Output
_____no_output_____
###Markdown
You may observe that the accuracy on the test dataset is a little lower than the accuracy on the training dataset. This gap between training accuracy and test accuracy is an example of *overfitting*, when a machine learning model performs worse on new data than on its training data. What is the highest accuracy you can achieve with this first fully connected model? Since the handwritten digit classification task is pretty straightforward, you may be wondering how we can do better...![Deeper...](https://i.kym-cdn.com/photos/images/newsfeed/000/534/153/f87.jpg) 1.3 Convolutional Neural Network (CNN) for handwritten digit classification As we saw in lecture, convolutional neural networks (CNNs) are particularly well-suited for a variety of tasks in computer vision, and have achieved near-perfect accuracies on the MNIST dataset. We will now build a CNN composed of two convolutional layers and pooling layers, followed by two fully connected layers, and ultimately output a probability distribution over the 10 digit classes (0-9). The CNN we will be building is depicted below:![alt_text](https://raw.githubusercontent.com/aamini/introtodeeplearning/master/lab2/img/convnet_fig.png "CNN Architecture for MNIST Classification") Define the CNN modelWe'll use the same training and test datasets as before, and proceed similarly as our fully connected network to define and train our new CNN model. To do this we will explore two layers we have not encountered before: you can use [`keras.layers.Conv2D` ](https://www.tensorflow.org/api_docs/python/tf/keras/layers/Conv2D) to define convolutional layers and [`keras.layers.MaxPool2D`](https://www.tensorflow.org/api_docs/python/tf/keras/layers/MaxPool2D) to define the pooling layers. Use the parameters shown in the network architecture above to define these layers and build the CNN model.
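Before assembling the full model, here is a small, illustrative sketch (values chosen only for demonstration) of how these two new layer types transform tensor shapes:
###Code
import tensorflow as tf

# A 3x3 convolution with default 'valid' padding shrinks each spatial
# dimension by 2; a 2x2 max pooling layer then halves it.
x = tf.zeros((1, 28, 28, 1))
x = tf.keras.layers.Conv2D(24, 3, activation='relu')(x)
print(x.shape) # (1, 26, 26, 24)
x = tf.keras.layers.MaxPool2D(pool_size=(2, 2))(x)
print(x.shape) # (1, 13, 13, 24)
###Output
_____no_output_____
###Markdown
With these building blocks in mind, define the CNN: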
###Code
def build_cnn_model():
cnn_model = tf.keras.Sequential([
# TODO: Define the first convolutional layer
tf.keras.layers.Conv2D(filters=24, kernel_size=(3,3), activation=tf.nn.relu), # 24 filters of size 3x3 (one reasonable choice)
# TODO: Define the first max pooling layer
tf.keras.layers.MaxPool2D(pool_size=(2,2)),
# TODO: Define the second convolutional layer
tf.keras.layers.Conv2D(filters=24, kernel_size=(3,3), activation=tf.nn.relu),
# TODO: Define the second max pooling layer
tf.keras.layers.MaxPool2D(pool_size=(2,2)),
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(128, activation=tf.nn.relu),
# TODO: Define the last Dense layer to output the classification
# probabilities. Pay attention to the activation needed for a probability
# output
tf.keras.layers.Dense(10, activation=tf.nn.softmax)
])
return cnn_model
cnn_model = build_cnn_model()
# Initialize the model by passing some data through
cnn_model.predict(train_images[[0]])
# Print the summary of the layers in the model.
print(cnn_model.summary())
###Output
_____no_output_____
###Markdown
Train and test the CNN modelNow, as before, we can define the loss function, optimizer, and metrics through the `compile` method. Compile the CNN model with an optimizer and learning rate of choice:
###Code
'''TODO: Define the compile operation with your optimizer and learning rate of choice'''
cnn_model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=1e-1),
                  loss='sparse_categorical_crossentropy',
                  metrics=['accuracy']) # TODO
###Output
_____no_output_____
###Markdown
As was the case with the fully connected model, we can train our CNN using the `fit` method via the Keras API.
###Code
'''TODO: Use model.fit to train the CNN model, with the same batch_size and number of epochs previously used.'''
cnn_model.fit(train_images, train_labels, batch_size=BATCH_SIZE, epochs=EPOCHS) # TODO
###Output
_____no_output_____
###Markdown
Great! Now that we've trained the model, let's evaluate it on the test dataset using the [`evaluate`](https://www.tensorflow.org/api_docs/python/tf/keras/models/Sequentialevaluate) method:
###Code
'''TODO: Use the evaluate method to test the model!'''
test_loss, test_acc = cnn_model.evaluate(test_images, test_labels) # TODO
print('Test accuracy:', test_acc)
###Output
_____no_output_____
###Markdown
What is the highest accuracy you're able to achieve using the CNN model, and how does the accuracy of the CNN model compare to the accuracy of the simple fully connected network? What optimizers and learning rates seem to be optimal for training the CNN model? Make predictions with the CNN modelWith the model trained, we can use it to make predictions about some images. The [`predict`](https://www.tensorflow.org/api_docs/python/tf/keras/models/Sequentialpredict) function call generates the output predictions given a set of input samples.
###Code
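# predict returns one length-10 probability vector per test image, shape (10000, 10)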
predictions = cnn_model.predict(test_images)
###Output
_____no_output_____
###Markdown
With this function call, the model has predicted the label for each image in the testing set. Let's take a look at the prediction for the first image in the test dataset:
###Code
predictions[0]
###Output
_____no_output_____
###Markdown
As you can see, a prediction is an array of 10 numbers. Recall that the output of our model is a probability distribution over the 10 digit classes. Thus, these numbers describe the model's "confidence" that the image corresponds to each of the 10 different digits. Let's look at the digit that has the highest confidence for the first image in the test dataset:
###Code
'''TODO: identify the digit with the highest confidence prediction for the first
image in the test dataset. '''
prediction = np.argmax(predictions[0]) # TODO: index of the highest-confidence class
print(prediction)
###Output
_____no_output_____
###Markdown
So, the model is most confident that this image is a "???". We can check the test label (remember, this is the true identity of the digit) to see if this prediction is correct:
###Code
print("Label of this digit is:", test_labels[0])
plt.imshow(test_images[0,:,:,0], cmap=plt.cm.binary)
###Output
_____no_output_____
###Markdown
It is! Let's visualize the classification results on the MNIST dataset. We will plot images from the test dataset along with their predicted label, as well as a histogram that provides the prediction probabilities for each of the digits:
###Code
#@title Change the slider to look at the model's predictions! { run: "auto" }
image_index = 79 #@param {type:"slider", min:0, max:100, step:1}
plt.subplot(1,2,1)
mdl.lab2.plot_image_prediction(image_index, predictions, test_labels, test_images)
plt.subplot(1,2,2)
mdl.lab2.plot_value_prediction(image_index, predictions, test_labels)
###Output
_____no_output_____
###Markdown
We can also plot several images along with their predictions, where correct prediction labels are blue and incorrect prediction labels are red. The number gives the percent confidence (out of 100) for the predicted label. Note the model can be very confident in an incorrect prediction!
###Code
# Plots the first X test images, their predicted label, and the true label
# Color correct predictions in blue, incorrect predictions in red
num_rows = 5
num_cols = 4
num_images = num_rows*num_cols
plt.figure(figsize=(2*2*num_cols, 2*num_rows))
for i in range(num_images):
plt.subplot(num_rows, 2*num_cols, 2*i+1)
mdl.lab2.plot_image_prediction(i, predictions, test_labels, test_images)
plt.subplot(num_rows, 2*num_cols, 2*i+2)
mdl.lab2.plot_value_prediction(i, predictions, test_labels)
###Output
_____no_output_____
###Markdown
1.4 Training the model 2.0. Earlier in the lab, we used the [`fit`](https://www.tensorflow.org/api_docs/python/tf/keras/models/Sequentialfit) function call to train the model. This function is quite high-level and intuitive, which is really useful for simpler models. As you may be able to tell, this function abstracts away many details of the training call, so we have less fine-grained control over training; that finer control could be useful in other contexts. As an alternative, we can use the [`tf.GradientTape`](https://www.tensorflow.org/api_docs/python/tf/GradientTape) class to record differentiation operations during training, and then call the [`tf.GradientTape.gradient`](https://www.tensorflow.org/api_docs/python/tf/GradientTapegradient) function to actually compute the gradients. You may recall seeing this in Lab 1 Part 1, but let's take another look at it here. We'll use this framework to train our `cnn_model` using stochastic gradient descent.
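As a quick refresher before the full training loop, here is a minimal, self-contained sketch (illustrative only) of how `tf.GradientTape` records operations and computes a gradient:
###Code
import tensorflow as tf

# Compute dy/dx for y = x^2 at x = 3.0; the tape records the forward pass
x = tf.Variable(3.0)
with tf.GradientTape() as tape:
    y = x * x
dy_dx = tape.gradient(y, x)
print(dy_dx.numpy()) # 6.0
###Output
_____no_output_____
###Markdown
Now let's apply the same pattern to train the CNN: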
###Code
# Rebuild the CNN model
cnn_model = build_cnn_model()
batch_size = 12
loss_history = mdl.util.LossHistory(smoothing_factor=0.95) # to record the evolution of the loss
plotter = mdl.util.PeriodicPlotter(sec=2, xlabel='Iterations', ylabel='Loss', scale='semilogy')
optimizer = tf.keras.optimizers.SGD(learning_rate=1e-2) # define our optimizer
if hasattr(tqdm, '_instances'): tqdm._instances.clear() # clear if it exists
for idx in tqdm(range(0, train_images.shape[0], batch_size)):
# First grab a batch of training data and convert the input images to tensors
(images, labels) = (train_images[idx:idx+batch_size], train_labels[idx:idx+batch_size])
images = tf.convert_to_tensor(images, dtype=tf.float32)
# GradientTape to record differentiation operations
with tf.GradientTape() as tape:
#'''TODO: feed the images into the model and obtain the predictions'''
logits = cnn_model(images) # TODO
#'''TODO: compute the categorical cross entropy loss
loss_value = tf.keras.backend.sparse_categorical_crossentropy(labels, logits) # TODO
loss_history.append(loss_value.numpy().mean()) # append the loss to the loss_history record
plotter.plot(loss_history.get())
# Backpropagation
'''TODO: Use the tape to compute the gradient against all parameters in the CNN model.
Use cnn_model.trainable_variables to access these parameters.'''
grads = tape.gradient(loss_value, cnn_model.trainable_variables) # TODO
optimizer.apply_gradients(zip(grads, cnn_model.trainable_variables))
###Output
_____no_output_____
###Markdown
Laboratory 2: Computer Vision Part 1: MNIST Digit ClassificationIn the first portion of this lab, we will build and train a convolutional neural network (CNN) for classification of handwritten digits from the famous [MNIST](http://yann.lecun.com/exdb/mnist/) dataset. The MNIST dataset consists of 60,000 training images and 10,000 test images. Our classes are the digits 0-9.First, let's download the course repository, install dependencies, and import the relevant packages we'll need for this lab.
###Code
# Import Tensorflow 2.0
%tensorflow_version 2.x
import tensorflow as tf
!pip install mitdeeplearning
import mitdeeplearning as mdl
import matplotlib.pyplot as plt
import numpy as np
import random
from tqdm import tqdm
# Check that we are using a GPU, if not switch runtimes
# using Runtime > Change Runtime Type > GPU
assert len(tf.config.list_physical_devices('GPU')) > 0
###Output
Requirement already satisfied: mitdeeplearning in /usr/local/lib/python3.6/dist-packages (0.1.2)
Requirement already satisfied: regex in /usr/local/lib/python3.6/dist-packages (from mitdeeplearning) (2019.12.20)
Requirement already satisfied: tqdm in /usr/local/lib/python3.6/dist-packages (from mitdeeplearning) (4.41.1)
Requirement already satisfied: gym in /usr/local/lib/python3.6/dist-packages (from mitdeeplearning) (0.17.3)
Requirement already satisfied: numpy in /usr/local/lib/python3.6/dist-packages (from mitdeeplearning) (1.19.4)
Requirement already satisfied: pyglet<=1.5.0,>=1.4.0 in /usr/local/lib/python3.6/dist-packages (from gym->mitdeeplearning) (1.5.0)
Requirement already satisfied: scipy in /usr/local/lib/python3.6/dist-packages (from gym->mitdeeplearning) (1.4.1)
Requirement already satisfied: cloudpickle<1.7.0,>=1.2.0 in /usr/local/lib/python3.6/dist-packages (from gym->mitdeeplearning) (1.3.0)
Requirement already satisfied: future in /usr/local/lib/python3.6/dist-packages (from pyglet<=1.5.0,>=1.4.0->gym->mitdeeplearning) (0.16.0)
###Markdown
1.1 MNIST dataset Let's download and load the dataset and display a few random samples from it:
###Code
mnist = tf.keras.datasets.mnist
(train_images, train_labels), (test_images, test_labels) = mnist.load_data()
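# Add a channel dimension and rescale pixel intensities from [0, 255] to [0, 1]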
train_images = (np.expand_dims(train_images, axis=-1)/255.).astype(np.float32)
train_labels = (train_labels).astype(np.int64)
test_images = (np.expand_dims(test_images, axis=-1)/255.).astype(np.float32)
test_labels = (test_labels).astype(np.int64)
###Output
_____no_output_____
###Markdown
Our training set is made up of 28x28 grayscale images of handwritten digits. Let's visualize what some of these images and their corresponding training labels look like.
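It can help to first confirm the array shapes produced by the preprocessing above (a quick, illustrative sanity check):
###Code
# Sanity check: 60,000 training and 10,000 test images, each 28x28 with one channel
print(train_images.shape) # (60000, 28, 28, 1)
print(train_labels.shape) # (60000,)
print(test_images.shape) # (10000, 28, 28, 1)
###Output
_____no_output_____
###Markdown
Now let's plot a few random samples: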
###Code
plt.figure(figsize=(10,10))
random_inds = np.random.choice(60000,36)
for i in range(36):
plt.subplot(6,6,i+1)
plt.xticks([])
plt.yticks([])
plt.grid(False)
image_ind = random_inds[i]
plt.imshow(np.squeeze(train_images[image_ind]), cmap=plt.cm.binary)
plt.xlabel(train_labels[image_ind])
###Output
_____no_output_____
###Markdown
1.2 Neural Network for Handwritten Digit Classification. We'll first build a simple neural network consisting of two fully connected layers and apply this to the digit classification task. Our network will ultimately output a probability distribution over the 10 digit classes (0-9). This first architecture we will be building is depicted below:![alt_text](https://raw.githubusercontent.com/aamini/introtodeeplearning/master/lab2/img/mnist_2layers_arch.png "CNN Architecture for MNIST Classification") Fully connected neural network architecture. To define the architecture of this first fully connected neural network, we'll once again use the Keras API and define the model using the [`Sequential`](https://www.tensorflow.org/api_docs/python/tf/keras/models/Sequential) class. Note how we first use a [`Flatten`](https://www.tensorflow.org/api_docs/python/tf/keras/layers/Flatten) layer, which flattens the input so that it can be fed into the model. In this next block, you'll define the fully connected layers of this simple network.
###Code
def build_fc_model():
fc_model = tf.keras.Sequential([
# First define a Flatten layer
tf.keras.layers.Flatten(),
# '''output = activation(dot(input, kernel) + bias)'''
tf.keras.layers.Dense(128, activation='relu'),
# '''TODO: Define the second Dense layer to output the classification probabilities'''
tf.keras.layers.Dense(10, activation='softmax') # softmax so the 10 outputs form a probability distribution
])
return fc_model
model = build_fc_model()
###Output
_____no_output_____
###Markdown
As we progress through this next portion, you may find that you'll want to make changes to the architecture defined above. **Note that in order to update the model later on, you'll need to re-run the above cell to re-initialize the model. ** Let's take a step back and think about the network we've just created. The first layer in this network, `tf.keras.layers.Flatten`, transforms the format of the images from a 2d-array (28 x 28 pixels), to a 1d-array of 28 * 28 = 784 pixels. You can think of this layer as unstacking rows of pixels in the image and lining them up. There are no learned parameters in this layer; it only reformats the data.After the pixels are flattened, the network consists of a sequence of two `tf.keras.layers.Dense` layers. These are fully-connected neural layers. The first `Dense` layer has 128 nodes (or neurons). The second (and last) layer (which you've defined!) should return an array of probability scores that sum to 1. Each node contains a score that indicates the probability that the current image belongs to one of the handwritten digit classes.That defines our fully connected model! Compile the modelBefore training the model, we need to define a few more settings. These are added during the model's [`compile`](https://www.tensorflow.org/api_docs/python/tf/keras/models/Sequentialcompile) step:* *Loss function* — This defines how we measure how accurate the model is during training. As was covered in lecture, during training we want to minimize this function, which will "steer" the model in the right direction.* *Optimizer* — This defines how the model is updated based on the data it sees and its loss function.* *Metrics* — Here we can define metrics used to monitor the training and testing steps. In this example, we'll look at the *accuracy*, the fraction of the images that are correctly classified.We'll start out by using a stochastic gradient descent (SGD) optimizer initialized with a learning rate of 0.1. Since we are performing a categorical classification task, we'll want to use the [cross entropy loss](https://www.tensorflow.org/api_docs/python/tf/keras/metrics/sparse_categorical_crossentropy).You'll want to experiment with both the choice of optimizer and learning rate and evaluate how these affect the accuracy of the trained model.
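As a side note, sparse categorical cross entropy takes integer class labels directly (no one-hot encoding) and equals the negative log of the probability assigned to the true class. A minimal, illustrative sketch:
###Code
import tensorflow as tf

# Loss for a single example whose true class (label 3) receives probability 0.7
labels = tf.constant([3])
probs = tf.constant([[0.05, 0.05, 0.05, 0.7, 0.05, 0.02, 0.02, 0.02, 0.02, 0.02]])
loss = tf.keras.backend.sparse_categorical_crossentropy(labels, probs)
print(loss.numpy()) # ~[0.357], i.e. -log(0.7)
###Output
_____no_output_____
###Markdown
Now compile the model: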
###Code
'''TODO: Experiment with different optimizers and learning rates. How do these affect
the accuracy of the trained model? Which optimizers and/or learning rates yield
the best performance?'''
model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=1e-1),
loss='sparse_categorical_crossentropy',
metrics=['accuracy'])
###Output
_____no_output_____
###Markdown
Train the modelWe're now ready to train our model, which will involve feeding the training data (`train_images` and `train_labels`) into the model, and then asking it to learn the associations between images and labels. We'll also need to define the batch size and the number of epochs, or iterations over the MNIST dataset, to use during training. In Lab 1, we saw how we can use `GradientTape` to optimize losses and train models with stochastic gradient descent. After defining the model settings in the `compile` step, we can also accomplish training by calling the [`fit`](https://www.tensorflow.org/api_docs/python/tf/keras/models/Sequentialfit) method on an instance of the `Model` class. We will use this to train our fully connected model
###Code
# Define the batch size and the number of epochs to use during training
BATCH_SIZE = 64
EPOCHS = 5
model.fit(train_images, train_labels, batch_size=BATCH_SIZE, epochs=EPOCHS)
###Output
Epoch 1/5
938/938 [==============================] - 4s 2ms/step - loss: 0.5866 - accuracy: 0.8393
Epoch 2/5
938/938 [==============================] - 2s 2ms/step - loss: 0.2138 - accuracy: 0.9391
Epoch 3/5
938/938 [==============================] - 2s 2ms/step - loss: 0.1568 - accuracy: 0.9551
Epoch 4/5
938/938 [==============================] - 2s 2ms/step - loss: 0.1263 - accuracy: 0.9640
Epoch 5/5
938/938 [==============================] - 2s 2ms/step - loss: 0.1078 - accuracy: 0.9700
###Markdown
As the model trains, the loss and accuracy metrics are displayed. With five epochs and a learning rate of 0.1, this fully connected model should achieve an accuracy of approximately 0.97 (or 97%) on the training data. Evaluate accuracy on the test dataset. Now that we've trained the model, we can ask it to make predictions about a test set that it hasn't seen before. In this example, the `test_images` array comprises our test dataset. To evaluate accuracy, we can check to see if the model's predictions match the labels from the `test_labels` array. Use the [`evaluate`](https://www.tensorflow.org/api_docs/python/tf/keras/models/Sequentialevaluate) method to evaluate the model on the test dataset!
###Code
'''TODO: Use the evaluate method to test the model!'''
test_loss, test_acc = model.evaluate(x=test_images, y=test_labels)
print('Test accuracy:', test_acc)
###Output
313/313 [==============================] - 1s 2ms/step - loss: 0.1075 - accuracy: 0.9685
Test accuracy: 0.968500018119812
###Markdown
You may observe that the accuracy on the test dataset is a little lower than the accuracy on the training dataset. This gap between training accuracy and test accuracy is an example of *overfitting*, when a machine learning model performs worse on new data than on its training data. What is the highest accuracy you can achieve with this first fully connected model? Since the handwritten digit classification task is pretty straightforward, you may be wondering how we can do better...![Deeper...](https://i.kym-cdn.com/photos/images/newsfeed/000/534/153/f87.jpg) 1.3 Convolutional Neural Network (CNN) for handwritten digit classification As we saw in lecture, convolutional neural networks (CNNs) are particularly well-suited for a variety of tasks in computer vision, and have achieved near-perfect accuracies on the MNIST dataset. We will now build a CNN composed of two convolutional layers and pooling layers, followed by two fully connected layers, and ultimately output a probability distribution over the 10 digit classes (0-9). The CNN we will be building is depicted below:![alt_text](https://raw.githubusercontent.com/aamini/introtodeeplearning/master/lab2/img/convnet_fig.png "CNN Architecture for MNIST Classification") Define the CNN modelWe'll use the same training and test datasets as before, and proceed similarly as our fully connected network to define and train our new CNN model. To do this we will explore two layers we have not encountered before: you can use [`keras.layers.Conv2D` ](https://www.tensorflow.org/api_docs/python/tf/keras/layers/Conv2D) to define convolutional layers and [`keras.layers.MaxPool2D`](https://www.tensorflow.org/api_docs/python/tf/keras/layers/MaxPool2D) to define the pooling layers. Use the parameters shown in the network architecture above to define these layers and build the CNN model.
###Code
def build_cnn_model():
cnn_model = tf.keras.Sequential([
# TODO: Define the first convolutional layer
tf.keras.layers.Conv2D(24,3,activation='relu',input_shape=(28,28,1)),
# TODO: Define the first max pooling layer
tf.keras.layers.MaxPool2D(pool_size=(2, 2)),
# TODO: Define the second convolutional layer
tf.keras.layers.Conv2D(24,3,activation='relu'),
# TODO: Define the second max pooling layer
tf.keras.layers.MaxPool2D(pool_size=(2, 2)),
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(128, activation=tf.nn.relu),
# TODO: Define the last Dense layer to output the classification
# probabilities. Pay attention to the activation needed for a probability
# output
tf.keras.layers.Dense(10, activation=tf.nn.softmax), # softmax yields a valid probability distribution
])
return cnn_model
cnn_model = build_cnn_model()
# Initialize the model by passing some data through
cnn_model.predict(train_images[[0]])
# Print the summary of the layers in the model.
print(cnn_model.summary())
###Output
Model: "sequential_2"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
conv2d_4 (Conv2D) (None, 26, 26, 24) 240
_________________________________________________________________
max_pooling2d_4 (MaxPooling2 (None, 13, 13, 24) 0
_________________________________________________________________
conv2d_5 (Conv2D) (None, 11, 11, 24) 5208
_________________________________________________________________
max_pooling2d_5 (MaxPooling2 (None, 5, 5, 24) 0
_________________________________________________________________
flatten_2 (Flatten) (None, 600) 0
_________________________________________________________________
dense_4 (Dense) (None, 128) 76928
_________________________________________________________________
dense_5 (Dense) (None, 10) 1290
=================================================================
Total params: 83,666
Trainable params: 83,666
Non-trainable params: 0
_________________________________________________________________
None
###Markdown
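As a quick check on the summary above, a `Conv2D` layer has `kernel_height * kernel_width * input_channels * filters` weights plus one bias per filter; the sketch below (illustrative) reproduces the two convolutional parameter counts:
###Code
# First conv layer: 3x3 kernel, 1 input channel, 24 filters (+ 24 biases)
print(3 * 3 * 1 * 24 + 24) # 240
# Second conv layer: 3x3 kernel, 24 input channels, 24 filters (+ 24 biases)
print(3 * 3 * 24 * 24 + 24) # 5208
###Output
_____no_output_____
###Markdown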
Train and test the CNN modelNow, as before, we can define the loss function, optimizer, and metrics through the `compile` method. Compile the CNN model with an optimizer and learning rate of choice:
###Code
'''TODO: Define the compile operation with your optimizer and learning rate of choice'''
cnn_model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=1e-1),
loss='sparse_categorical_crossentropy',
metrics=['accuracy'])
###Output
_____no_output_____
###Markdown
As was the case with the fully connected model, we can train our CNN using the `fit` method via the Keras API.
###Code
'''TODO: Use model.fit to train the CNN model, with the same batch_size and number of epochs previously used.'''
BATCH_SIZE = 64
EPOCHS = 5
cnn_model.fit(train_images, train_labels, batch_size=BATCH_SIZE, epochs=EPOCHS)
###Output
Epoch 1/5
938/938 [==============================] - 3s 3ms/step - loss: 0.7679 - accuracy: 0.7556
Epoch 2/5
938/938 [==============================] - 3s 3ms/step - loss: 0.0876 - accuracy: 0.9724
Epoch 3/5
938/938 [==============================] - 3s 3ms/step - loss: 0.0609 - accuracy: 0.9807
Epoch 4/5
938/938 [==============================] - 3s 3ms/step - loss: 0.0467 - accuracy: 0.9857
Epoch 5/5
938/938 [==============================] - 3s 3ms/step - loss: 0.0370 - accuracy: 0.9882
###Markdown
Great! Now that we've trained the model, let's evaluate it on the test dataset using the [`evaluate`](https://www.tensorflow.org/api_docs/python/tf/keras/models/Sequentialevaluate) method:
###Code
test_loss, test_acc = cnn_model.evaluate(x=test_images, y=test_labels)
print('Test accuracy:', test_acc)
###Output
313/313 [==============================] - 1s 2ms/step - loss: 0.0372 - accuracy: 0.9873
Test accuracy: 0.9872999787330627
###Markdown
What is the highest accuracy you're able to achieve using the CNN model, and how does the accuracy of the CNN model compare to the accuracy of the simple fully connected network? What optimizers and learning rates seem to be optimal for training the CNN model? Make predictions with the CNN modelWith the model trained, we can use it to make predictions about some images. The [`predict`](https://www.tensorflow.org/api_docs/python/tf/keras/models/Sequentialpredict) function call generates the output predictions given a set of input samples.
###Code
predictions = cnn_model.predict(test_images)
###Output
_____no_output_____
###Markdown
With this function call, the model has predicted the label for each image in the testing set. Let's take a look at the prediction for the first image in the test dataset:
###Code
predictions[0]
###Output
_____no_output_____
###Markdown
As you can see, a prediction is an array of 10 numbers. Recall that the output of our model is a probability distribution over the 10 digit classes. Thus, these numbers describe the model's "confidence" that the image corresponds to each of the 10 different digits. Let's look at the digit that has the highest confidence for the first image in the test dataset:
###Code
'''TODO: identify the digit with the highest confidence prediction for the first
image in the test dataset. '''
prediction = np.argmax(predictions[0]) # index of the highest-confidence class
print(prediction)
###Output
7
###Markdown
So, the model is most confident that this image is a "???". We can check the test label (remember, this is the true identity of the digit) to see if this prediction is correct:
###Code
print("Label of this digit is:", test_labels[0])
plt.imshow(test_images[0,:,:,0], cmap=plt.cm.binary)
###Output
Label of this digit is: 7
###Markdown
It is! Let's visualize the classification results on the MNIST dataset. We will plot images from the test dataset along with their predicted label, as well as a histogram that provides the prediction probabilities for each of the digits:
###Code
#@title Change the slider to look at the model's predictions! { run: "auto" }
image_index = 61 #@param {type:"slider", min:0, max:100, step:1}
plt.subplot(1,2,1)
mdl.lab2.plot_image_prediction(image_index, predictions, test_labels, test_images)
plt.subplot(1,2,2)
mdl.lab2.plot_value_prediction(image_index, predictions, test_labels)
###Output
_____no_output_____
###Markdown
We can also plot several images along with their predictions, where correct prediction labels are blue and incorrect prediction labels are red. The number gives the percent confidence (out of 100) for the predicted label. Note the model can be very confident in an incorrect prediction!
###Code
# Plots the first X test images, their predicted label, and the true label
# Color correct predictions in blue, incorrect predictions in red
num_rows = 5
num_cols = 4
num_images = num_rows*num_cols
plt.figure(figsize=(2*2*num_cols, 2*num_rows))
for i in range(num_images):
plt.subplot(num_rows, 2*num_cols, 2*i+1)
mdl.lab2.plot_image_prediction(i, predictions, test_labels, test_images)
plt.subplot(num_rows, 2*num_cols, 2*i+2)
mdl.lab2.plot_value_prediction(i, predictions, test_labels)
###Output
_____no_output_____
###Markdown
1.4 Training the model 2.0. Earlier in the lab, we used the [`fit`](https://www.tensorflow.org/api_docs/python/tf/keras/models/Sequentialfit) function call to train the model. This function is quite high-level and intuitive, which is really useful for simpler models. As you may be able to tell, this function abstracts away many details of the training call, so we have less fine-grained control over training; that finer control could be useful in other contexts. As an alternative, we can use the [`tf.GradientTape`](https://www.tensorflow.org/api_docs/python/tf/GradientTape) class to record differentiation operations during training, and then call the [`tf.GradientTape.gradient`](https://www.tensorflow.org/api_docs/python/tf/GradientTapegradient) function to actually compute the gradients. You may recall seeing this in Lab 1 Part 1, but let's take another look at it here. We'll use this framework to train our `cnn_model` using stochastic gradient descent.
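To make the update step concrete, here is a minimal, single-variable sketch (illustrative only) of how an optimizer applies gradients produced by the tape:
###Code
import tensorflow as tf

# One hand-rolled SGD step: w <- w - lr * dL/dw, for L = (w - 5)^2
w = tf.Variable(0.0)
opt = tf.keras.optimizers.SGD(learning_rate=0.1)
with tf.GradientTape() as tape:
    loss = (w - 5.0) ** 2
grads = tape.gradient(loss, [w])
opt.apply_gradients(zip(grads, [w]))
print(w.numpy()) # 1.0, since dL/dw = 2*(0 - 5) = -10 and 0 - 0.1*(-10) = 1.0
###Output
_____no_output_____
###Markdown
Now the full training loop: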
###Code
# Rebuild the CNN model
cnn_model = build_cnn_model()
batch_size = 12
loss_history = mdl.util.LossHistory(smoothing_factor=0.95) # to record the evolution of the loss
plotter = mdl.util.PeriodicPlotter(sec=2, xlabel='Iterations', ylabel='Loss', scale='semilogy')
optimizer = tf.keras.optimizers.SGD(learning_rate=1e-2) # define our optimizer
if hasattr(tqdm, '_instances'): tqdm._instances.clear() # clear if it exists
for idx in tqdm(range(0, train_images.shape[0], batch_size)):
# First grab a batch of training data and convert the input images to tensors
(images, labels) = (train_images[idx:idx+batch_size], train_labels[idx:idx+batch_size])
images = tf.convert_to_tensor(images, dtype=tf.float32)
# GradientTape to record differentiation operations
with tf.GradientTape() as tape:
#'''TODO: feed the images into the model and obtain the predictions'''
logits = cnn_model(images)
#'''TODO: compute the categorical cross entropy loss
loss_value = tf.keras.backend.sparse_categorical_crossentropy(labels, logits) # TODO
loss_history.append(loss_value.numpy().mean()) # append the loss to the loss_history record
plotter.plot(loss_history.get())
# Backpropagation
'''TODO: Use the tape to compute the gradient against all parameters in the CNN model.
Use cnn_model.trainable_variables to access these parameters.'''
grads = tape.gradient(loss_value, cnn_model.trainable_variables)
optimizer.apply_gradients(zip(grads, cnn_model.trainable_variables))
###Output
_____no_output_____
###Markdown
Visit MIT Deep Learning Run in Google Colab View Source on GitHub Copyright Information
###Code
# Copyright 2020 MIT 6.S191 Introduction to Deep Learning. All Rights Reserved.
#
# Licensed under the MIT License. You may not use this file except in compliance
# with the License. Use and/or modification of this code outside of 6.S191 must
# reference:
#
# © MIT 6.S191: Introduction to Deep Learning
# http://introtodeeplearning.com
#
###Output
_____no_output_____
###Markdown
Laboratory 2: Computer Vision Part 1: MNIST Digit ClassificationIn the first portion of this lab, we will build and train a convolutional neural network (CNN) for classification of handwritten digits from the famous [MNIST](http://yann.lecun.com/exdb/mnist/) dataset. The MNIST dataset consists of 60,000 training images and 10,000 test images. Our classes are the digits 0-9.First, let's download the course repository, install dependencies, and import the relevant packages we'll need for this lab.
###Code
# Import Tensorflow 2.0
%tensorflow_version 2.x
import tensorflow as tf
!pip install mitdeeplearning
import mitdeeplearning as mdl
import matplotlib.pyplot as plt
import numpy as np
import random
from tqdm import tqdm
# Check that we are using a GPU, if not switch runtimes
# using Runtime > Change Runtime Type > GPU
assert len(tf.config.list_physical_devices('GPU')) > 0
###Output
TensorFlow 2.x selected.
Collecting mitdeeplearning
  Downloading https://files.pythonhosted.org/packages/8b/3b/b9174b68dc10832356d02a2d83a64b43a24f1762c172754407d22fc8f960/mitdeeplearning-0.1.2.tar.gz (2.1MB)
Requirement already satisfied: numpy in /tensorflow-2.1.0/python3.6 (from mitdeeplearning) (1.18.1)
Requirement already satisfied: regex in /usr/local/lib/python3.6/dist-packages (from mitdeeplearning) (2019.12.20)
Requirement already satisfied: tqdm in /usr/local/lib/python3.6/dist-packages (from mitdeeplearning) (4.28.1)
Requirement already satisfied: gym in /usr/local/lib/python3.6/dist-packages (from mitdeeplearning) (0.15.6)
Requirement already satisfied: pyglet<=1.5.0,>=1.4.0 in /usr/local/lib/python3.6/dist-packages (from gym->mitdeeplearning) (1.4.10)
Requirement already satisfied: six in /tensorflow-2.1.0/python3.6 (from gym->mitdeeplearning) (1.14.0)
Requirement already satisfied: scipy in /tensorflow-2.1.0/python3.6 (from gym->mitdeeplearning) (1.4.1)
Requirement already satisfied: cloudpickle~=1.2.0 in /usr/local/lib/python3.6/dist-packages (from gym->mitdeeplearning) (1.2.2)
Requirement already satisfied: future in /usr/local/lib/python3.6/dist-packages (from pyglet<=1.5.0,>=1.4.0->gym->mitdeeplearning) (0.16.0)
Building wheels for collected packages: mitdeeplearning
  Building wheel for mitdeeplearning (setup.py) ... done
Created wheel for mitdeeplearning: filename=mitdeeplearning-0.1.2-cp36-none-any.whl size=2114586 sha256=0299eb306304570c0bc8bfad1df201437bcb8abe2312406c15048a12afa835a3
Stored in directory: /root/.cache/pip/wheels/27/e1/73/5f01c787621d8a3c857f59876c79e304b9b64db9ff5bd61b74
Successfully built mitdeeplearning
Installing collected packages: mitdeeplearning
Successfully installed mitdeeplearning-0.1.2
###Markdown
1.1 MNIST dataset Let's download and load the dataset and display a few random samples from it:
###Code
mnist = tf.keras.datasets.mnist
(train_images, train_labels), (test_images, test_labels) = mnist.load_data()
train_images = (np.expand_dims(train_images, axis=-1)/255.).astype(np.float32)
train_labels = (train_labels).astype(np.int64)
test_images = (np.expand_dims(test_images, axis=-1)/255.).astype(np.float32)
test_labels = (test_labels).astype(np.int64)
###Output
Downloading data from https://storage.googleapis.com/tensorflow/tf-keras-datasets/mnist.npz
11493376/11490434 [==============================] - 1s 0us/step
###Markdown
Our training set is made up of 28x28 grayscale images of handwritten digits. Let's visualize what some of these images and their corresponding training labels look like.
###Code
plt.figure(figsize=(10,10))
random_inds = np.random.choice(60000,36)
for i in range(36):
plt.subplot(6,6,i+1)
plt.xticks([])
plt.yticks([])
plt.grid(False)
image_ind = random_inds[i]
plt.imshow(np.squeeze(train_images[image_ind]), cmap=plt.cm.binary)
plt.xlabel(train_labels[image_ind])
###Output
_____no_output_____
###Markdown
1.2 Neural Network for Handwritten Digit Classification. We'll first build a simple neural network consisting of two fully connected layers and apply this to the digit classification task. Our network will ultimately output a probability distribution over the 10 digit classes (0-9). This first architecture we will be building is depicted below:![alt_text](https://raw.githubusercontent.com/aamini/introtodeeplearning/master/lab2/img/mnist_2layers_arch.png "CNN Architecture for MNIST Classification") Fully connected neural network architecture. To define the architecture of this first fully connected neural network, we'll once again use the Keras API and define the model using the [`Sequential`](https://www.tensorflow.org/api_docs/python/tf/keras/models/Sequential) class. Note how we first use a [`Flatten`](https://www.tensorflow.org/api_docs/python/tf/keras/layers/Flatten) layer, which flattens the input so that it can be fed into the model. In this next block, you'll define the fully connected layers of this simple network.
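Since the final layer must produce a valid probability distribution over the 10 classes, here is a minimal, illustrative sketch showing that a softmax-activated `Dense` layer outputs scores that sum to 1:
###Code
import tensorflow as tf

# Softmax maps arbitrary inputs to scores in [0, 1] that sum to 1
x = tf.random.normal((1, 784))
probs = tf.keras.layers.Dense(10, activation='softmax')(x)
print(float(tf.reduce_sum(probs))) # ~1.0
###Output
_____no_output_____
###Markdown
Now define the model: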
###Code
def build_fc_model():
fc_model = tf.keras.Sequential([
# First define a Flatten layer
tf.keras.layers.Flatten(),
# '''TODO: Define the activation function for the first fully connected (Dense) layer.'''
tf.keras.layers.Dense(128, activation= tf.nn.relu),
# '''TODO: Define the second Dense layer to output the classification probabilities'''
#'''TODO: Dense layer to output classification probabilities'''
tf.keras.layers.Dense(10, activation= tf.nn.softmax)
])
return fc_model
model = build_fc_model()
###Output
_____no_output_____
###Markdown
As we progress through this next portion, you may find that you'll want to make changes to the architecture defined above. **Note that in order to update the model later on, you'll need to re-run the above cell to re-initialize the model. ** Let's take a step back and think about the network we've just created. The first layer in this network, `tf.keras.layers.Flatten`, transforms the format of the images from a 2d-array (28 x 28 pixels), to a 1d-array of 28 * 28 = 784 pixels. You can think of this layer as unstacking rows of pixels in the image and lining them up. There are no learned parameters in this layer; it only reformats the data.After the pixels are flattened, the network consists of a sequence of two `tf.keras.layers.Dense` layers. These are fully-connected neural layers. The first `Dense` layer has 128 nodes (or neurons). The second (and last) layer (which you've defined!) should return an array of probability scores that sum to 1. Each node contains a score that indicates the probability that the current image belongs to one of the handwritten digit classes.That defines our fully connected model! Compile the modelBefore training the model, we need to define a few more settings. These are added during the model's [`compile`](https://www.tensorflow.org/api_docs/python/tf/keras/models/Sequentialcompile) step:* *Loss function* — This defines how we measure how accurate the model is during training. As was covered in lecture, during training we want to minimize this function, which will "steer" the model in the right direction.* *Optimizer* — This defines how the model is updated based on the data it sees and its loss function.* *Metrics* — Here we can define metrics used to monitor the training and testing steps. In this example, we'll look at the *accuracy*, the fraction of the images that are correctly classified.We'll start out by using a stochastic gradient descent (SGD) optimizer initialized with a learning rate of 0.1. Since we are performing a categorical classification task, we'll want to use the [cross entropy loss](https://www.tensorflow.org/api_docs/python/tf/keras/metrics/sparse_categorical_crossentropy).You'll want to experiment with both the choice of optimizer and learning rate and evaluate how these affect the accuracy of the trained model.
###Code
'''TODO: Experiment with different optimizers and learning rates. How do these affect
the accuracy of the trained model? Which optimizers and/or learning rates yield
the best performance?'''
model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=1e-1),
loss='sparse_categorical_crossentropy',
metrics=['accuracy'])
###Output
_____no_output_____
###Markdown
Train the modelWe're now ready to train our model, which will involve feeding the training data (`train_images` and `train_labels`) into the model, and then asking it to learn the associations between images and labels. We'll also need to define the batch size and the number of epochs, or iterations over the MNIST dataset, to use during training. In Lab 1, we saw how we can use `GradientTape` to optimize losses and train models with stochastic gradient descent. After defining the model settings in the `compile` step, we can also accomplish training by calling the [`fit`](https://www.tensorflow.org/api_docs/python/tf/keras/models/Sequentialfit) method on an instance of the `Model` class. We will use this to train our fully connected model
###Code
# Define the batch size and the number of epochs to use during training
BATCH_SIZE = 64
EPOCHS = 5
model.fit(train_images, train_labels, batch_size=BATCH_SIZE, epochs=EPOCHS)
###Output
Train on 60000 samples
Epoch 1/5
60000/60000 [==============================] - 2s 41us/sample - loss: 0.3712 - accuracy: 0.8969
Epoch 2/5
60000/60000 [==============================] - 2s 37us/sample - loss: 0.1986 - accuracy: 0.9439
Epoch 3/5
60000/60000 [==============================] - 2s 35us/sample - loss: 0.1482 - accuracy: 0.9573
Epoch 4/5
60000/60000 [==============================] - 2s 37us/sample - loss: 0.1186 - accuracy: 0.9657
Epoch 5/5
60000/60000 [==============================] - 2s 40us/sample - loss: 0.0998 - accuracy: 0.9718
###Markdown
As the model trains, the loss and accuracy metrics are displayed. With five epochs and a learning rate of 0.1, this fully connected model should achieve an accuracy of approximately 0.97 (or 97%) on the training data. Evaluate accuracy on the test dataset. Now that we've trained the model, we can ask it to make predictions about a test set that it hasn't seen before. In this example, the `test_images` array comprises our test dataset. To evaluate accuracy, we can check to see if the model's predictions match the labels from the `test_labels` array. Use the [`evaluate`](https://www.tensorflow.org/api_docs/python/tf/keras/models/Sequentialevaluate) method to evaluate the model on the test dataset!
###Code
'''TODO: Use the evaluate method to test the model!'''
test_loss, test_acc = model.evaluate(test_images, test_labels) # TODO
print('Test accuracy:', test_acc)
###Output
10000/10000 [==============================] - 1s 60us/sample - loss: 0.1042 - accuracy: 0.9688
Test accuracy: 0.9688
###Markdown
You may observe that the accuracy on the test dataset is a little lower than the accuracy on the training dataset. This gap between training accuracy and test accuracy is an example of *overfitting*, when a machine learning model performs worse on new data than on its training data. What is the highest accuracy you can achieve with this first fully connected model? Since the handwritten digit classification task is pretty straightforward, you may be wondering how we can do better...![Deeper...](https://i.kym-cdn.com/photos/images/newsfeed/000/534/153/f87.jpg) 1.3 Convolutional Neural Network (CNN) for handwritten digit classification As we saw in lecture, convolutional neural networks (CNNs) are particularly well-suited for a variety of tasks in computer vision, and have achieved near-perfect accuracies on the MNIST dataset. We will now build a CNN composed of two convolutional layers and pooling layers, followed by two fully connected layers, and ultimately output a probability distribution over the 10 digit classes (0-9). The CNN we will be building is depicted below:![alt_text](https://raw.githubusercontent.com/aamini/introtodeeplearning/master/lab2/img/convnet_fig.png "CNN Architecture for MNIST Classification") Define the CNN modelWe'll use the same training and test datasets as before, and proceed similarly as our fully connected network to define and train our new CNN model. To do this we will explore two layers we have not encountered before: you can use [`keras.layers.Conv2D` ](https://www.tensorflow.org/api_docs/python/tf/keras/layers/Conv2D) to define convolutional layers and [`keras.layers.MaxPool2D`](https://www.tensorflow.org/api_docs/python/tf/keras/layers/MaxPool2D) to define the pooling layers. Use the parameters shown in the network architecture above to define these layers and build the CNN model.
###Code
def build_cnn_model():
cnn_model = tf.keras.Sequential([
# TODO: Define the first convolutional layer
tf.keras.layers.Conv2D(16, 2, activation=tf.nn.relu), #'''TODO'''
# TODO: Define the first max pooling layer
tf.keras.layers.MaxPool2D(pool_size=(2,2)), # '''TODO'''
# TODO: Define the second convolutional layer
tf.keras.layers.Conv2D(64, 2,activation=tf.nn.relu), #'''TODO'''
# TODO: Define the second max pooling layer
tf.keras.layers.MaxPool2D(pool_size=(2, 2)), #'''TODO'''
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(128, activation=tf.nn.relu),
# TODO: Define the last Dense layer to output the classification
# probabilities. Pay attention to the activation needed for a probability
# output
#'''TODO: Dense layer to output classification probabilities'''
tf.keras.layers.Dense(10, activation = tf.nn.softmax)
])
return cnn_model
cnn_model = build_cnn_model()
# Initialize the model by passing some data through
cnn_model.predict(train_images[[0]])
# Print the summary of the layers in the model.
print(cnn_model.summary())
###Output
Model: "sequential_7"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
conv2d_6 (Conv2D) multiple 80
_________________________________________________________________
max_pooling2d_6 (MaxPooling2 multiple 0
_________________________________________________________________
conv2d_7 (Conv2D) multiple 4160
_________________________________________________________________
max_pooling2d_7 (MaxPooling2 multiple 0
_________________________________________________________________
flatten_8 (Flatten) multiple 0
_________________________________________________________________
dense_15 (Dense) multiple 295040
_________________________________________________________________
dense_16 (Dense) multiple 1290
=================================================================
Total params: 300,570
Trainable params: 300,570
Non-trainable params: 0
_________________________________________________________________
None
###Markdown
Train and test the CNN modelNow, as before, we can define the loss function, optimizer, and metrics through the `compile` method. Compile the CNN model with an optimizer and learning rate of choice:
###Code
'''TODO: Define the compile operation with your optimizer and learning rate of choice'''
cnn_model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=1e-1),
                  loss='sparse_categorical_crossentropy',
                  metrics=['accuracy']) # TODO
###Output
_____no_output_____
###Markdown
As was the case with the fully connected model, we can train our CNN using the `fit` method via the Keras API.
###Code
'''TODO: Use model.fit to train the CNN model, with the same batch_size and number of epochs previously used.'''
BATCH_SIZE = 128
EPOCHS = 10
cnn_model.fit(train_images, train_labels, batch_size=BATCH_SIZE, epochs=EPOCHS)
###Output
Train on 60000 samples
Epoch 1/10
60000/60000 [==============================] - 2s 34us/sample - loss: 0.0488 - accuracy: 0.9845
Epoch 2/10
60000/60000 [==============================] - 2s 34us/sample - loss: 0.0338 - accuracy: 0.9891
Epoch 3/10
60000/60000 [==============================] - 2s 34us/sample - loss: 0.0285 - accuracy: 0.9912
Epoch 4/10
60000/60000 [==============================] - 2s 34us/sample - loss: 0.0244 - accuracy: 0.9925
Epoch 5/10
60000/60000 [==============================] - 2s 34us/sample - loss: 0.0215 - accuracy: 0.9935
Epoch 6/10
60000/60000 [==============================] - 2s 34us/sample - loss: 0.0178 - accuracy: 0.9944
Epoch 7/10
60000/60000 [==============================] - 2s 34us/sample - loss: 0.0166 - accuracy: 0.9948
Epoch 8/10
60000/60000 [==============================] - 2s 34us/sample - loss: 0.0140 - accuracy: 0.9961
Epoch 9/10
60000/60000 [==============================] - 2s 34us/sample - loss: 0.0121 - accuracy: 0.9966
Epoch 10/10
60000/60000 [==============================] - 2s 36us/sample - loss: 0.0107 - accuracy: 0.9969
###Markdown
Great! Now that we've trained the model, let's evaluate it on the test dataset using the [`evaluate`](https://www.tensorflow.org/api_docs/python/tf/keras/models/Sequentialevaluate) method:
###Code
'''TODO: Use the evaluate method to test the model!'''
test_loss, test_acc = cnn_model.evaluate(test_images, test_labels, verbose=False)# TODO
print('Test accuracy:', test_acc)
###Output
Test accuracy: 0.9919
###Markdown
What is the highest accuracy you're able to achieve using the CNN model, and how does the accuracy of the CNN model compare to the accuracy of the simple fully connected network? What optimizers and learning rates seem to be optimal for training the CNN model? Make predictions with the CNN modelWith the model trained, we can use it to make predictions about some images. The [`predict`](https://www.tensorflow.org/api_docs/python/tf/keras/models/Sequentialpredict) function call generates the output predictions given a set of input samples.
###Code
predictions = cnn_model.predict(test_images)
###Output
_____no_output_____
###Markdown
With this function call, the model has predicted the label for each image in the testing set. Let's take a look at the prediction for the first image in the test dataset:
###Code
predictions[0]
###Output
_____no_output_____
###Markdown
As you can see, a prediction is an array of 10 numbers. Recall that the output of our model is a probability distribution over the 10 digit classes. Thus, these numbers describe the model's "confidence" that the image corresponds to each of the 10 different digits. Let's look at the digit that has the highest confidence for the first image in the test dataset:
###Code
'''TODO: identify the digit with the highest confidence prediction for the first
image in the test dataset. '''
prediction = np.argmax(predictions[0]) # index of the highest-confidence class
print(prediction)
###Output
7
###Markdown
So, the model is most confident that this image is a "???". We can check the test label (remember, this is the true identity of the digit) to see if this prediction is correct:
###Code
print("Label of this digit is:", test_labels[0])
plt.imshow(test_images[0,:,:,0], cmap=plt.cm.binary)
###Output
Label of this digit is: 7
###Markdown
It is! Let's visualize the classification results on the MNIST dataset. We will plot images from the test dataset along with their predicted label, as well as a histogram that provides the prediction probabilities for each of the digits:
###Code
#@title Change the slider to look at the model's predictions! { run: "auto" }
image_index = 42 #@param {type:"slider", min:0, max:100, step:1}
plt.subplot(1,2,1)
mdl.lab2.plot_image_prediction(image_index, predictions, test_labels, test_images)
plt.subplot(1,2,2)
mdl.lab2.plot_value_prediction(image_index, predictions, test_labels)
###Output
_____no_output_____
###Markdown
We can also plot several images along with their predictions, where correct prediction labels are blue and incorrect prediction labels are red. The number gives the percent confidence (out of 100) for the predicted label. Note the model can be very confident in an incorrect prediction!
###Code
# Plots the first X test images, their predicted label, and the true label
# Color correct predictions in blue, incorrect predictions in red
num_rows = 5
num_cols = 4
num_images = num_rows*num_cols
plt.figure(figsize=(2*2*num_cols, 2*num_rows))
for i in range(num_images):
plt.subplot(num_rows, 2*num_cols, 2*i+1)
mdl.lab2.plot_image_prediction(i, predictions, test_labels, test_images)
plt.subplot(num_rows, 2*num_cols, 2*i+2)
mdl.lab2.plot_value_prediction(i, predictions, test_labels)
###Output
_____no_output_____
###Markdown
1.4 Training the model 2.0. Earlier in the lab, we used the [`fit`](https://www.tensorflow.org/api_docs/python/tf/keras/models/Sequentialfit) function call to train the model. This function is quite high-level and intuitive, which is really useful for simpler models. As you may be able to tell, this function abstracts away many details of the training call, so we have less fine-grained control over training; that finer control could be useful in other contexts. As an alternative, we can use the [`tf.GradientTape`](https://www.tensorflow.org/api_docs/python/tf/GradientTape) class to record differentiation operations during training, and then call the [`tf.GradientTape.gradient`](https://www.tensorflow.org/api_docs/python/tf/GradientTapegradient) function to actually compute the gradients. You may recall seeing this in Lab 1 Part 1, but let's take another look at it here. We'll use this framework to train our `cnn_model` using stochastic gradient descent.
###Code
# Rebuild the CNN model
cnn_model = build_cnn_model()
batch_size = 12
loss_history = mdl.util.LossHistory(smoothing_factor=0.95) # to record the evolution of the loss
plotter = mdl.util.PeriodicPlotter(sec=2, xlabel='Iterations', ylabel='Loss', scale='semilogy')
optimizer = tf.keras.optimizers.SGD(learning_rate=1e-2) # define our optimizer
if hasattr(tqdm, '_instances'): tqdm._instances.clear() # clear if it exists
for idx in tqdm(range(0, train_images.shape[0], batch_size)):
# First grab a batch of training data and convert the input images to tensors
(images, labels) = (train_images[idx:idx+batch_size], train_labels[idx:idx+batch_size])
images = tf.convert_to_tensor(images, dtype=tf.float32)
# GradientTape to record differentiation operations
with tf.GradientTape() as tape:
#'''TODO: feed the images into the model and obtain the predictions'''
logits = cnn_model(images)# TODO
#'''TODO: compute the categorical cross entropy loss
loss_value = tf.keras.backend.sparse_categorical_crossentropy(labels, logits) # TODO
loss_history.append(loss_value.numpy().mean()) # append the loss to the loss_history record
plotter.plot(loss_history.get())
# Backpropagation
'''TODO: Use the tape to compute the gradient against all parameters in the CNN model.
Use cnn_model.trainable_variables to access these parameters.'''
grads = tape.gradient(loss_value, cnn_model.trainable_variables)# TODO
optimizer.apply_gradients(zip(grads, cnn_model.trainable_variables))
###Output
_____no_output_____
###Markdown
1.5 ConclusionIn this part of the lab, you had the chance to play with different MNIST classifiers with different architectures (fully-connected layers only, CNN), and experiment with how different hyperparameters affect accuracy (learning rate, etc.). The next part of the lab explores another application of CNNs, facial detection, and some drawbacks of AI systems in real world applications, like issues of bias.
###Code
###Output
_____no_output_____
###Markdown
Visit MIT Deep Learning Run in Google Colab View Source on GitHub Copyright Information
###Code
# Copyright 2021 MIT 6.S191 Introduction to Deep Learning. All Rights Reserved.
#
# Licensed under the MIT License. You may not use this file except in compliance
# with the License. Use and/or modification of this code outside of 6.S191 must
# reference:
#
# © MIT 6.S191: Introduction to Deep Learning
# http://introtodeeplearning.com
#
###Output
_____no_output_____
###Markdown
Laboratory 2: Computer Vision Part 1: MNIST Digit ClassificationIn the first portion of this lab, we will build and train a convolutional neural network (CNN) for classification of handwritten digits from the famous [MNIST](http://yann.lecun.com/exdb/mnist/) dataset. The MNIST dataset consists of 60,000 training images and 10,000 test images. Our classes are the digits 0-9.First, let's download the course repository, install dependencies, and import the relevant packages we'll need for this lab.
###Code
# Import Tensorflow 2.0
%tensorflow_version 2.x
import tensorflow as tf
!pip install mitdeeplearning
import mitdeeplearning as mdl
import matplotlib.pyplot as plt
import numpy as np
import random
from tqdm import tqdm
# Check that we are using a GPU, if not switch runtimes
# using Runtime > Change Runtime Type > GPU
assert len(tf.config.list_physical_devices('GPU')) > 0
###Output
Collecting mitdeeplearning
  Downloading mitdeeplearning-0.2.0.tar.gz (2.1 MB)
     |████████████████████████████████| 2.1 MB 11.4 MB/s
Requirement already satisfied: numpy in /usr/local/lib/python3.7/dist-packages (from mitdeeplearning) (1.19.5)
Requirement already satisfied: regex in /usr/local/lib/python3.7/dist-packages (from mitdeeplearning) (2019.12.20)
Requirement already satisfied: tqdm in /usr/local/lib/python3.7/dist-packages (from mitdeeplearning) (4.62.3)
Requirement already satisfied: gym in /usr/local/lib/python3.7/dist-packages (from mitdeeplearning) (0.17.3)
Requirement already satisfied: pyglet<=1.5.0,>=1.4.0 in /usr/local/lib/python3.7/dist-packages (from gym->mitdeeplearning) (1.5.0)
Requirement already satisfied: scipy in /usr/local/lib/python3.7/dist-packages (from gym->mitdeeplearning) (1.4.1)
Requirement already satisfied: cloudpickle<1.7.0,>=1.2.0 in /usr/local/lib/python3.7/dist-packages (from gym->mitdeeplearning) (1.3.0)
Requirement already satisfied: future in /usr/local/lib/python3.7/dist-packages (from pyglet<=1.5.0,>=1.4.0->gym->mitdeeplearning) (0.16.0)
Building wheels for collected packages: mitdeeplearning
  Building wheel for mitdeeplearning (setup.py) ... done
Created wheel for mitdeeplearning: filename=mitdeeplearning-0.2.0-py3-none-any.whl size=2115442 sha256=489455447acfd4b27d731fad01c8cf05a0616631df5f0b36e00d7227a3b4e191
Stored in directory: /root/.cache/pip/wheels/9a/b9/4f/99b7c8c5c75355550b83e1fcfc02956fb40c35eb01e2262877
Successfully built mitdeeplearning
Installing collected packages: mitdeeplearning
Successfully installed mitdeeplearning-0.2.0
###Markdown
1.1 MNIST dataset Let's download and load the dataset and display a few random samples from it:
###Code
mnist = tf.keras.datasets.mnist
(train_images, train_labels), (test_images, test_labels) = mnist.load_data()
train_images = (np.expand_dims(train_images, axis=-1)/255.).astype(np.float32)
train_labels = (train_labels).astype(np.int64)
test_images = (np.expand_dims(test_images, axis=-1)/255.).astype(np.float32)
test_labels = (test_labels).astype(np.int64)
###Output
Downloading data from https://storage.googleapis.com/tensorflow/tf-keras-datasets/mnist.npz
11493376/11490434 [==============================] - 0s 0us/step
11501568/11490434 [==============================] - 0s 0us/step
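###Markdown
As a quick shape check (an illustrative aside, not part of the original lab): the preprocessing above uses `np.expand_dims(..., axis=-1)` to append a channel axis, which the convolutional layers later expect.
###Code
# A small stand-in array for the (60000, 28, 28) raw MNIST tensor:
a = np.zeros((2, 28, 28))
print(np.expand_dims(a, axis=-1).shape)  # (2, 28, 28, 1) -- trailing channel axis added
###Output
_____no_output_____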
###Markdown
Our training set is made up of 28x28 grayscale images of handwritten digits. Let's visualize what some of these images and their corresponding training labels look like.
###Code
plt.figure(figsize=(10,10))
random_inds = np.random.choice(60000,36)
for i in range(36):
plt.subplot(6,6,i+1)
plt.xticks([])
plt.yticks([])
plt.grid(False)
image_ind = random_inds[i]
plt.imshow(np.squeeze(train_images[image_ind]), cmap=plt.cm.binary)
plt.xlabel(train_labels[image_ind])
###Output
_____no_output_____
###Markdown
1.2 Neural Network for Handwritten Digit ClassificationWe'll first build a simple neural network consisting of two fully connected layers and apply this to the digit classification task. Our network will ultimately output a probability distribution over the 10 digit classes (0-9). This first architecture we will be building is depicted below:![alt_text](https://raw.githubusercontent.com/aamini/introtodeeplearning/master/lab2/img/mnist_2layers_arch.png "CNN Architecture for MNIST Classification") Fully connected neural network architectureTo define the architecture of this first fully connected neural network, we'll once again use the Keras API and define the model using the [`Sequential`](https://www.tensorflow.org/api_docs/python/tf/keras/models/Sequential) class. Note how we first use a [`Flatten`](https://www.tensorflow.org/api_docs/python/tf/keras/layers/Flatten) layer, which flattens the input so that it can be fed into the model. In this next block, you'll define the fully connected layers of this simple network.
###Code
def build_fc_model():
fc_model = tf.keras.Sequential([
# First define a Flatten layer
tf.keras.layers.Flatten(),
# '''TODO: Define the activation function for the first fully connected (Dense) layer.'''
tf.keras.layers.Dense(128, activation= 'relu'),
# '''TODO: Define the second Dense layer to output the classification probabilities'''
tf.keras.layers.Dense(10, activation= 'softmax')
])
return fc_model
model = build_fc_model()
###Output
_____no_output_____
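###Markdown
As a quick sanity check (an illustrative aside) of the parameter arithmetic for the model we just built: flattening a 28x28 image yields 784 inputs, so the two `Dense` layers contribute 784\*128+128 and 128\*10+10 parameters respectively.
###Code
# Parameter counts implied by the architecture above:
print(28 * 28)          # 784 inputs after Flatten
print(784 * 128 + 128)  # 100480 parameters in the first Dense layer
print(128 * 10 + 10)    # 1290 parameters in the output layer
###Output
_____no_output_____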
###Markdown
As we progress through this next portion, you may find that you'll want to make changes to the architecture defined above. **Note that in order to update the model later on, you'll need to re-run the above cell to re-initialize the model.** Let's take a step back and think about the network we've just created. The first layer in this network, `tf.keras.layers.Flatten`, transforms the format of the images from a 2d-array (28 x 28 pixels), to a 1d-array of 28 * 28 = 784 pixels. You can think of this layer as unstacking rows of pixels in the image and lining them up. There are no learned parameters in this layer; it only reformats the data.After the pixels are flattened, the network consists of a sequence of two `tf.keras.layers.Dense` layers. These are fully-connected neural layers. The first `Dense` layer has 128 nodes (or neurons). The second (and last) layer (which you've defined!) should return an array of probability scores that sum to 1. Each node contains a score that indicates the probability that the current image belongs to one of the handwritten digit classes.That defines our fully connected model! Compile the modelBefore training the model, we need to define a few more settings. These are added during the model's [`compile`](https://www.tensorflow.org/api_docs/python/tf/keras/models/Sequential#compile) step:* *Loss function* — This defines how we measure how accurate the model is during training. As was covered in lecture, during training we want to minimize this function, which will "steer" the model in the right direction.* *Optimizer* — This defines how the model is updated based on the data it sees and its loss function.* *Metrics* — Here we can define metrics used to monitor the training and testing steps. In this example, we'll look at the *accuracy*, the fraction of the images that are correctly classified.We'll start out by using a stochastic gradient descent (SGD) optimizer initialized with a learning rate of 0.1. Since we are performing a categorical classification task, we'll want to use the [cross entropy loss](https://www.tensorflow.org/api_docs/python/tf/keras/metrics/sparse_categorical_crossentropy).You'll want to experiment with both the choice of optimizer and learning rate and evaluate how these affect the accuracy of the trained model.
###Code
'''TODO: Experiment with different optimizers and learning rates. How do these affect
the accuracy of the trained model? Which optimizers and/or learning rates yield
the best performance?'''
model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=1e-1),
loss='sparse_categorical_crossentropy',
metrics=['accuracy'])
###Output
_____no_output_____
###Markdown
Train the modelWe're now ready to train our model, which will involve feeding the training data (`train_images` and `train_labels`) into the model, and then asking it to learn the associations between images and labels. We'll also need to define the batch size and the number of epochs, or iterations over the MNIST dataset, to use during training. In Lab 1, we saw how we can use `GradientTape` to optimize losses and train models with stochastic gradient descent. After defining the model settings in the `compile` step, we can also accomplish training by calling the [`fit`](https://www.tensorflow.org/api_docs/python/tf/keras/models/Sequential#fit) method on an instance of the `Model` class. We will use this to train our fully connected model.
###Code
# Define the batch size and the number of epochs to use during training
BATCH_SIZE = 64
EPOCHS = 5
model.fit(train_images, train_labels, batch_size=BATCH_SIZE, epochs=EPOCHS)
###Output
Epoch 1/5
938/938 [==============================] - 5s 3ms/step - loss: 0.3699 - accuracy: 0.8962
Epoch 2/5
938/938 [==============================] - 3s 3ms/step - loss: 0.1967 - accuracy: 0.9447
Epoch 3/5
938/938 [==============================] - 3s 3ms/step - loss: 0.1474 - accuracy: 0.9577
Epoch 4/5
938/938 [==============================] - 3s 3ms/step - loss: 0.1186 - accuracy: 0.9662
Epoch 5/5
938/938 [==============================] - 3s 3ms/step - loss: 0.1002 - accuracy: 0.9713
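###Markdown
A quick note (an illustrative aside) on where the 938 steps per epoch above come from: 60,000 training images divided into batches of 64.
###Code
import math
# Steps per epoch = ceil(num_examples / batch_size):
print(math.ceil(60000 / 64))  # 938, matching the progress bars above
###Output
_____no_output_____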
###Markdown
As the model trains, the loss and accuracy metrics are displayed. With five epochs and a learning rate of 0.1, this fully connected model should achieve an accuracy of approximately 0.97 (or 97%) on the training data. Evaluate accuracy on the test datasetNow that we've trained the model, we can ask it to make predictions about a test set that it hasn't seen before. In this example, the `test_images` array comprises our test dataset. To evaluate accuracy, we can check to see if the model's predictions match the labels from the `test_labels` array. Use the [`evaluate`](https://www.tensorflow.org/api_docs/python/tf/keras/models/Sequential#evaluate) method to evaluate the model on the test dataset!
###Code
'''TODO: Use the evaluate method to test the model!'''
test_loss, test_acc = model.evaluate(test_images,test_labels,batch_size=BATCH_SIZE)
print('Test accuracy:', test_acc)
###Output
157/157 [==============================] - 1s 3ms/step - loss: 0.1026 - accuracy: 0.9695
Test accuracy: 0.9695000052452087
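###Markdown
Before moving on, here is a minimal sketch (an illustrative aside; it reuses `model` and the data arrays defined above) that quantifies the gap discussed next:
###Code
# Evaluate on both splits; the difference is the generalization gap.
train_loss, train_acc = model.evaluate(train_images, train_labels, verbose=0)
test_loss, test_acc = model.evaluate(test_images, test_labels, verbose=0)
print(f"train: {train_acc:.4f}  test: {test_acc:.4f}  gap: {train_acc - test_acc:.4f}")
###Output
_____no_output_____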
###Markdown
You may observe that the accuracy on the test dataset is a little lower than the accuracy on the training dataset. This gap between training accuracy and test accuracy is an example of *overfitting*, when a machine learning model performs worse on new data than on its training data. What is the highest accuracy you can achieve with this first fully connected model? Since the handwritten digit classification task is pretty straightforward, you may be wondering how we can do better...![Deeper...](https://i.kym-cdn.com/photos/images/newsfeed/000/534/153/f87.jpg) 1.3 Convolutional Neural Network (CNN) for handwritten digit classification As we saw in lecture, convolutional neural networks (CNNs) are particularly well-suited for a variety of tasks in computer vision, and have achieved near-perfect accuracies on the MNIST dataset. We will now build a CNN composed of two convolutional layers and pooling layers, followed by two fully connected layers, and ultimately output a probability distribution over the 10 digit classes (0-9). The CNN we will be building is depicted below:![alt_text](https://raw.githubusercontent.com/aamini/introtodeeplearning/master/lab2/img/convnet_fig.png "CNN Architecture for MNIST Classification") Define the CNN modelWe'll use the same training and test datasets as before, and proceed similarly as our fully connected network to define and train our new CNN model. To do this we will explore two layers we have not encountered before: you can use [`keras.layers.Conv2D` ](https://www.tensorflow.org/api_docs/python/tf/keras/layers/Conv2D) to define convolutional layers and [`keras.layers.MaxPool2D`](https://www.tensorflow.org/api_docs/python/tf/keras/layers/MaxPool2D) to define the pooling layers. Use the parameters shown in the network architecture above to define these layers and build the CNN model.
###Code
def build_cnn_model():
cnn_model = tf.keras.Sequential([
# TODO: Define the first convolutional layer
tf.keras.layers.Conv2D(filters=24,kernel_size=(3,3),activation='relu'),
# TODO: Define the first max pooling layer
tf.keras.layers.MaxPool2D(),
# TODO: Define the second convolutional layer
tf.keras.layers.Conv2D(filters=36,kernel_size=(3,3),activation='relu'),
# TODO: Define the second max pooling layer
tf.keras.layers.MaxPool2D(),
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(128, activation=tf.nn.relu),
# TODO: Define the last Dense layer to output the classification
# probabilities. Pay attention to the activation needed a probability
# output
tf.keras.layers.Dense(10, activation='softmax')
])
return cnn_model
cnn_model = build_cnn_model()
# Initialize the model by passing some data through
cnn_model.predict(train_images[[0]])
# Print the summary of the layers in the model.
print(cnn_model.summary())
###Output
Model: "sequential_2"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
conv2d_2 (Conv2D) (None, 26, 26, 24) 240
max_pooling2d_2 (MaxPooling (None, 13, 13, 24) 0
2D)
conv2d_3 (Conv2D) (None, 11, 11, 36) 7812
max_pooling2d_3 (MaxPooling (None, 5, 5, 36) 0
2D)
flatten_2 (Flatten) (None, 900) 0
dense_4 (Dense) (None, 128) 115328
dense_5 (Dense) (None, 10) 1290
=================================================================
Total params: 124,670
Trainable params: 124,670
Non-trainable params: 0
_________________________________________________________________
None
###Markdown
Train and test the CNN modelNow, as before, we can define the loss function, optimizer, and metrics through the `compile` method. Compile the CNN model with an optimizer and learning rate of choice:
###Code
'''TODO: Define the compile operation with your optimizer and learning rate of choice'''
cnn_model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=1e-1), loss='sparse_categorical_crossentropy', metrics=['accuracy']) # TODO
###Output
_____no_output_____
###Markdown
As was the case with the fully connected model, we can train our CNN using the `fit` method via the Keras API.
###Code
'''TODO: Use model.fit to train the CNN model, with the same batch_size and number of epochs previously used.'''
cnn_model.fit(train_images, train_labels, batch_size=BATCH_SIZE, epochs=EPOCHS)
###Output
Epoch 1/5
938/938 [==============================] - 7s 6ms/step - loss: 0.2679 - accuracy: 0.9137
Epoch 2/5
938/938 [==============================] - 6s 6ms/step - loss: 0.0658 - accuracy: 0.9793
Epoch 3/5
938/938 [==============================] - 6s 6ms/step - loss: 0.0468 - accuracy: 0.9855
Epoch 4/5
938/938 [==============================] - 6s 6ms/step - loss: 0.0367 - accuracy: 0.9886
Epoch 5/5
938/938 [==============================] - 6s 6ms/step - loss: 0.0294 - accuracy: 0.9907
###Markdown
Great! Now that we've trained the model, let's evaluate it on the test dataset using the [`evaluate`](https://www.tensorflow.org/api_docs/python/tf/keras/models/Sequential#evaluate) method:
###Code
'''TODO: Use the evaluate method to test the model!'''
test_loss, test_acc = cnn_model.evaluate(test_images,test_labels,batch_size=BATCH_SIZE)
print('Test accuracy:', test_acc)
###Output
157/157 [==============================] - 1s 4ms/step - loss: 0.0332 - accuracy: 0.9888
Test accuracy: 0.9887999892234802
###Markdown
What is the highest accuracy you're able to achieve using the CNN model, and how does the accuracy of the CNN model compare to the accuracy of the simple fully connected network? What optimizers and learning rates seem to be optimal for training the CNN model? Make predictions with the CNN modelWith the model trained, we can use it to make predictions about some images. The [`predict`](https://www.tensorflow.org/api_docs/python/tf/keras/models/Sequential#predict) function call generates the output predictions given a set of input samples.
###Code
predictions = cnn_model.predict(test_images)
###Output
_____no_output_____
###Markdown
With this function call, the model has predicted the label for each image in the testing set. Let's take a look at the prediction for the first image in the test dataset:
###Code
predictions[0]
###Output
_____no_output_____
###Markdown
As you can see, a prediction is an array of 10 numbers. Recall that the output of our model is a probability distribution over the 10 digit classes. Thus, these numbers describe the model's "confidence" that the image corresponds to each of the 10 different digits. Let's look at the digit that has the highest confidence for the first image in the test dataset:
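Note the difference between Python's built-in `max` and `np.argmax` here (a quick illustrative sketch): `max` returns the largest probability itself, while `np.argmax` returns its index, i.e., the predicted digit class.
###Code
# Sketch on a toy probability vector:
p = np.array([0.01, 0.02, 0.90, 0.07])
print(max(p))        # 0.9  -- the confidence value
print(np.argmax(p))  # 2    -- the index (predicted class)
###Output
_____no_output_____
###Markdown
Now identify that digit for the model's first test prediction: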
###Code
'''TODO: identify the digit with the highest confidence prediction for the first
image in the test dataset. '''
prediction = np.argmax(predictions[0]) # TODO
print(prediction)
###Output
7
###Markdown
So, the model is most confident that this image is a "???". We can check the test label (remember, this is the true identity of the digit) to see if this prediction is correct:
###Code
print("Label of this digit is:", test_labels[0])
plt.imshow(test_images[0,:,:,0], cmap=plt.cm.binary)
###Output
Label of this digit is: 7
###Markdown
It is! Let's visualize the classification results on the MNIST dataset. We will plot images from the test dataset along with their predicted label, as well as a histogram that provides the prediction probabilities for each of the digits:
###Code
#@title Change the slider to look at the model's predictions! { run: "auto" }
image_index = 94 #@param {type:"slider", min:0, max:100, step:1}
plt.subplot(1,2,1)
mdl.lab2.plot_image_prediction(image_index, predictions, test_labels, test_images)
plt.subplot(1,2,2)
mdl.lab2.plot_value_prediction(image_index, predictions, test_labels)
###Output
_____no_output_____
###Markdown
We can also plot several images along with their predictions, where correct prediction labels are blue and incorrect prediction labels are red. The number gives the percent confidence (out of 100) for the predicted label. Note the model can be very confident in an incorrect prediction!
###Code
# Plots the first X test images, their predicted label, and the true label
# Color correct predictions in blue, incorrect predictions in red
num_rows = 5
num_cols = 4
num_images = num_rows*num_cols
plt.figure(figsize=(2*2*num_cols, 2*num_rows))
for i in range(num_images):
plt.subplot(num_rows, 2*num_cols, 2*i+1)
mdl.lab2.plot_image_prediction(i, predictions, test_labels, test_images)
plt.subplot(num_rows, 2*num_cols, 2*i+2)
mdl.lab2.plot_value_prediction(i, predictions, test_labels)
###Output
_____no_output_____
###Markdown
1.4 Training the model 2.0Earlier in the lab, we used the [`fit`](https://www.tensorflow.org/api_docs/python/tf/keras/models/Sequential#fit) function call to train the model. This function is quite high-level and intuitive, which is really useful for simpler models. As you may be able to tell, this function abstracts away many details in the training call, and we have less control over the training process; that finer control could be useful in other contexts. As an alternative to this, we can use the [`tf.GradientTape`](https://www.tensorflow.org/api_docs/python/tf/GradientTape) class to record differentiation operations during training, and then call the [`tf.GradientTape.gradient`](https://www.tensorflow.org/api_docs/python/tf/GradientTape#gradient) function to actually compute the gradients. You may recall seeing this in Lab 1 Part 1, but let's take another look at this here.We'll use this framework to train our `cnn_model` using stochastic gradient descent.
###Code
# Rebuild the CNN model
cnn_model = build_cnn_model()
batch_size = 12
loss_history = mdl.util.LossHistory(smoothing_factor=0.95) # to record the evolution of the loss
plotter = mdl.util.PeriodicPlotter(sec=2, xlabel='Iterations', ylabel='Loss', scale='semilogy')
optimizer = tf.keras.optimizers.SGD(learning_rate=1e-2) # define our optimizer
if hasattr(tqdm, '_instances'): tqdm._instances.clear() # clear if it exists
for idx in tqdm(range(0, train_images.shape[0], batch_size)):
# First grab a batch of training data and convert the input images to tensors
(images, labels) = (train_images[idx:idx+batch_size], train_labels[idx:idx+batch_size])
images = tf.convert_to_tensor(images, dtype=tf.float32)
# GradientTape to record differentiation operations
with tf.GradientTape() as tape:
#'''TODO: feed the images into the model and obtain the predictions'''
logits = cnn_model(images)
#'''TODO: compute the categorical cross entropy loss
loss_value = tf.keras.backend.sparse_categorical_crossentropy(labels, logits) # TODO
loss_history.append(loss_value.numpy().mean()) # append the loss to the loss_history record
plotter.plot(loss_history.get())
# Backpropagation
'''TODO: Use the tape to compute the gradient against all parameters in the CNN model.
Use cnn_model.trainable_variables to access these parameters.'''
grads = tape.gradient(loss_value, cnn_model.trainable_variables)
optimizer.apply_gradients(zip(grads, cnn_model.trainable_variables))
###Output
_____no_output_____
###Markdown
Visit MIT Deep Learning Run in Google Colab View Source on GitHub Copyright Information
###Code
# Copyright 2020 MIT 6.S191 Introduction to Deep Learning. All Rights Reserved.
#
# Licensed under the MIT License. You may not use this file except in compliance
# with the License. Use and/or modification of this code outside of 6.S191 must
# reference:
#
# © MIT 6.S191: Introduction to Deep Learning
# http://introtodeeplearning.com
#
###Output
_____no_output_____
###Markdown
Laboratory 2: Computer Vision Part 1: MNIST Digit ClassificationIn the first portion of this lab, we will build and train a convolutional neural network (CNN) for classification of handwritten digits from the famous [MNIST](http://yann.lecun.com/exdb/mnist/) dataset. The MNIST dataset consists of 60,000 training images and 10,000 test images. Our classes are the digits 0-9.First, let's download the course repository, install dependencies, and import the relevant packages we'll need for this lab.
###Code
# Import Tensorflow 2.0
%tensorflow_version 2.x
import tensorflow as tf
!pip install mitdeeplearning
import mitdeeplearning as mdl
import matplotlib.pyplot as plt
import numpy as np
import random
from tqdm import tqdm
# Check that we are using a GPU, if not switch runtimes
# using Runtime > Change Runtime Type > GPU
assert len(tf.config.list_physical_devices('GPU')) > 0
###Output
Collecting mitdeeplearning
  Downloading https://files.pythonhosted.org/packages/8b/3b/b9174b68dc10832356d02a2d83a64b43a24f1762c172754407d22fc8f960/mitdeeplearning-0.1.2.tar.gz (2.1MB)
     |████████████████████████████████| 2.1MB 14.1MB/s
Requirement already satisfied: numpy in /usr/local/lib/python3.6/dist-packages (from mitdeeplearning) (1.18.4)
Requirement already satisfied: regex in /usr/local/lib/python3.6/dist-packages (from mitdeeplearning) (2019.12.20)
Requirement already satisfied: tqdm in /usr/local/lib/python3.6/dist-packages (from mitdeeplearning) (4.41.1)
Requirement already satisfied: gym in /usr/local/lib/python3.6/dist-packages (from mitdeeplearning) (0.17.2)
Requirement already satisfied: pyglet<=1.5.0,>=1.4.0 in /usr/local/lib/python3.6/dist-packages (from gym->mitdeeplearning) (1.5.0)
Requirement already satisfied: cloudpickle<1.4.0,>=1.2.0 in /usr/local/lib/python3.6/dist-packages (from gym->mitdeeplearning) (1.3.0)
Requirement already satisfied: scipy in /usr/local/lib/python3.6/dist-packages (from gym->mitdeeplearning) (1.4.1)
Requirement already satisfied: future in /usr/local/lib/python3.6/dist-packages (from pyglet<=1.5.0,>=1.4.0->gym->mitdeeplearning) (0.16.0)
Building wheels for collected packages: mitdeeplearning
  Building wheel for mitdeeplearning (setup.py) ... done
Created wheel for mitdeeplearning: filename=mitdeeplearning-0.1.2-cp36-none-any.whl size=2114586 sha256=cdeee0bf33b14dd1aa8a89ca1f213858ef2b99ccafd34912a379f8de42cd6a46
Stored in directory: /root/.cache/pip/wheels/27/e1/73/5f01c787621d8a3c857f59876c79e304b9b64db9ff5bd61b74
Successfully built mitdeeplearning
Installing collected packages: mitdeeplearning
Successfully installed mitdeeplearning-0.1.2
###Markdown
1.1 MNIST dataset Let's download and load the dataset and display a few random samples from it:
###Code
mnist = tf.keras.datasets.mnist
(train_images, train_labels), (test_images, test_labels) = mnist.load_data()
train_images = (np.expand_dims(train_images, axis=-1)/255.).astype(np.float32)
train_labels = (train_labels).astype(np.int64)
test_images = (np.expand_dims(test_images, axis=-1)/255.).astype(np.float32)
test_labels = (test_labels).astype(np.int64)
###Output
Downloading data from https://storage.googleapis.com/tensorflow/tf-keras-datasets/mnist.npz
11493376/11490434 [==============================] - 0s 0us/step
###Markdown
Our training set is made up of 28x28 grayscale images of handwritten digits. Let's visualize what some of these images and their corresponding training labels look like.
###Code
plt.figure(figsize=(10,10))
random_inds = np.random.choice(60000,36)
for i in range(36):
plt.subplot(6,6,i+1)
plt.xticks([])
plt.yticks([])
plt.grid(False)
image_ind = random_inds[i]
plt.imshow(np.squeeze(train_images[image_ind]), cmap=plt.cm.binary)
plt.xlabel(train_labels[image_ind])
###Output
_____no_output_____
###Markdown
1.2 Neural Network for Handwritten Digit ClassificationWe'll first build a simple neural network consisting of two fully connected layers and apply this to the digit classification task. Our network will ultimately output a probability distribution over the 10 digit classes (0-9). This first architecture we will be building is depicted below:![alt_text](https://raw.githubusercontent.com/aamini/introtodeeplearning/master/lab2/img/mnist_2layers_arch.png "CNN Architecture for MNIST Classification") Fully connected neural network architectureTo define the architecture of this first fully connected neural network, we'll once again use the Keras API and define the model using the [`Sequential`](https://www.tensorflow.org/api_docs/python/tf/keras/models/Sequential) class. Note how we first use a [`Flatten`](https://www.tensorflow.org/api_docs/python/tf/keras/layers/Flatten) layer, which flattens the input so that it can be fed into the model. In this next block, you'll define the fully connected layers of this simple network.
###Code
def build_fc_model():
fc_model = tf.keras.Sequential([
# First define a Flatten layer
tf.keras.layers.Flatten(),
# '''TODO: Define the activation function for the first fully connected (Dense) layer.'''
tf.keras.layers.Dense(128, activation= "relu"),
# '''TODO: Define the second Dense layer to output the classification probabilities'''
      tf.keras.layers.Dense(10, activation="softmax")  # softmax yields probabilities that sum to 1
])
return fc_model
model = build_fc_model()
###Output
_____no_output_____
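###Markdown
A quick illustrative check (added for clarity, not part of the original lab) of why the output layer uses `softmax`: it maps arbitrary scores to probabilities that sum to 1.
###Code
scores = tf.constant([[2.0, 1.0, 0.1]])
probs = tf.nn.softmax(scores)
print(probs.numpy(), probs.numpy().sum())  # the three probabilities sum to 1.0
###Output
_____no_output_____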
###Markdown
As we progress through this next portion, you may find that you'll want to make changes to the architecture defined above. **Note that in order to update the model later on, you'll need to re-run the above cell to re-initialize the model.** Let's take a step back and think about the network we've just created. The first layer in this network, `tf.keras.layers.Flatten`, transforms the format of the images from a 2d-array (28 x 28 pixels), to a 1d-array of 28 * 28 = 784 pixels. You can think of this layer as unstacking rows of pixels in the image and lining them up. There are no learned parameters in this layer; it only reformats the data.After the pixels are flattened, the network consists of a sequence of two `tf.keras.layers.Dense` layers. These are fully-connected neural layers. The first `Dense` layer has 128 nodes (or neurons). The second (and last) layer (which you've defined!) should return an array of probability scores that sum to 1. Each node contains a score that indicates the probability that the current image belongs to one of the handwritten digit classes.That defines our fully connected model! Compile the modelBefore training the model, we need to define a few more settings. These are added during the model's [`compile`](https://www.tensorflow.org/api_docs/python/tf/keras/models/Sequential#compile) step:* *Loss function* — This defines how we measure how accurate the model is during training. As was covered in lecture, during training we want to minimize this function, which will "steer" the model in the right direction.* *Optimizer* — This defines how the model is updated based on the data it sees and its loss function.* *Metrics* — Here we can define metrics used to monitor the training and testing steps. In this example, we'll look at the *accuracy*, the fraction of the images that are correctly classified.We'll start out by using a stochastic gradient descent (SGD) optimizer initialized with a learning rate of 0.1. Since we are performing a categorical classification task, we'll want to use the [cross entropy loss](https://www.tensorflow.org/api_docs/python/tf/keras/metrics/sparse_categorical_crossentropy).You'll want to experiment with both the choice of optimizer and learning rate and evaluate how these affect the accuracy of the trained model.
###Code
'''TODO: Experiment with different optimizers and learning rates. How do these affect
the accuracy of the trained model? Which optimizers and/or learning rates yield
the best performance?'''
model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=1e-1),
loss='sparse_categorical_crossentropy',
metrics=['accuracy'])
###Output
_____no_output_____
###Markdown
Train the modelWe're now ready to train our model, which will involve feeding the training data (`train_images` and `train_labels`) into the model, and then asking it to learn the associations between images and labels. We'll also need to define the batch size and the number of epochs, or iterations over the MNIST dataset, to use during training. In Lab 1, we saw how we can use `GradientTape` to optimize losses and train models with stochastic gradient descent. After defining the model settings in the `compile` step, we can also accomplish training by calling the [`fit`](https://www.tensorflow.org/api_docs/python/tf/keras/models/Sequential#fit) method on an instance of the `Model` class. We will use this to train our fully connected model.
###Code
# Define the batch size and the number of epochs to use during training
BATCH_SIZE = 64
EPOCHS = 5
model.fit(train_images, train_labels, batch_size=BATCH_SIZE, epochs=EPOCHS)
###Output
Epoch 1/5
938/938 [==============================] - 2s 2ms/step - loss: 0.4467 - accuracy: 0.8766
Epoch 2/5
938/938 [==============================] - 2s 2ms/step - loss: 0.2191 - accuracy: 0.9373
Epoch 3/5
938/938 [==============================] - 2s 2ms/step - loss: 0.1652 - accuracy: 0.9527
Epoch 4/5
938/938 [==============================] - 2s 2ms/step - loss: 0.1337 - accuracy: 0.9610
Epoch 5/5
938/938 [==============================] - 2s 2ms/step - loss: 0.1132 - accuracy: 0.9681
###Markdown
As the model trains, the loss and accuracy metrics are displayed. With five epochs and a learning rate of 0.1, this fully connected model should achieve an accuracy of approximately 0.97 (or 97%) on the training data. Evaluate accuracy on the test datasetNow that we've trained the model, we can ask it to make predictions about a test set that it hasn't seen before. In this example, the `test_images` array comprises our test dataset. To evaluate accuracy, we can check to see if the model's predictions match the labels from the `test_labels` array. Use the [`evaluate`](https://www.tensorflow.org/api_docs/python/tf/keras/models/Sequential#evaluate) method to evaluate the model on the test dataset!
###Code
'''TODO: Use the evaluate method to test the model!'''
test_loss, test_acc = model.evaluate(
x=test_images, y=test_labels
)
print('Test accuracy:', test_acc)
###Output
313/313 [==============================] - 1s 2ms/step - loss: 0.1158 - accuracy: 0.9661
Test accuracy: 0.9660999774932861
###Markdown
You may observe that the accuracy on the test dataset is a little lower than the accuracy on the training dataset. This gap between training accuracy and test accuracy is an example of *overfitting*, when a machine learning model performs worse on new data than on its training data. What is the highest accuracy you can achieve with this first fully connected model? Since the handwritten digit classification task is pretty straightforward, you may be wondering how we can do better...![Deeper...](https://i.kym-cdn.com/photos/images/newsfeed/000/534/153/f87.jpg) 1.3 Convolutional Neural Network (CNN) for handwritten digit classification As we saw in lecture, convolutional neural networks (CNNs) are particularly well-suited for a variety of tasks in computer vision, and have achieved near-perfect accuracies on the MNIST dataset. We will now build a CNN composed of two convolutional layers and pooling layers, followed by two fully connected layers, and ultimately output a probability distribution over the 10 digit classes (0-9). The CNN we will be building is depicted below:![alt_text](https://raw.githubusercontent.com/aamini/introtodeeplearning/master/lab2/img/convnet_fig.png "CNN Architecture for MNIST Classification") Define the CNN modelWe'll use the same training and test datasets as before, and proceed similarly as our fully connected network to define and train our new CNN model. To do this we will explore two layers we have not encountered before: you can use [`keras.layers.Conv2D` ](https://www.tensorflow.org/api_docs/python/tf/keras/layers/Conv2D) to define convolutional layers and [`keras.layers.MaxPool2D`](https://www.tensorflow.org/api_docs/python/tf/keras/layers/MaxPool2D) to define the pooling layers. Use the parameters shown in the network architecture above to define these layers and build the CNN model.
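As a quick sanity check (an illustrative aside) before building it, here is the parameter and shape arithmetic for a first convolutional layer with the diagram's 24 filters and a 3x3 kernel on a 28x28x1 input:
###Code
# Parameters: kernel_height * kernel_width * in_channels * filters + biases
print(3 * 3 * 1 * 24 + 24)  # 240
# A 'valid' 3x3 convolution maps 28x28 -> 26x26; 2x2 max pooling -> 13x13.
###Output
_____no_output_____
###Markdown
Now define the model: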
###Code
def build_cnn_model():
cnn_model = tf.keras.Sequential([
# TODO: Define the first convolutional layer
tf.keras.layers.Conv2D(2, 3),
# TODO: Define the first max pooling layer
tf.keras.layers.MaxPool2D(),
# TODO: Define the second convolutional layer
tf.keras.layers.Conv2D(2,3),
# TODO: Define the second max pooling layer
tf.keras.layers.MaxPool2D(),
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(128, activation=tf.nn.relu),
# TODO: Define the last Dense layer to output the classification
# probabilities. Pay attention to the activation needed a probability
# output
tf.keras.layers.Dense(10, activation=tf.nn.softmax)
])
return cnn_model
cnn_model = build_cnn_model()
# Initialize the model by passing some data through
cnn_model.predict(train_images[[0]])
# Print the summary of the layers in the model.
print(cnn_model.summary())
###Output
Model: "sequential_1"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
conv2d (Conv2D) multiple 20
_________________________________________________________________
max_pooling2d (MaxPooling2D) multiple 0
_________________________________________________________________
conv2d_1 (Conv2D) multiple 38
_________________________________________________________________
max_pooling2d_1 (MaxPooling2 multiple 0
_________________________________________________________________
flatten_1 (Flatten) multiple 0
_________________________________________________________________
dense_2 (Dense) multiple 6528
_________________________________________________________________
dense_3 (Dense) multiple 1290
=================================================================
Total params: 7,876
Trainable params: 7,876
Non-trainable params: 0
_________________________________________________________________
None
###Markdown
Train and test the CNN modelNow, as before, we can define the loss function, optimizer, and metrics through the `compile` method. Compile the CNN model with an optimizer and learning rate of choice:
###Code
'''TODO: Define the compile operation with your optimizer and learning rate of choice'''
cnn_model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3), loss="sparse_categorical_crossentropy", metrics=['accuracy']) # TODO
###Output
_____no_output_____
###Markdown
As was the case with the fully connected model, we can train our CNN using the `fit` method via the Keras API.
###Code
'''TODO: Use model.fit to train the CNN model, with the same batch_size and number of epochs previously used.'''
cnn_model.fit(train_images, train_labels, batch_size=BATCH_SIZE, epochs=EPOCHS)
###Output
1875/1875 [==============================] - 4s 2ms/step - loss: 0.1897 - accuracy: 0.9407
###Markdown
Great! Now that we've trained the model, let's evaluate it on the test dataset using the [`evaluate`](https://www.tensorflow.org/api_docs/python/tf/keras/models/Sequential#evaluate) method:
###Code
'''TODO: Use the evaluate method to test the model!'''
test_loss, test_acc = cnn_model.evaluate(test_images, test_labels)
print('Test accuracy:', test_acc)
###Output
313/313 [==============================] - 1s 2ms/step - loss: 0.1273 - accuracy: 0.9605
Test accuracy: 0.9605000019073486
###Markdown
What is the highest accuracy you're able to achieve using the CNN model, and how does the accuracy of the CNN model compare to the accuracy of the simple fully connected network? What optimizers and learning rates seem to be optimal for training the CNN model? Make predictions with the CNN modelWith the model trained, we can use it to make predictions about some images. The [`predict`](https://www.tensorflow.org/api_docs/python/tf/keras/models/Sequential#predict) function call generates the output predictions given a set of input samples.
###Code
predictions = cnn_model.predict(test_images)
###Output
_____no_output_____
###Markdown
With this function call, the model has predicted the label for each image in the testing set. Let's take a look at the prediction for the first image in the test dataset:
###Code
predictions[0]
###Output
_____no_output_____
###Markdown
As you can see, a prediction is an array of 10 numbers. Recall that the output of our model is a probability distribution over the 10 digit classes. Thus, these numbers describe the model's "confidence" that the image corresponds to each of the 10 different digits. Let's look at the digit that has the highest confidence for the first image in the test dataset:
###Code
'''TODO: identify the digit with the highest confidence prediction for the first
image in the test dataset. '''
prediction = np.argmax(predictions[0]) # TODO
print(prediction)
###Output
7
###Markdown
So, the model is most confident that this image is a "???". We can check the test label (remember, this is the true identity of the digit) to see if this prediction is correct:
###Code
print("Label of this digit is:", test_labels[0])
plt.imshow(test_images[0,:,:,0], cmap=plt.cm.binary)
###Output
Label of this digit is: 7
###Markdown
It is! Let's visualize the classification results on the MNIST dataset. We will plot images from the test dataset along with their predicted label, as well as a histogram that provides the prediction probabilities for each of the digits:
###Code
#@title Change the slider to look at the model's predictions! { run: "auto" }
image_index = 43 #@param {type:"slider", min:0, max:100, step:1}
plt.subplot(1,2,1)
mdl.lab2.plot_image_prediction(image_index, predictions, test_labels, test_images)
plt.subplot(1,2,2)
mdl.lab2.plot_value_prediction(image_index, predictions, test_labels)
###Output
_____no_output_____
###Markdown
We can also plot several images along with their predictions, where correct prediction labels are blue and incorrect prediction labels are red. The number gives the percent confidence (out of 100) for the predicted label. Note the model can be very confident in an incorrect prediction!
###Code
# Plots the first X test images, their predicted label, and the true label
# Color correct predictions in blue, incorrect predictions in red
num_rows = 5
num_cols = 4
num_images = num_rows*num_cols
plt.figure(figsize=(2*2*num_cols, 2*num_rows))
for i in range(num_images):
plt.subplot(num_rows, 2*num_cols, 2*i+1)
mdl.lab2.plot_image_prediction(i, predictions, test_labels, test_images)
plt.subplot(num_rows, 2*num_cols, 2*i+2)
mdl.lab2.plot_value_prediction(i, predictions, test_labels)
###Output
_____no_output_____
###Markdown
1.4 Training the model 2.0Earlier in the lab, we used the [`fit`](https://www.tensorflow.org/api_docs/python/tf/keras/models/Sequential#fit) function call to train the model. This function is quite high-level and intuitive, which is really useful for simpler models. As you may be able to tell, this function abstracts away many details in the training call, and we have less control over the training process; that finer control could be useful in other contexts. As an alternative to this, we can use the [`tf.GradientTape`](https://www.tensorflow.org/api_docs/python/tf/GradientTape) class to record differentiation operations during training, and then call the [`tf.GradientTape.gradient`](https://www.tensorflow.org/api_docs/python/tf/GradientTape#gradient) function to actually compute the gradients. You may recall seeing this in Lab 1 Part 1, but let's take another look at this here.We'll use this framework to train our `cnn_model` using stochastic gradient descent.
###Code
# Rebuild the CNN model
cnn_model = build_cnn_model()
batch_size = 12
loss_history = mdl.util.LossHistory(smoothing_factor=0.95) # to record the evolution of the loss
plotter = mdl.util.PeriodicPlotter(sec=2, xlabel='Iterations', ylabel='Loss', scale='semilogy')
optimizer = tf.keras.optimizers.SGD(learning_rate=1e-2) # define our optimizer
if hasattr(tqdm, '_instances'): tqdm._instances.clear() # clear if it exists
for idx in tqdm(range(0, train_images.shape[0], batch_size)):
# First grab a batch of training data and convert the input images to tensors
(images, labels) = (train_images[idx:idx+batch_size], train_labels[idx:idx+batch_size])
images = tf.convert_to_tensor(images, dtype=tf.float32)
# GradientTape to record differentiation operations
with tf.GradientTape() as tape:
#'''TODO: feed the images into the model and obtain the predictions'''
logits = cnn_model(images)
#'''TODO: compute the categorical cross entropy loss
loss_value = tf.keras.backend.sparse_categorical_crossentropy(labels, logits) # TODO
loss_history.append(loss_value.numpy().mean()) # append the loss to the loss_history record
plotter.plot(loss_history.get())
# Backpropagation
'''TODO: Use the tape to compute the gradient against all parameters in the CNN model.
Use cnn_model.trainable_variables to access these parameters.'''
grads = tape.gradient(loss_value, cnn_model.trainable_variables)# TODO
optimizer.apply_gradients(zip(grads, cnn_model.trainable_variables))
###Output
_____no_output_____
###Markdown
Visit MIT Deep Learning Run in Google Colab View Source on GitHub Copyright Information
###Code
# Copyright 2021 MIT 6.S191 Introduction to Deep Learning. All Rights Reserved.
#
# Licensed under the MIT License. You may not use this file except in compliance
# with the License. Use and/or modification of this code outside of 6.S191 must
# reference:
#
# © MIT 6.S191: Introduction to Deep Learning
# http://introtodeeplearning.com
#
###Output
_____no_output_____
###Markdown
Laboratory 2: Computer Vision Part 1: MNIST Digit ClassificationIn the first portion of this lab, we will build and train a convolutional neural network (CNN) for classification of handwritten digits from the famous [MNIST](http://yann.lecun.com/exdb/mnist/) dataset. The MNIST dataset consists of 60,000 training images and 10,000 test images. Our classes are the digits 0-9.First, let's download the course repository, install dependencies, and import the relevant packages we'll need for this lab.
###Code
# Import Tensorflow 2.0
%tensorflow_version 2.x
import tensorflow as tf
!pip install mitdeeplearning
import mitdeeplearning as mdl
import matplotlib.pyplot as plt
import numpy as np
import random
from tqdm import tqdm
# Check that we are using a GPU, if not switch runtimes
# using Runtime > Change Runtime Type > GPU
assert len(tf.config.list_physical_devices('GPU')) > 0
###Output
Collecting mitdeeplearning
  Downloading https://files.pythonhosted.org/packages/9d/ad/650eb53c0d9d1213536fe94bc150f89b564ff5ee784bd662272584bb091b/mitdeeplearning-0.2.0.tar.gz (2.1MB)
     |████████████████████████████████| 2.1MB 18.7MB/s
Requirement already satisfied: numpy in /usr/local/lib/python3.7/dist-packages (from mitdeeplearning) (1.19.5)
Requirement already satisfied: regex in /usr/local/lib/python3.7/dist-packages (from mitdeeplearning) (2019.12.20)
Requirement already satisfied: tqdm in /usr/local/lib/python3.7/dist-packages (from mitdeeplearning) (4.41.1)
Requirement already satisfied: gym in /usr/local/lib/python3.7/dist-packages (from mitdeeplearning) (0.17.3)
Requirement already satisfied: cloudpickle<1.7.0,>=1.2.0 in /usr/local/lib/python3.7/dist-packages (from gym->mitdeeplearning) (1.3.0)
Requirement already satisfied: pyglet<=1.5.0,>=1.4.0 in /usr/local/lib/python3.7/dist-packages (from gym->mitdeeplearning) (1.5.0)
Requirement already satisfied: scipy in /usr/local/lib/python3.7/dist-packages (from gym->mitdeeplearning) (1.4.1)
Requirement already satisfied: future in /usr/local/lib/python3.7/dist-packages (from pyglet<=1.5.0,>=1.4.0->gym->mitdeeplearning) (0.16.0)
Building wheels for collected packages: mitdeeplearning
  Building wheel for mitdeeplearning (setup.py) ... done
Created wheel for mitdeeplearning: filename=mitdeeplearning-0.2.0-cp37-none-any.whl size=2115442 sha256=7ae055a8a3f01cc518e8b08b09f1be1a8098e85f9e2ec9d2d9270383532a1b84
Stored in directory: /root/.cache/pip/wheels/af/dc/2a/5c3633135e7e4ef4fd31463cfa1942cb1bae7486ab94e7a2ad
Successfully built mitdeeplearning
Installing collected packages: mitdeeplearning
Successfully installed mitdeeplearning-0.2.0
###Markdown
1.1 MNIST dataset Let's download and load the dataset and display a few random samples from it:
###Code
mnist = tf.keras.datasets.mnist
#mnist.load_data() returns tuples of np arrays
(train_images, train_labels), (test_images, test_labels) = mnist.load_data()
train_images = (np.expand_dims(train_images, axis=-1)/255.).astype(np.float32)
train_labels = (train_labels).astype(np.int64)
test_images = (np.expand_dims(test_images, axis=-1)/255.).astype(np.float32)
test_labels = (test_labels).astype(np.int64)
###Output
Downloading data from https://storage.googleapis.com/tensorflow/tf-keras-datasets/mnist.npz
11493376/11490434 [==============================] - 0s 0us/step
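###Markdown
Before visualizing anything, it is worth confirming that the preprocessing behaved as intended: pixels should now lie in [0, 1], labels should be integers in 0-9, and each image should carry an explicit channel dimension. A quick sanity check:
###Code
print(train_images.shape, test_images.shape) # (60000, 28, 28, 1) (10000, 28, 28, 1)
print(train_images.min(), train_images.max()) # 0.0 and 1.0 after dividing by 255
print(train_labels[:10]) # integer class labels in the range 0-9
###Output
_____no_output_____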
###Markdown
Our training set is made up of 28x28 grayscale images of handwritten digits. Let's visualize what some of these images and their corresponding training labels look like.
###Code
plt.figure(figsize=(10,10))
random_inds = np.random.choice(60000,36)
for i in range(36):
plt.subplot(6,6,i+1)
plt.xticks([])
plt.yticks([])
plt.grid(False)
image_ind = random_inds[i]
plt.imshow(np.squeeze(train_images[image_ind]), cmap=plt.cm.binary)
plt.xlabel(train_labels[image_ind])
###Output
_____no_output_____
###Markdown
 1.2 Neural Network for Handwritten Digit ClassificationWe'll first build a simple neural network consisting of two fully connected layers and apply this to the digit classification task. Our network will ultimately output a probability distribution over the 10 digit classes (0-9). This first architecture we will be building is depicted below:![alt_text](https://raw.githubusercontent.com/aamini/introtodeeplearning/master/lab2/img/mnist_2layers_arch.png "CNN Architecture for MNIST Classification") Fully connected neural network architectureTo define the architecture of this first fully connected neural network, we'll once again use the Keras API and define the model using the [`Sequential`](https://www.tensorflow.org/api_docs/python/tf/keras/models/Sequential) class. Note how we first use a [`Flatten`](https://www.tensorflow.org/api_docs/python/tf/keras/layers/Flatten) layer, which flattens the input so that it can be fed into the model. In this next block, you'll define the fully connected layers of this simple network.
###Code
def build_fc_model():
fc_model = tf.keras.Sequential([
# First define a Flatten layer
tf.keras.layers.Flatten(),
# '''TODO: Define the activation function for the first fully connected (Dense) layer.'''
      tf.keras.layers.Dense(128, activation="relu"),
# '''TODO: Define the second Dense layer to output the classification probabilities'''
tf.keras.layers.Dense(10, activation="softmax")
      #softmax produces a probability distribution over the 10 classes
])
return fc_model
model = build_fc_model()
###Output
_____no_output_____
###Markdown
As we progress through this next portion, you may find that you'll want to make changes to the architecture defined above. **Note that in order to update the model later on, you'll need to re-run the above cell to re-initialize the model.** Let's take a step back and think about the network we've just created. The first layer in this network, `tf.keras.layers.Flatten`, transforms the format of the images from a 2d-array (28 x 28 pixels), to a 1d-array of 28 * 28 = 784 pixels. You can think of this layer as unstacking rows of pixels in the image and lining them up. There are no learned parameters in this layer; it only reformats the data.After the pixels are flattened, the network consists of a sequence of two `tf.keras.layers.Dense` layers. These are fully-connected neural layers. The first `Dense` layer has 128 nodes (or neurons). The second (and last) layer (which you've defined!) should return an array of probability scores that sum to 1. Each node contains a score that indicates the probability that the current image belongs to one of the handwritten digit classes.That defines our fully connected model! Compile the modelBefore training the model, we need to define a few more settings. These are added during the model's [`compile`](https://www.tensorflow.org/api_docs/python/tf/keras/models/Sequentialcompile) step:* *Loss function* — This defines how we measure how accurate the model is during training. As was covered in lecture, during training we want to minimize this function, which will "steer" the model in the right direction.* *Optimizer* — This defines how the model is updated based on the data it sees and its loss function.* *Metrics* — Here we can define metrics used to monitor the training and testing steps. In this example, we'll look at the *accuracy*, the fraction of the images that are correctly classified.We'll start out by using a stochastic gradient descent (SGD) optimizer initialized with a learning rate of 0.1. Since we are performing a categorical classification task, we'll want to use the [cross entropy loss](https://www.tensorflow.org/api_docs/python/tf/keras/metrics/sparse_categorical_crossentropy).You'll want to experiment with both the choice of optimizer and learning rate and evaluate how these affect the accuracy of the trained model.
###Code
'''TODO: Experiment with different optimizers and learning rates. How do these affect
the accuracy of the trained model? Which optimizers and/or learning rates yield
the best performance?'''
model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=1e-2),
loss='sparse_categorical_crossentropy',
metrics=['accuracy'])
print('done')
###Output
done
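###Markdown
One way to run the experiment this TODO asks for is to sweep a few optimizer and learning-rate combinations, rebuilding the model each time so every run starts from fresh weights. A rough sketch (one epoch per configuration just to compare trends; exact numbers will vary run to run):
###Code
configs = [('SGD lr=1e-1', tf.keras.optimizers.SGD(learning_rate=1e-1)),
           ('SGD lr=1e-2', tf.keras.optimizers.SGD(learning_rate=1e-2)),
           ('Adam lr=1e-3', tf.keras.optimizers.Adam(learning_rate=1e-3))]
for name, opt in configs:
    m = build_fc_model() # fresh weights so the comparison is fair
    m.compile(optimizer=opt, loss='sparse_categorical_crossentropy', metrics=['accuracy'])
    m.fit(train_images, train_labels, batch_size=64, epochs=1, verbose=0)
    print(name, m.evaluate(test_images, test_labels, verbose=0)[1])
###Output
_____no_output_____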
###Markdown
Train the modelWe're now ready to train our model, which will involve feeding the training data (`train_images` and `train_labels`) into the model, and then asking it to learn the associations between images and labels. We'll also need to define the batch size and the number of epochs, or iterations over the MNIST dataset, to use during training. In Lab 1, we saw how we can use `GradientTape` to optimize losses and train models with stochastic gradient descent. After defining the model settings in the `compile` step, we can also accomplish training by calling the [`fit`](https://www.tensorflow.org/api_docs/python/tf/keras/models/Sequentialfit) method on an instance of the `Model` class. We will use this to train our fully connected model
###Code
# Define the batch size and the number of epochs to use during training
BATCH_SIZE = 64
EPOCHS = 5
#arguments: input data, target data, batch_size, and the number of iterations over the dataset
model.fit(train_images, train_labels, batch_size=BATCH_SIZE, epochs=EPOCHS)
#noting how accuracy improves over epochs
#a smaller learning rate helped here: about 98.38% accuracy for 1e-2.
#the benefit plateaus, though: 1e-5 still yields 98.22% accuracy. Run-to-run
#variance will shift these numbers somewhat, but the ceiling is probably a
#limitation of the model itself.
###Output
Epoch 1/5
938/938 [==============================] - 2s 2ms/step - loss: 0.0607 - accuracy: 0.9827
Epoch 2/5
938/938 [==============================] - 2s 2ms/step - loss: 0.0559 - accuracy: 0.9830
Epoch 3/5
938/938 [==============================] - 2s 2ms/step - loss: 0.0590 - accuracy: 0.9826
Epoch 4/5
938/938 [==============================] - 2s 2ms/step - loss: 0.0550 - accuracy: 0.9836
Epoch 5/5
938/938 [==============================] - 2s 2ms/step - loss: 0.0541 - accuracy: 0.9838
###Markdown
 As the model trains, the loss and accuracy metrics are displayed. With five epochs and a learning rate of 0.01, this fully connected model should achieve an accuracy of approximately 0.97 (or 97%) on the training data. Evaluate accuracy on the test datasetNow that we've trained the model, we can ask it to make predictions about a test set that it hasn't seen before. In this example, the `test_images` array comprises our test dataset. To evaluate accuracy, we can check to see if the model's predictions match the labels from the `test_labels` array. Use the [`evaluate`](https://www.tensorflow.org/api_docs/python/tf/keras/models/Sequentialevaluate) method to evaluate the model on the test dataset!
###Code
'''TODO: Use the evaluate method to test the model!'''
#the test split was held out so we can evaluate on unseen data
test_loss, test_acc = model.evaluate(test_images, test_labels)
print('Test accuracy:', test_acc)
###Output
313/313 [==============================] - 1s 2ms/step - loss: 0.1026 - accuracy: 0.9709
Test accuracy: 0.9708999991416931
###Markdown
 You may observe that the accuracy on the test dataset is a little lower than the accuracy on the training dataset. This gap between training accuracy and test accuracy is an example of *overfitting*, when a machine learning model performs worse on new data than on its training data. What is the highest accuracy you can achieve with this first fully connected model? Since the handwritten digit classification task is pretty straightforward, you may be wondering how we can do better...![Deeper...](https://i.kym-cdn.com/photos/images/newsfeed/000/534/153/f87.jpg) 1.3 Convolutional Neural Network (CNN) for handwritten digit classification As we saw in lecture, convolutional neural networks (CNNs) are particularly well-suited for a variety of tasks in computer vision, and have achieved near-perfect accuracies on the MNIST dataset. We will now build a CNN composed of two convolutional layers and pooling layers, followed by two fully connected layers, and ultimately output a probability distribution over the 10 digit classes (0-9). The CNN we will be building is depicted below:![alt_text](https://raw.githubusercontent.com/aamini/introtodeeplearning/master/lab2/img/convnet_fig.png "CNN Architecture for MNIST Classification") Define the CNN modelWe'll use the same training and test datasets as before, and proceed as we did for our fully connected network to define and train our new CNN model. To do this we will explore two layers we have not encountered before: you can use [`keras.layers.Conv2D`](https://www.tensorflow.org/api_docs/python/tf/keras/layers/Conv2D) to define convolutional layers and [`keras.layers.MaxPool2D`](https://www.tensorflow.org/api_docs/python/tf/keras/layers/MaxPool2D) to define the pooling layers. Use the parameters shown in the network architecture above to define these layers and build the CNN model.
###Code
def build_cnn_model():
cnn_model = tf.keras.Sequential([
# TODO: Define the first convolutional layer
      #assuming a 3x3 kernel (the sliding "window")
tf.keras.layers.Conv2D(filters=24, kernel_size=(3,3), activation='relu'),
# TODO: Define the first max pooling layer
tf.keras.layers.MaxPool2D(pool_size=(2,2)),
# TODO: Define the second convolutional layer
tf.keras.layers.Conv2D(filters=36, kernel_size=(3,3), activation='relu'),
# TODO: Define the second max pooling layer
tf.keras.layers.MaxPool2D(pool_size=(2,2)),
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(128, activation=tf.nn.relu),
# TODO: Define the last Dense layer to output the classification
      # probabilities. Pay attention to the activation needed for a probability
# output
tf.keras.layers.Dense(10, activation='softmax')
])
return cnn_model
cnn_model = build_cnn_model()
# Initialize the model by passing some data through
cnn_model.predict(train_images[[0]])
# Print the summary of the layers in the model.
print(cnn_model.summary())
###Output
Model: "sequential_1"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
conv2d (Conv2D) (None, 26, 26, 24) 240
_________________________________________________________________
max_pooling2d (MaxPooling2D) (None, 13, 13, 24) 0
_________________________________________________________________
conv2d_1 (Conv2D) (None, 11, 11, 36) 7812
_________________________________________________________________
max_pooling2d_1 (MaxPooling2 (None, 5, 5, 36) 0
_________________________________________________________________
flatten_1 (Flatten) (None, 900) 0
_________________________________________________________________
dense_2 (Dense) (None, 128) 115328
_________________________________________________________________
dense_3 (Dense) (None, 10) 1290
=================================================================
Total params: 124,670
Trainable params: 124,670
Non-trainable params: 0
_________________________________________________________________
None
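###Markdown
The parameter counts in this summary can be reproduced by hand, which is a useful check that the layers are wired as intended: a `Conv2D` layer has kernel_height * kernel_width * input_channels * filters weights plus one bias per filter, and a `Dense` layer has inputs * units weights plus one bias per unit:
###Code
print(3*3*1*24 + 24) # first Conv2D: 3x3 kernel, 1 input channel, 24 filters -> 240
print(3*3*24*36 + 36) # second Conv2D: 3x3 kernel, 24 input channels, 36 filters -> 7812
print(900*128 + 128) # first Dense: 5*5*36 = 900 flattened inputs, 128 units -> 115328
print(128*10 + 10) # output Dense: 128 inputs, 10 units -> 1290
###Output
_____no_output_____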
###Markdown
Train and test the CNN modelNow, as before, we can define the loss function, optimizer, and metrics through the `compile` method. Compile the CNN model with an optimizer and learning rate of choice:
###Code
'''TODO: Define the compile operation with your optimizer and learning rate of choice'''
cnn_model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=1e-2), loss='sparse_categorical_crossentropy', metrics=['accuracy']) # TODO
###Output
_____no_output_____
###Markdown
As was the case with the fully connected model, we can train our CNN using the `fit` method via the Keras API.
###Code
'''TODO: Use model.fit to train the CNN model, with the same batch_size and number of epochs previously used.'''
cnn_model.fit(train_images, train_labels, batch_size=64, epochs=5)
###Output
Epoch 1/5
938/938 [==============================] - 3s 3ms/step - loss: 0.0218 - accuracy: 0.9933
Epoch 2/5
938/938 [==============================] - 3s 3ms/step - loss: 0.0164 - accuracy: 0.9951
Epoch 3/5
938/938 [==============================] - 2s 3ms/step - loss: 0.0151 - accuracy: 0.9955
Epoch 4/5
938/938 [==============================] - 3s 3ms/step - loss: 0.0120 - accuracy: 0.9963
Epoch 5/5
938/938 [==============================] - 2s 3ms/step - loss: 0.0134 - accuracy: 0.9960
###Markdown
Great! Now that we've trained the model, let's evaluate it on the test dataset using the [`evaluate`](https://www.tensorflow.org/api_docs/python/tf/keras/models/Sequentialevaluate) method:
###Code
'''TODO: Use the evaluate method to test the model!'''
test_loss, test_acc = cnn_model.evaluate(test_images, test_labels)
print('Test accuracy:', test_acc)
#Adamax achieved a promising accuracy of about 99% in earlier experiments
###Output
313/313 [==============================] - 1s 2ms/step - loss: 0.0546 - accuracy: 0.9881
Test accuracy: 0.988099992275238
###Markdown
What is the highest accuracy you're able to achieve using the CNN model, and how does the accuracy of the CNN model compare to the accuracy of the simple fully connected network? What optimizers and learning rates seem to be optimal for training the CNN model? Make predictions with the CNN modelWith the model trained, we can use it to make predictions about some images. The [`predict`](https://www.tensorflow.org/api_docs/python/tf/keras/models/Sequentialpredict) function call generates the output predictions given a set of input samples.
###Code
predictions = cnn_model.predict(test_images)
###Output
_____no_output_____
###Markdown
With this function call, the model has predicted the label for each image in the testing set. Let's take a look at the prediction for the first image in the test dataset:
###Code
predictions[0]
###Output
_____no_output_____
###Markdown
As you can see, a prediction is an array of 10 numbers. Recall that the output of our model is a probability distribution over the 10 digit classes. Thus, these numbers describe the model's "confidence" that the image corresponds to each of the 10 different digits. Let's look at the digit that has the highest confidence for the first image in the test dataset:
###Code
'''TODO: identify the digit with the highest confidence prediction for the first
image in the test dataset. '''
prediction = np.argmax(predictions[0])
print(prediction)
###Output
7
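###Markdown
Beyond the single most likely digit, the runner-up classes can be informative; `np.argsort` orders the classes by the model's confidence. A small sketch using the `predictions` computed above:
###Code
top3 = np.argsort(predictions[0])[::-1][:3] # indices of the 3 highest probabilities
for digit in top3:
    print(digit, predictions[0][digit])
###Output
_____no_output_____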
###Markdown
So, the model is most confident that this image is a "???". We can check the test label (remember, this is the true identity of the digit) to see if this prediction is correct:
###Code
print("Label of this digit is:", test_labels[0])
plt.imshow(test_images[0,:,:,0], cmap=plt.cm.binary)
###Output
Label of this digit is: 7
###Markdown
It is! Let's visualize the classification results on the MNIST dataset. We will plot images from the test dataset along with their predicted label, as well as a histogram that provides the prediction probabilities for each of the digits:
###Code
#@title Change the slider to look at the model's predictions! { run: "auto" }
image_index = 89 #@param {type:"slider", min:0, max:100, step:1}
plt.subplot(1,2,1)
mdl.lab2.plot_image_prediction(image_index, predictions, test_labels, test_images)
plt.subplot(1,2,2)
mdl.lab2.plot_value_prediction(image_index, predictions, test_labels)
###Output
_____no_output_____
###Markdown
 We can also plot several images along with their predictions, where correct prediction labels are blue and incorrect prediction labels are red. The number gives the percent confidence (out of 100) for the predicted label. Note the model can be very confident in an incorrect prediction!
###Code
# Plots the first X test images, their predicted label, and the true label
# Color correct predictions in blue, incorrect predictions in red
num_rows = 5
num_cols = 4
num_images = num_rows*num_cols
plt.figure(figsize=(2*2*num_cols, 2*num_rows))
for i in range(num_images):
plt.subplot(num_rows, 2*num_cols, 2*i+1)
mdl.lab2.plot_image_prediction(i, predictions, test_labels, test_images)
plt.subplot(num_rows, 2*num_cols, 2*i+2)
mdl.lab2.plot_value_prediction(i, predictions, test_labels)
###Output
_____no_output_____
###Markdown
 1.4 Training the model 2.0Earlier in the lab, we used the [`fit`](https://www.tensorflow.org/api_docs/python/tf/keras/models/Sequentialfit) function call to train the model. This function is quite high-level and intuitive, which is really useful for simpler models. As you may be able to tell, this function abstracts away many details in the training call, so we have less control over the training loop; that finer control can be useful in other contexts. As an alternative to this, we can use the [`tf.GradientTape`](https://www.tensorflow.org/api_docs/python/tf/GradientTape) class to record differentiation operations during training, and then call the [`tf.GradientTape.gradient`](https://www.tensorflow.org/api_docs/python/tf/GradientTapegradient) function to actually compute the gradients. You may recall seeing this in Lab 1 Part 1, but let's take another look at this here.We'll use this framework to train our `cnn_model` using stochastic gradient descent.
###Code
# Rebuild the CNN model
cnn_model = build_cnn_model()
batch_size = 12
loss_history = mdl.util.LossHistory(smoothing_factor=0.95) # to record the evolution of the loss
plotter = mdl.util.PeriodicPlotter(sec=2, xlabel='Iterations', ylabel='Loss', scale='semilogy')
optimizer = tf.keras.optimizers.SGD(learning_rate=1e-2) # define our optimizer
if hasattr(tqdm, '_instances'): tqdm._instances.clear() # clear if it exists
for idx in tqdm(range(0, train_images.shape[0], batch_size)):
# First grab a batch of training data and convert the input images to tensors
(images, labels) = (train_images[idx:idx+batch_size], train_labels[idx:idx+batch_size])
images = tf.convert_to_tensor(images, dtype=tf.float32)
# GradientTape to record differentiation operations
with tf.GradientTape() as tape:
#'''TODO: feed the images into the model and obtain the predictions'''
logits = cnn_model(images)
#'''TODO: compute the categorical cross entropy loss
loss_value = tf.keras.backend.sparse_categorical_crossentropy(labels, logits) # TODO
loss_history.append(loss_value.numpy().mean()) # append the loss to the loss_history record
plotter.plot(loss_history.get())
# Backpropagation
'''TODO: Use the tape to compute the gradient against all parameters in the CNN model.
Use cnn_model.trainable_variables to access these parameters.'''
grads = tape.gradient(loss_value, cnn_model.trainable_variables)
optimizer.apply_gradients(zip(grads, cnn_model.trainable_variables))
###Output
_____no_output_____
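###Markdown
Note that `loss_value` in the loop above is a vector with one entry per example; when a non-scalar target is passed to `tape.gradient`, TensorFlow differentiates its sum, so the update effectively descends the summed batch loss. A sketch of the more conventional mean-loss form, assuming `images`, `labels`, `cnn_model`, and `optimizer` from the last iteration are still in scope:
###Code
with tf.GradientTape() as tape:
    logits = cnn_model(images)
    per_example = tf.keras.backend.sparse_categorical_crossentropy(labels, logits)
    mean_loss = tf.reduce_mean(per_example) # scalar loss; gradients are 1/batch_size of the summed version
grads = tape.gradient(mean_loss, cnn_model.trainable_variables)
optimizer.apply_gradients(zip(grads, cnn_model.trainable_variables))
###Output
_____no_output_____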
###Markdown
Copyright Information
###Code
# Copyright 2020 MIT 6.S191 Introduction to Deep Learning. All Rights Reserved.
#
# Licensed under the MIT License. You may not use this file except in compliance
# with the License. Use and/or modification of this code outside of 6.S191 must
# reference:
#
# © MIT 6.S191: Introduction to Deep Learning
# http://introtodeeplearning.com
#
###Output
_____no_output_____
###Markdown
Laboratory 2: Computer Vision Part 1: MNIST Digit ClassificationIn the first portion of this lab, we will build and train a convolutional neural network (CNN) for classification of handwritten digits from the famous [MNIST](http://yann.lecun.com/exdb/mnist/) dataset. The MNIST dataset consists of 60,000 training images and 10,000 test images. Our classes are the digits 0-9.First, let's download the course repository, install dependencies, and import the relevant packages we'll need for this lab.
###Code
# Import Tensorflow 2.0
%tensorflow_version 2.x
import tensorflow as tf
!pip install mitdeeplearning
import mitdeeplearning as mdl
import matplotlib.pyplot as plt
import numpy as np
import random
from tqdm import tqdm
# Check that we are using a GPU, if not switch runtimes
# using Runtime > Change Runtime Type > GPU
assert len(tf.config.list_physical_devices('GPU')) > 0
###Output
Requirement already satisfied: mitdeeplearning in /usr/local/lib/python3.6/dist-packages (0.1.2)
Requirement already satisfied: tqdm in /usr/local/lib/python3.6/dist-packages (from mitdeeplearning) (4.41.1)
Requirement already satisfied: gym in /usr/local/lib/python3.6/dist-packages (from mitdeeplearning) (0.17.2)
Requirement already satisfied: regex in /usr/local/lib/python3.6/dist-packages (from mitdeeplearning) (2019.12.20)
Requirement already satisfied: numpy in /usr/local/lib/python3.6/dist-packages (from mitdeeplearning) (1.18.5)
Requirement already satisfied: pyglet<=1.5.0,>=1.4.0 in /usr/local/lib/python3.6/dist-packages (from gym->mitdeeplearning) (1.5.0)
Requirement already satisfied: scipy in /usr/local/lib/python3.6/dist-packages (from gym->mitdeeplearning) (1.4.1)
Requirement already satisfied: cloudpickle<1.4.0,>=1.2.0 in /usr/local/lib/python3.6/dist-packages (from gym->mitdeeplearning) (1.3.0)
Requirement already satisfied: future in /usr/local/lib/python3.6/dist-packages (from pyglet<=1.5.0,>=1.4.0->gym->mitdeeplearning) (0.16.0)
###Markdown
1.1 MNIST dataset Let's download and load the dataset and display a few random samples from it:
###Code
mnist = tf.keras.datasets.mnist
(train_images, train_labels), (test_images, test_labels) = mnist.load_data()
train_images = (np.expand_dims(train_images, axis=-1)/255.).astype(np.float32)
train_labels = (train_labels).astype(np.int64)
test_images = (np.expand_dims(test_images, axis=-1)/255.).astype(np.float32)
test_labels = (test_labels).astype(np.int64)
###Output
_____no_output_____
###Markdown
Our training set is made up of 28x28 grayscale images of handwritten digits. Let's visualize what some of these images and their corresponding training labels look like.
###Code
plt.figure(figsize=(10,10))
random_inds = np.random.choice(60000,36)
for i in range(36):
plt.subplot(6,6,i+1)
plt.xticks([])
plt.yticks([])
plt.grid(False)
image_ind = random_inds[i]
plt.imshow(np.squeeze(train_images[image_ind]), cmap=plt.cm.binary)
plt.xlabel(train_labels[image_ind])
###Output
_____no_output_____
###Markdown
 1.2 Neural Network for Handwritten Digit ClassificationWe'll first build a simple neural network consisting of two fully connected layers and apply this to the digit classification task. Our network will ultimately output a probability distribution over the 10 digit classes (0-9). This first architecture we will be building is depicted below:![alt_text](https://raw.githubusercontent.com/aamini/introtodeeplearning/master/lab2/img/mnist_2layers_arch.png "CNN Architecture for MNIST Classification") Fully connected neural network architectureTo define the architecture of this first fully connected neural network, we'll once again use the Keras API and define the model using the [`Sequential`](https://www.tensorflow.org/api_docs/python/tf/keras/models/Sequential) class. Note how we first use a [`Flatten`](https://www.tensorflow.org/api_docs/python/tf/keras/layers/Flatten) layer, which flattens the input so that it can be fed into the model. In this next block, you'll define the fully connected layers of this simple network.
###Code
def build_fc_model():
fc_model = tf.keras.Sequential([
# First define a Flatten layer
tf.keras.layers.Flatten(),
# '''TODO: Define the activation function for the first fully connected (Dense) layer.'''
tf.keras.layers.Dense(128, activation="relu"),
# '''TODO: Define the second Dense layer to output the classification probabilities'''
tf.keras.layers.Dense(10, activation="softmax")
])
return fc_model
model = build_fc_model()
###Output
_____no_output_____
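###Markdown
To see concretely what the `Flatten` layer does, pass a single image through the model and inspect the intermediate shape: 28 * 28 = 784 values feed the first `Dense` layer, and the softmax output sums to 1. A small sketch (the forward pass also builds the layer weights):
###Code
out = model(train_images[:1]) # one image through the whole model
print(model.layers[0](train_images[:1]).shape) # (1, 784) after Flatten
print(out.shape) # (1, 10) class probabilities
print(float(tf.reduce_sum(out))) # ~1.0, since the last layer is softmax
###Output
_____no_output_____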
###Markdown
 As we progress through this next portion, you may find that you'll want to make changes to the architecture defined above. **Note that in order to update the model later on, you'll need to re-run the above cell to re-initialize the model.** Let's take a step back and think about the network we've just created. The first layer in this network, `tf.keras.layers.Flatten`, transforms the format of the images from a 2d-array (28 x 28 pixels), to a 1d-array of 28 * 28 = 784 pixels. You can think of this layer as unstacking rows of pixels in the image and lining them up. There are no learned parameters in this layer; it only reformats the data.After the pixels are flattened, the network consists of a sequence of two `tf.keras.layers.Dense` layers. These are fully-connected neural layers. The first `Dense` layer has 128 nodes (or neurons). The second (and last) layer (which you've defined!) should return an array of probability scores that sum to 1. Each node contains a score that indicates the probability that the current image belongs to one of the handwritten digit classes.That defines our fully connected model! Compile the modelBefore training the model, we need to define a few more settings. These are added during the model's [`compile`](https://www.tensorflow.org/api_docs/python/tf/keras/models/Sequentialcompile) step:* *Loss function* — This defines how we measure how accurate the model is during training. As was covered in lecture, during training we want to minimize this function, which will "steer" the model in the right direction.* *Optimizer* — This defines how the model is updated based on the data it sees and its loss function.* *Metrics* — Here we can define metrics used to monitor the training and testing steps. In this example, we'll look at the *accuracy*, the fraction of the images that are correctly classified.We'll start out by using a stochastic gradient descent (SGD) optimizer initialized with a learning rate of 0.1. Since we are performing a categorical classification task, we'll want to use the [cross entropy loss](https://www.tensorflow.org/api_docs/python/tf/keras/metrics/sparse_categorical_crossentropy).You'll want to experiment with both the choice of optimizer and learning rate and evaluate how these affect the accuracy of the trained model.
###Code
'''TODO: Experiment with different optimizers and learning rates. How do these affect
the accuracy of the trained model? Which optimizers and/or learning rates yield
the best performance?'''
model.compile(optimizer=tf.keras.optimizers.Adam(),
loss='sparse_categorical_crossentropy',
metrics=['accuracy'])
###Output
_____no_output_____
###Markdown
Train the modelWe're now ready to train our model, which will involve feeding the training data (`train_images` and `train_labels`) into the model, and then asking it to learn the associations between images and labels. We'll also need to define the batch size and the number of epochs, or iterations over the MNIST dataset, to use during training. In Lab 1, we saw how we can use `GradientTape` to optimize losses and train models with stochastic gradient descent. After defining the model settings in the `compile` step, we can also accomplish training by calling the [`fit`](https://www.tensorflow.org/api_docs/python/tf/keras/models/Sequentialfit) method on an instance of the `Model` class. We will use this to train our fully connected model
###Code
# Define the batch size and the number of epochs to use during training
BATCH_SIZE = 64
EPOCHS = 5
model.fit(train_images, train_labels, batch_size=BATCH_SIZE, epochs=EPOCHS)
###Output
Epoch 1/5
938/938 [==============================] - 2s 2ms/step - loss: 0.3008 - accuracy: 0.9143
Epoch 2/5
938/938 [==============================] - 2s 2ms/step - loss: 0.1329 - accuracy: 0.9623
Epoch 3/5
938/938 [==============================] - 2s 2ms/step - loss: 0.0934 - accuracy: 0.9730
Epoch 4/5
938/938 [==============================] - 2s 2ms/step - loss: 0.0713 - accuracy: 0.9796
Epoch 5/5
938/938 [==============================] - 2s 2ms/step - loss: 0.0559 - accuracy: 0.9837
###Markdown
 As the model trains, the loss and accuracy metrics are displayed. With five epochs and a learning rate of 0.01, this fully connected model should achieve an accuracy of approximately 0.97 (or 97%) on the training data. Evaluate accuracy on the test datasetNow that we've trained the model, we can ask it to make predictions about a test set that it hasn't seen before. In this example, the `test_images` array comprises our test dataset. To evaluate accuracy, we can check to see if the model's predictions match the labels from the `test_labels` array. Use the [`evaluate`](https://www.tensorflow.org/api_docs/python/tf/keras/models/Sequentialevaluate) method to evaluate the model on the test dataset!
###Code
'''TODO: Use the evaluate method to test the model!'''
test_loss, test_acc = model.evaluate(test_images, test_labels)
print('Test accuracy:', test_acc)
###Output
313/313 [==============================] - 1s 2ms/step - loss: 0.0791 - accuracy: 0.9758
Test accuracy: 0.9757999777793884
###Markdown
 You may observe that the accuracy on the test dataset is a little lower than the accuracy on the training dataset. This gap between training accuracy and test accuracy is an example of *overfitting*, when a machine learning model performs worse on new data than on its training data. What is the highest accuracy you can achieve with this first fully connected model? Since the handwritten digit classification task is pretty straightforward, you may be wondering how we can do better...![Deeper...](https://i.kym-cdn.com/photos/images/newsfeed/000/534/153/f87.jpg) 1.3 Convolutional Neural Network (CNN) for handwritten digit classification As we saw in lecture, convolutional neural networks (CNNs) are particularly well-suited for a variety of tasks in computer vision, and have achieved near-perfect accuracies on the MNIST dataset. We will now build a CNN composed of two convolutional layers and pooling layers, followed by two fully connected layers, and ultimately output a probability distribution over the 10 digit classes (0-9). The CNN we will be building is depicted below:![alt_text](https://raw.githubusercontent.com/aamini/introtodeeplearning/master/lab2/img/convnet_fig.png "CNN Architecture for MNIST Classification") Define the CNN modelWe'll use the same training and test datasets as before, and proceed as we did for our fully connected network to define and train our new CNN model. To do this we will explore two layers we have not encountered before: you can use [`keras.layers.Conv2D`](https://www.tensorflow.org/api_docs/python/tf/keras/layers/Conv2D) to define convolutional layers and [`keras.layers.MaxPool2D`](https://www.tensorflow.org/api_docs/python/tf/keras/layers/MaxPool2D) to define the pooling layers. Use the parameters shown in the network architecture above to define these layers and build the CNN model.
###Code
def build_cnn_model():
cnn_model = tf.keras.Sequential([
# TODO: Define the first convolutional layer
      tf.keras.layers.Conv2D(24, 3, activation='relu', input_shape=(28, 28, 1)), # ReLU after the conv, per the architecture figure
# TODO: Define the first max pooling layer
tf.keras.layers.MaxPool2D(pool_size=(2, 2)),
# TODO: Define the second convolutional layer
      tf.keras.layers.Conv2D(36, 3, activation='relu'), # ReLU, matching the figure
# TODO: Define the second max pooling layer
tf.keras.layers.MaxPool2D(pool_size=(2, 2)),
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(128, activation=tf.nn.relu),
# TODO: Define the last Dense layer to output the classification
      # probabilities. Pay attention to the activation needed for a probability
# output
tf.keras.layers.Dense(10, activation="softmax")
])
return cnn_model
cnn_model = build_cnn_model()
# Initialize the model by passing some data through
cnn_model.predict(train_images[[0]])
# Print the summary of the layers in the model.
print(cnn_model.summary())
###Output
Model: "sequential_1"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
conv2d (Conv2D) (None, 26, 26, 24) 240
_________________________________________________________________
max_pooling2d (MaxPooling2D) (None, 13, 13, 24) 0
_________________________________________________________________
conv2d_1 (Conv2D) (None, 11, 11, 36) 7812
_________________________________________________________________
max_pooling2d_1 (MaxPooling2 (None, 5, 5, 36) 0
_________________________________________________________________
flatten_1 (Flatten) (None, 900) 0
_________________________________________________________________
dense_2 (Dense) (None, 128) 115328
_________________________________________________________________
dense_3 (Dense) (None, 10) 1290
=================================================================
Total params: 124,670
Trainable params: 124,670
Non-trainable params: 0
_________________________________________________________________
None
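###Markdown
The output shapes in this summary follow from simple layer arithmetic: a 3x3 convolution with 'valid' padding shrinks each spatial dimension by 2, and each 2x2 max pool halves it (rounding down). A sketch of the chain from the 28x28 input to the 900-unit flatten:
###Code
size = 28
size = size - 3 + 1 # first conv, 'valid' padding -> 26
size = size // 2 # first 2x2 max pool -> 13
size = size - 3 + 1 # second conv -> 11
size = size // 2 # second 2x2 max pool -> 5 (floor of 5.5)
print(size * size * 36) # 900 features feeding the Dense layer
###Output
_____no_output_____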
###Markdown
Train and test the CNN modelNow, as before, we can define the loss function, optimizer, and metrics through the `compile` method. Compile the CNN model with an optimizer and learning rate of choice:
###Code
'''TODO: Define the compile operation with your optimizer and learning rate of choice'''
cnn_model.compile(optimizer=tf.keras.optimizers.Adam(), loss='sparse_categorical_crossentropy', metrics=['accuracy'])
###Output
_____no_output_____
###Markdown
As was the case with the fully connected model, we can train our CNN using the `fit` method via the Keras API.
###Code
'''TODO: Use model.fit to train the CNN model, with the same batch_size and number of epochs previously used.'''
cnn_model.fit(train_images, train_labels, batch_size=BATCH_SIZE, epochs=EPOCHS)
###Output
Epoch 1/5
938/938 [==============================] - 2s 3ms/step - loss: 0.1702 - accuracy: 0.9503
Epoch 2/5
938/938 [==============================] - 2s 3ms/step - loss: 0.0517 - accuracy: 0.9841
Epoch 3/5
938/938 [==============================] - 2s 3ms/step - loss: 0.0343 - accuracy: 0.9894
Epoch 4/5
938/938 [==============================] - 2s 3ms/step - loss: 0.0250 - accuracy: 0.9922
Epoch 5/5
938/938 [==============================] - 2s 3ms/step - loss: 0.0191 - accuracy: 0.9939
###Markdown
Great! Now that we've trained the model, let's evaluate it on the test dataset using the [`evaluate`](https://www.tensorflow.org/api_docs/python/tf/keras/models/Sequentialevaluate) method:
###Code
'''TODO: Use the evaluate method to test the model!'''
test_loss, test_acc = cnn_model.evaluate(test_images, test_labels)
print('Test accuracy:', test_acc)
###Output
313/313 [==============================] - 1s 2ms/step - loss: 0.0402 - accuracy: 0.9876
Test accuracy: 0.9876000285148621
###Markdown
What is the highest accuracy you're able to achieve using the CNN model, and how does the accuracy of the CNN model compare to the accuracy of the simple fully connected network? What optimizers and learning rates seem to be optimal for training the CNN model? Make predictions with the CNN modelWith the model trained, we can use it to make predictions about some images. The [`predict`](https://www.tensorflow.org/api_docs/python/tf/keras/models/Sequentialpredict) function call generates the output predictions given a set of input samples.
###Code
predictions = cnn_model.predict(test_images)
###Output
_____no_output_____
###Markdown
With this function call, the model has predicted the label for each image in the testing set. Let's take a look at the prediction for the first image in the test dataset:
###Code
predictions[0]
###Output
_____no_output_____
###Markdown
As you can see, a prediction is an array of 10 numbers. Recall that the output of our model is a probability distribution over the 10 digit classes. Thus, these numbers describe the model's "confidence" that the image corresponds to each of the 10 different digits. Let's look at the digit that has the highest confidence for the first image in the test dataset:
###Code
'''TODO: identify the digit with the highest confidence prediction for the first
image in the test dataset. '''
prediction = np.argmax(predictions[0])
print(prediction)
###Output
7
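###Markdown
Applying the same `argmax` reduction across the whole test set recovers the accuracy that `evaluate` reported, which is a quick consistency check:
###Code
predicted_labels = np.argmax(predictions, axis=1) # most likely class per test image
print(np.mean(predicted_labels == test_labels)) # should match the evaluate() accuracy
###Output
_____no_output_____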
###Markdown
So, the model is most confident that this image is a "???". We can check the test label (remember, this is the true identity of the digit) to see if this prediction is correct:
###Code
print("Label of this digit is:", test_labels[0])
plt.imshow(test_images[0,:,:,0], cmap=plt.cm.binary)
###Output
Label of this digit is: 7
###Markdown
It is! Let's visualize the classification results on the MNIST dataset. We will plot images from the test dataset along with their predicted label, as well as a histogram that provides the prediction probabilities for each of the digits:
###Code
#@title Change the slider to look at the model's predictions! { run: "auto" }
image_index = 88 #@param {type:"slider", min:0, max:100, step:1}
plt.subplot(1,2,1)
mdl.lab2.plot_image_prediction(image_index, predictions, test_labels, test_images)
plt.subplot(1,2,2)
mdl.lab2.plot_value_prediction(image_index, predictions, test_labels)
###Output
_____no_output_____
###Markdown
We can also plot several images along with their predictions, where correct prediction labels are blue and incorrect prediction labels are red. The number gives the percent confidence (out of 100) for the predicted label. Note the model can be very confident in an incorrect prediction!
###Code
# Plots the first X test images, their predicted label, and the true label
# Color correct predictions in blue, incorrect predictions in red
num_rows = 5
num_cols = 4
num_images = num_rows*num_cols
plt.figure(figsize=(2*2*num_cols, 2*num_rows))
for i in range(num_images):
plt.subplot(num_rows, 2*num_cols, 2*i+1)
mdl.lab2.plot_image_prediction(i, predictions, test_labels, test_images)
plt.subplot(num_rows, 2*num_cols, 2*i+2)
mdl.lab2.plot_value_prediction(i, predictions, test_labels)
###Output
_____no_output_____
###Markdown
 1.4 Training the model 2.0Earlier in the lab, we used the [`fit`](https://www.tensorflow.org/api_docs/python/tf/keras/models/Sequentialfit) function call to train the model. This function is quite high-level and intuitive, which is really useful for simpler models. As you may be able to tell, this function abstracts away many details in the training call, so we have less control over the training loop; that finer control can be useful in other contexts. As an alternative to this, we can use the [`tf.GradientTape`](https://www.tensorflow.org/api_docs/python/tf/GradientTape) class to record differentiation operations during training, and then call the [`tf.GradientTape.gradient`](https://www.tensorflow.org/api_docs/python/tf/GradientTapegradient) function to actually compute the gradients. You may recall seeing this in Lab 1 Part 1, but let's take another look at this here.We'll use this framework to train our `cnn_model` using stochastic gradient descent.
###Code
# Rebuild the CNN model
cnn_model = build_cnn_model()
batch_size = 12
loss_history = mdl.util.LossHistory(smoothing_factor=0.95) # to record the evolution of the loss
plotter = mdl.util.PeriodicPlotter(sec=2, xlabel='Iterations', ylabel='Loss', scale='semilogy')
optimizer = tf.keras.optimizers.SGD(learning_rate=1e-2) # define our optimizer
if hasattr(tqdm, '_instances'): tqdm._instances.clear() # clear if it exists
for idx in tqdm(range(0, train_images.shape[0], batch_size)):
# First grab a batch of training data and convert the input images to tensors
(images, labels) = (train_images[idx:idx+batch_size], train_labels[idx:idx+batch_size])
images = tf.convert_to_tensor(images, dtype=tf.float32)
# GradientTape to record differentiation operations
with tf.GradientTape() as tape:
#'''TODO: feed the images into the model and obtain the predictions'''
logits = cnn_model(images)
#'''TODO: compute the categorical cross entropy loss
loss_value = tf.keras.backend.sparse_categorical_crossentropy(labels, logits) # TODO
loss_history.append(loss_value.numpy().mean()) # append the loss to the loss_history record
plotter.plot(loss_history.get())
# Backpropagation
'''TODO: Use the tape to compute the gradient against all parameters in the CNN model.
Use cnn_model.trainable_variables to access these parameters.'''
grads = tape.gradient(loss_value, cnn_model.trainable_variables)
optimizer.apply_gradients(zip(grads, cnn_model.trainable_variables))
###Output
_____no_output_____
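###Markdown
The `LossHistory(smoothing_factor=0.95)` helper records an exponentially smoothed loss so the plot stays readable despite noisy per-batch values. We have not inspected `mdl.util`, but its behavior is presumably close to the standard exponential moving average sketched below (this class is illustrative, not the library's actual code):
###Code
class SmoothedLoss:
    """Illustrative stand-in for an EMA loss recorder (not mdl.util's actual code)."""
    def __init__(self, smoothing_factor=0.95):
        self.alpha = smoothing_factor
        self.values = []
    def append(self, value):
        if self.values: # blend the new value with the running average
            value = self.alpha * self.values[-1] + (1 - self.alpha) * value
        self.values.append(value)
    def get(self):
        return self.values
###Output
_____no_output_____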
###Markdown
Copyright Information
###Code
# Copyright 2020 MIT 6.S191 Introduction to Deep Learning. All Rights Reserved.
#
# Licensed under the MIT License. You may not use this file except in compliance
# with the License. Use and/or modification of this code outside of 6.S191 must
# reference:
#
# © MIT 6.S191: Introduction to Deep Learning
# http://introtodeeplearning.com
#
###Output
_____no_output_____
###Markdown
Laboratory 2: Computer Vision Part 1: MNIST Digit ClassificationIn the first portion of this lab, we will build and train a convolutional neural network (CNN) for classification of handwritten digits from the famous [MNIST](http://yann.lecun.com/exdb/mnist/) dataset. The MNIST dataset consists of 60,000 training images and 10,000 test images. Our classes are the digits 0-9.First, let's download the course repository, install dependencies, and import the relevant packages we'll need for this lab.
###Code
# Import Tensorflow 2.0
%tensorflow_version 2.x
import tensorflow as tf
!pip install mitdeeplearning
import mitdeeplearning as mdl
import matplotlib.pyplot as plt
import numpy as np
import random
from tqdm import tqdm
# Check that we are using a GPU, if not switch runtimes
# using Runtime > Change Runtime Type > GPU
assert len(tf.config.list_physical_devices('GPU')) > 0
###Output
Collecting mitdeeplearning
  Downloading https://files.pythonhosted.org/packages/8b/3b/b9174b68dc10832356d02a2d83a64b43a24f1762c172754407d22fc8f960/mitdeeplearning-0.1.2.tar.gz (2.1MB)
     |████████████████████████████████| 2.1MB 9.6MB/s
Requirement already satisfied: numpy in /usr/local/lib/python3.6/dist-packages (from mitdeeplearning) (1.19.4)
Requirement already satisfied: regex in /usr/local/lib/python3.6/dist-packages (from mitdeeplearning) (2019.12.20)
Requirement already satisfied: tqdm in /usr/local/lib/python3.6/dist-packages (from mitdeeplearning) (4.41.1)
Requirement already satisfied: gym in /usr/local/lib/python3.6/dist-packages (from mitdeeplearning) (0.17.3)
Requirement already satisfied: scipy in /usr/local/lib/python3.6/dist-packages (from gym->mitdeeplearning) (1.4.1)
Requirement already satisfied: cloudpickle<1.7.0,>=1.2.0 in /usr/local/lib/python3.6/dist-packages (from gym->mitdeeplearning) (1.3.0)
Requirement already satisfied: pyglet<=1.5.0,>=1.4.0 in /usr/local/lib/python3.6/dist-packages (from gym->mitdeeplearning) (1.5.0)
Requirement already satisfied: future in /usr/local/lib/python3.6/dist-packages (from pyglet<=1.5.0,>=1.4.0->gym->mitdeeplearning) (0.16.0)
Building wheels for collected packages: mitdeeplearning
  Building wheel for mitdeeplearning (setup.py) ... done
Created wheel for mitdeeplearning: filename=mitdeeplearning-0.1.2-cp36-none-any.whl size=2114587 sha256=99ead65c27abedcc71aabb6f82214c24517b13ada2f4b8faf9d2dee585bc2276
Stored in directory: /root/.cache/pip/wheels/27/e1/73/5f01c787621d8a3c857f59876c79e304b9b64db9ff5bd61b74
Successfully built mitdeeplearning
Installing collected packages: mitdeeplearning
Successfully installed mitdeeplearning-0.1.2
###Markdown
1.1 MNIST dataset Let's download and load the dataset and display a few random samples from it:
###Code
mnist = tf.keras.datasets.mnist
(train_images, train_labels), (test_images, test_labels) = mnist.load_data()
train_images = (np.expand_dims(train_images, axis=-1)/255.).astype(np.float32)
train_labels = (train_labels).astype(np.int64)
test_images = (np.expand_dims(test_images, axis=-1)/255.).astype(np.float32)
test_labels = (test_labels).astype(np.int64)
###Output
Downloading data from https://storage.googleapis.com/tensorflow/tf-keras-datasets/mnist.npz
11493376/11490434 [==============================] - 0s 0us/step
###Markdown
Our training set is made up of 28x28 grayscale images of handwritten digits. Let's visualize what some of these images and their corresponding training labels look like.
###Code
plt.figure(figsize=(10,10))
random_inds = np.random.choice(60000,36)
for i in range(36):
plt.subplot(6,6,i+1)
plt.xticks([])
plt.yticks([])
plt.grid(False)
image_ind = random_inds[i]
plt.imshow(np.squeeze(train_images[image_ind]), cmap=plt.cm.binary)
plt.xlabel(train_labels[image_ind])
###Output
_____no_output_____
###Markdown
 1.2 Neural Network for Handwritten Digit ClassificationWe'll first build a simple neural network consisting of two fully connected layers and apply this to the digit classification task. Our network will ultimately output a probability distribution over the 10 digit classes (0-9). This first architecture we will be building is depicted below:![alt_text](https://raw.githubusercontent.com/aamini/introtodeeplearning/master/lab2/img/mnist_2layers_arch.png "CNN Architecture for MNIST Classification") Fully connected neural network architectureTo define the architecture of this first fully connected neural network, we'll once again use the Keras API and define the model using the [`Sequential`](https://www.tensorflow.org/api_docs/python/tf/keras/models/Sequential) class. Note how we first use a [`Flatten`](https://www.tensorflow.org/api_docs/python/tf/keras/layers/Flatten) layer, which flattens the input so that it can be fed into the model. In this next block, you'll define the fully connected layers of this simple network.
###Code
def build_fc_model():
fc_model = tf.keras.Sequential([
# First define a Flatten layer
tf.keras.layers.Flatten(),
# '''TODO: Define the activation function for the first fully connected (Dense) layer.'''
tf.keras.layers.Dense(128, activation='relu'),
# '''TODO: Define the second Dense layer to output the classification probabilities'''
tf.keras.layers.Dense(10, activation="sigmoid")
])
return fc_model
model = build_fc_model()
###Output
_____no_output_____
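###Markdown
A quick way to see why `softmax`, rather than independent `sigmoid` units, is the right final activation here: softmax outputs are coupled and always sum to 1, forming a valid probability distribution, while per-class sigmoids generally do not. A small illustration:
###Code
z = tf.constant([[2.0, 1.0, 0.1]]) # arbitrary pre-activation scores
print(float(tf.reduce_sum(tf.nn.softmax(z)))) # 1.0: a valid probability distribution
print(float(tf.reduce_sum(tf.sigmoid(z)))) # ~2.14: sigmoids are independent per class
###Output
_____no_output_____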
###Markdown
 As we progress through this next portion, you may find that you'll want to make changes to the architecture defined above. **Note that in order to update the model later on, you'll need to re-run the above cell to re-initialize the model.** Let's take a step back and think about the network we've just created. The first layer in this network, `tf.keras.layers.Flatten`, transforms the format of the images from a 2d-array (28 x 28 pixels), to a 1d-array of 28 * 28 = 784 pixels. You can think of this layer as unstacking rows of pixels in the image and lining them up. There are no learned parameters in this layer; it only reformats the data.After the pixels are flattened, the network consists of a sequence of two `tf.keras.layers.Dense` layers. These are fully-connected neural layers. The first `Dense` layer has 128 nodes (or neurons). The second (and last) layer (which you've defined!) should return an array of probability scores that sum to 1. Each node contains a score that indicates the probability that the current image belongs to one of the handwritten digit classes.That defines our fully connected model! Compile the modelBefore training the model, we need to define a few more settings. These are added during the model's [`compile`](https://www.tensorflow.org/api_docs/python/tf/keras/models/Sequentialcompile) step:* *Loss function* — This defines how we measure how accurate the model is during training. As was covered in lecture, during training we want to minimize this function, which will "steer" the model in the right direction.* *Optimizer* — This defines how the model is updated based on the data it sees and its loss function.* *Metrics* — Here we can define metrics used to monitor the training and testing steps. In this example, we'll look at the *accuracy*, the fraction of the images that are correctly classified.We'll start out by using a stochastic gradient descent (SGD) optimizer initialized with a learning rate of 0.1. Since we are performing a categorical classification task, we'll want to use the [cross entropy loss](https://www.tensorflow.org/api_docs/python/tf/keras/metrics/sparse_categorical_crossentropy).You'll want to experiment with both the choice of optimizer and learning rate and evaluate how these affect the accuracy of the trained model.
###Code
'''TODO: Experiment with different optimizers and learning rates. How do these affect
the accuracy of the trained model? Which optimizers and/or learning rates yield
the best performance?'''
model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=3e-1),
loss='sparse_categorical_crossentropy',
metrics=['accuracy'])
###Output
_____no_output_____
###Markdown
Train the modelWe're now ready to train our model, which will involve feeding the training data (`train_images` and `train_labels`) into the model, and then asking it to learn the associations between images and labels. We'll also need to define the batch size and the number of epochs, or iterations over the MNIST dataset, to use during training. In Lab 1, we saw how we can use `GradientTape` to optimize losses and train models with stochastic gradient descent. After defining the model settings in the `compile` step, we can also accomplish training by calling the [`fit`](https://www.tensorflow.org/api_docs/python/tf/keras/models/Sequentialfit) method on an instance of the `Model` class. We will use this to train our fully connected model
###Code
# Define the batch size and the number of epochs to use during training
BATCH_SIZE = 64
EPOCHS = 5
model.fit(train_images, train_labels, batch_size=BATCH_SIZE, epochs=EPOCHS)
###Output
Epoch 1/5
938/938 [==============================] - 2s 2ms/step - loss: 0.0461 - accuracy: 0.9867
Epoch 2/5
938/938 [==============================] - 2s 2ms/step - loss: 0.0369 - accuracy: 0.9884
Epoch 3/5
938/938 [==============================] - 2s 2ms/step - loss: 0.0274 - accuracy: 0.9919
Epoch 4/5
938/938 [==============================] - 2s 2ms/step - loss: 0.0269 - accuracy: 0.9922
Epoch 5/5
938/938 [==============================] - 2s 2ms/step - loss: 0.0200 - accuracy: 0.9947
###Markdown
 As the model trains, the loss and accuracy metrics are displayed. With five epochs and a learning rate of 0.01, this fully connected model should achieve an accuracy of approximately 0.97 (or 97%) on the training data. Evaluate accuracy on the test datasetNow that we've trained the model, we can ask it to make predictions about a test set that it hasn't seen before. In this example, the `test_images` array comprises our test dataset. To evaluate accuracy, we can check to see if the model's predictions match the labels from the `test_labels` array. Use the [`evaluate`](https://www.tensorflow.org/api_docs/python/tf/keras/models/Sequentialevaluate) method to evaluate the model on the test dataset!
###Code
'''TODO: Use the evaluate method to test the model!'''
test_loss, test_acc = model.evaluate(test_images, test_labels, batch_size=BATCH_SIZE)
print('Test accuracy:', test_acc)
###Output
157/157 [==============================] - 0s 2ms/step - loss: 0.0771 - accuracy: 0.9760
Test accuracy: 0.9760000109672546
###Markdown
 You may observe that the accuracy on the test dataset is a little lower than the accuracy on the training dataset. This gap between training accuracy and test accuracy is an example of *overfitting*, when a machine learning model performs worse on new data than on its training data. What is the highest accuracy you can achieve with this first fully connected model? Since the handwritten digit classification task is pretty straightforward, you may be wondering how we can do better...![Deeper...](https://i.kym-cdn.com/photos/images/newsfeed/000/534/153/f87.jpg) 1.3 Convolutional Neural Network (CNN) for handwritten digit classification As we saw in lecture, convolutional neural networks (CNNs) are particularly well-suited for a variety of tasks in computer vision, and have achieved near-perfect accuracies on the MNIST dataset. We will now build a CNN composed of two convolutional layers and pooling layers, followed by two fully connected layers, and ultimately output a probability distribution over the 10 digit classes (0-9). The CNN we will be building is depicted below:![alt_text](https://raw.githubusercontent.com/aamini/introtodeeplearning/master/lab2/img/convnet_fig.png "CNN Architecture for MNIST Classification") Define the CNN modelWe'll use the same training and test datasets as before, and proceed as we did for our fully connected network to define and train our new CNN model. To do this we will explore two layers we have not encountered before: you can use [`keras.layers.Conv2D`](https://www.tensorflow.org/api_docs/python/tf/keras/layers/Conv2D) to define convolutional layers and [`keras.layers.MaxPool2D`](https://www.tensorflow.org/api_docs/python/tf/keras/layers/MaxPool2D) to define the pooling layers. Use the parameters shown in the network architecture above to define these layers and build the CNN model.
###Code
def build_cnn_model():
cnn_model = tf.keras.Sequential([
# TODO: Define the first convolutional layer
tf.keras.layers.Conv2D(24, (3, 3), padding='valid', activation='relu'),
# TODO: Define the first max pooling layer
tf.keras.layers.MaxPool2D((2, 2)),
# TODO: Define the second convolutional layer
tf.keras.layers.Conv2D(36, (3, 3), padding='valid', activation='relu'),
# TODO: Define the second max pooling layer
tf.keras.layers.MaxPool2D((2, 2)),
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(128, activation=tf.nn.relu),
# TODO: Define the last Dense layer to output the classification
      # probabilities. Pay attention to the activation needed for a probability
# output
      tf.keras.layers.Dense(10, activation='softmax') # softmax, not sigmoid: the class scores should sum to 1
])
return cnn_model
cnn_model = build_cnn_model()
# Initialize the model by passing some data through
cnn_model.predict(train_images[[0]])
# Print the summary of the layers in the model.
print(cnn_model.summary())
###Output
Model: "sequential_28"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
conv2d_52 (Conv2D) (None, 26, 26, 24) 240
_________________________________________________________________
max_pooling2d_52 (MaxPooling (None, 13, 13, 24) 0
_________________________________________________________________
conv2d_53 (Conv2D) (None, 11, 11, 36) 7812
_________________________________________________________________
max_pooling2d_53 (MaxPooling (None, 5, 5, 36) 0
_________________________________________________________________
flatten_28 (Flatten) (None, 900) 0
_________________________________________________________________
dense_56 (Dense) (None, 128) 115328
_________________________________________________________________
dense_57 (Dense) (None, 10) 1290
=================================================================
Total params: 124,670
Trainable params: 124,670
Non-trainable params: 0
_________________________________________________________________
None
###Markdown
Train and test the CNN modelNow, as before, we can define the loss function, optimizer, and metrics through the `compile` method. Compile the CNN model with an optimizer and learning rate of choice:
###Code
'''TODO: Define the compile operation with your optimizer and learning rate of choice'''
cnn_model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3), loss='sparse_categorical_crossentropy', metrics=['accuracy']) # TODO
###Output
_____no_output_____
###Markdown
As was the case with the fully connected model, we can train our CNN using the `fit` method via the Keras API.
###Code
'''TODO: Use model.fit to train the CNN model, with the same batch_size and number of epochs previously used.'''
cnn_model.fit(train_images, train_labels, batch_size=BATCH_SIZE, epochs=EPOCHS)
###Output
Epoch 1/5
938/938 [==============================] - 3s 3ms/step - loss: 0.4271 - accuracy: 0.8717
Epoch 2/5
938/938 [==============================] - 3s 3ms/step - loss: 0.0594 - accuracy: 0.9820
Epoch 3/5
938/938 [==============================] - 3s 3ms/step - loss: 0.0399 - accuracy: 0.9872
Epoch 4/5
938/938 [==============================] - 3s 3ms/step - loss: 0.0315 - accuracy: 0.9901
Epoch 5/5
938/938 [==============================] - 3s 3ms/step - loss: 0.0232 - accuracy: 0.9929
###Markdown
Great! Now that we've trained the model, let's evaluate it on the test dataset using the [`evaluate`](https://www.tensorflow.org/api_docs/python/tf/keras/models/Sequential#evaluate) method:
###Code
'''TODO: Use the evaluate method to test the model!'''
test_loss, test_acc = cnn_model.evaluate(test_images, test_labels, batch_size=BATCH_SIZE)
print('Test accuracy:', test_acc)
###Output
157/157 [==============================] - 0s 2ms/step - loss: 0.0376 - accuracy: 0.9875
Test accuracy: 0.987500011920929
###Markdown
What is the highest accuracy you're able to achieve using the CNN model, and how does the accuracy of the CNN model compare to the accuracy of the simple fully connected network? What optimizers and learning rates seem to be optimal for training the CNN model? Make predictions with the CNN modelWith the model trained, we can use it to make predictions about some images. The [`predict`](https://www.tensorflow.org/api_docs/python/tf/keras/models/Sequentialpredict) function call generates the output predictions given a set of input samples.
###Code
predictions = cnn_model.predict(test_images)
###Output
WARNING:tensorflow:8 out of the last 8 calls to <function Model.make_predict_function.<locals>.predict_function at 0x7f0cc8150c80> triggered tf.function retracing. Tracing is expensive and the excessive number of tracings could be due to (1) creating @tf.function repeatedly in a loop, (2) passing tensors with different shapes, (3) passing Python objects instead of tensors. For (1), please define your @tf.function outside of the loop. For (2), @tf.function has experimental_relax_shapes=True option that relaxes argument shapes that can avoid unnecessary retracing. For (3), please refer to https://www.tensorflow.org/guide/function#controlling_retracing and https://www.tensorflow.org/api_docs/python/tf/function for more details.
###Markdown
With this function call, the model has predicted the label for each image in the testing set. Let's take a look at the prediction for the first image in the test dataset:
###Code
predictions[0]
###Output
_____no_output_____
###Markdown
As you can see, a prediction is an array of 10 numbers. Recall that the output of our model is a probability distribution over the 10 digit classes. Thus, these numbers describe the model's "confidence" that the image corresponds to each of the 10 different digits. Let's look at the digit that has the highest confidence for the first image in the test dataset:
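As a quick sanity check (a small addition, reusing the `predictions` array computed above), the entries of each prediction vector should be non-negative and sum to roughly 1, since they form a probability distribution:
###Code
import numpy as np

# The softmax output layer guarantees a valid probability distribution.
print(np.sum(predictions[0]))       # ~1.0
print(np.all(predictions[0] >= 0))  # True
###Output
_____no_output_____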
###Code
'''TODO: identify the digit with the highest confidence prediction for the first
image in the test dataset. '''
prediction = np.argmax(predictions[0])
print(prediction)
###Output
7
###Markdown
So, the model is most confident that this image is a "???". We can check the test label (remember, this is the true identity of the digit) to see if this prediction is correct:
###Code
print("Label of this digit is:", test_labels[0])
plt.imshow(test_images[0,:,:,0], cmap=plt.cm.binary)
###Output
Label of this digit is: 7
###Markdown
It is! Let's visualize the classification results on the MNIST dataset. We will plot images from the test dataset along with their predicted label, as well as a histogram that provides the prediction probabilities for each of the digits:
###Code
#@title Change the slider to look at the model's predictions! { run: "auto" }
image_index = 10 #@param {type:"slider", min:0, max:100, step:1}
plt.subplot(1,2,1)
mdl.lab2.plot_image_prediction(image_index, predictions, test_labels, test_images)
plt.subplot(1,2,2)
mdl.lab2.plot_value_prediction(image_index, predictions, test_labels)
###Output
_____no_output_____
###Markdown
We can also plot several images along with their predictions, where correct prediction labels are blue and incorrect prediction labels are red. The number gives the percent confidence (out of 100) for the predicted label. Note the model can be very confident in an incorrect prediction!
###Code
# Plots the first X test images, their predicted label, and the true label
# Color correct predictions in blue, incorrect predictions in red
num_rows = 5
num_cols = 4
num_images = num_rows*num_cols
plt.figure(figsize=(2*2*num_cols, 2*num_rows))
for i in range(num_images):
plt.subplot(num_rows, 2*num_cols, 2*i+1)
mdl.lab2.plot_image_prediction(i, predictions, test_labels, test_images)
plt.subplot(num_rows, 2*num_cols, 2*i+2)
mdl.lab2.plot_value_prediction(i, predictions, test_labels)
###Output
_____no_output_____
###Markdown
1.4 Training the model 2.0Earlier in the lab, we used the [`fit`](https://www.tensorflow.org/api_docs/python/tf/keras/models/Sequential#fit) function call to train the model. This function is quite high-level and intuitive, which is really useful for simpler models. As you may be able to tell, this function abstracts away many details of the training call, leaving us with less control over how the model is trained; that finer control can be useful in other contexts. As an alternative, we can use the [`tf.GradientTape`](https://www.tensorflow.org/api_docs/python/tf/GradientTape) class to record differentiation operations during training, and then call the [`tf.GradientTape.gradient`](https://www.tensorflow.org/api_docs/python/tf/GradientTape#gradient) function to actually compute the gradients. You may recall seeing this in Lab 1 Part 1, but let's take another look at this here.We'll use this framework to train our `cnn_model` using stochastic gradient descent.
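Before applying this to the CNN, here is a minimal standalone sketch (an illustrative addition) of the `GradientTape` pattern on a toy function:
###Code
import tensorflow as tf

# Record y = x^2 on the tape, then recover dy/dx = 2x at x = 3.0.
x = tf.Variable(3.0)
with tf.GradientTape() as tape:
    y = x * x
dy_dx = tape.gradient(y, x)
print(dy_dx.numpy())  # 6.0
###Output
_____no_output_____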
###Code
# Rebuild the CNN model
cnn_model = build_cnn_model()
batch_size = 12
loss_history = mdl.util.LossHistory(smoothing_factor=0.95) # to record the evolution of the loss
plotter = mdl.util.PeriodicPlotter(sec=2, xlabel='Iterations', ylabel='Loss', scale='semilogy')
optimizer = tf.keras.optimizers.Adam(learning_rate=1e-4) # define our optimizer
if hasattr(tqdm, '_instances'): tqdm._instances.clear() # clear if it exists
for idx in tqdm(range(0, train_images.shape[0], batch_size)):
# First grab a batch of training data and convert the input images to tensors
(images, labels) = (train_images[idx:idx+batch_size], train_labels[idx:idx+batch_size])
images = tf.convert_to_tensor(images, dtype=tf.float32)
# GradientTape to record differentiation operations
with tf.GradientTape() as tape:
#'''TODO: feed the images into the model and obtain the predictions'''
logits = cnn_model(images)
#'''TODO: compute the categorical cross entropy loss
loss_value = tf.keras.backend.sparse_categorical_crossentropy(labels, logits) # TODO
loss_history.append(loss_value.numpy().mean()) # append the loss to the loss_history record
plotter.plot(loss_history.get())
# Backpropagation
'''TODO: Use the tape to compute the gradient against all parameters in the CNN model.
Use cnn_model.trainable_variables to access these parameters.'''
grads = tape.gradient(loss_value, cnn_model.trainable_variables)
optimizer.apply_gradients(zip(grads, cnn_model.trainable_variables))
###Output
_____no_output_____
###Markdown
Copyright Information
###Code
# Copyright 2022 MIT 6.S191 Introduction to Deep Learning. All Rights Reserved.
#
# Licensed under the MIT License. You may not use this file except in compliance
# with the License. Use and/or modification of this code outside of 6.S191 must
# reference:
#
# © MIT 6.S191: Introduction to Deep Learning
# http://introtodeeplearning.com
#
###Output
_____no_output_____
###Markdown
Laboratory 2: Computer Vision Part 1: MNIST Digit ClassificationIn the first portion of this lab, we will build and train a convolutional neural network (CNN) for classification of handwritten digits from the famous [MNIST](http://yann.lecun.com/exdb/mnist/) dataset. The MNIST dataset consists of 60,000 training images and 10,000 test images. Our classes are the digits 0-9.First, let's download the course repository, install dependencies, and import the relevant packages we'll need for this lab.
###Code
# Import Tensorflow 2.0
%tensorflow_version 2.x
import tensorflow as tf
!pip install mitdeeplearning
import mitdeeplearning as mdl
import matplotlib.pyplot as plt
import numpy as np
import random
from tqdm import tqdm
# Check that we are using a GPU, if not switch runtimes
# using Runtime > Change Runtime Type > GPU
assert len(tf.config.list_physical_devices('GPU')) > 0
###Output
Collecting mitdeeplearning
Downloading mitdeeplearning-0.2.0.tar.gz (2.1 MB)
[K |████████████████████████████████| 2.1 MB 5.4 MB/s
[?25hRequirement already satisfied: numpy in /usr/local/lib/python3.7/dist-packages (from mitdeeplearning) (1.19.5)
Requirement already satisfied: regex in /usr/local/lib/python3.7/dist-packages (from mitdeeplearning) (2019.12.20)
Requirement already satisfied: tqdm in /usr/local/lib/python3.7/dist-packages (from mitdeeplearning) (4.62.3)
Requirement already satisfied: gym in /usr/local/lib/python3.7/dist-packages (from mitdeeplearning) (0.17.3)
Requirement already satisfied: cloudpickle<1.7.0,>=1.2.0 in /usr/local/lib/python3.7/dist-packages (from gym->mitdeeplearning) (1.3.0)
Requirement already satisfied: scipy in /usr/local/lib/python3.7/dist-packages (from gym->mitdeeplearning) (1.4.1)
Requirement already satisfied: pyglet<=1.5.0,>=1.4.0 in /usr/local/lib/python3.7/dist-packages (from gym->mitdeeplearning) (1.5.0)
Requirement already satisfied: future in /usr/local/lib/python3.7/dist-packages (from pyglet<=1.5.0,>=1.4.0->gym->mitdeeplearning) (0.16.0)
Building wheels for collected packages: mitdeeplearning
Building wheel for mitdeeplearning (setup.py) ... [?25l[?25hdone
Created wheel for mitdeeplearning: filename=mitdeeplearning-0.2.0-py3-none-any.whl size=2115442 sha256=2ed1df6c714143169c18785b13cc1a10ab14f91dd67f895feb82ea6f59cf0a77
Stored in directory: /root/.cache/pip/wheels/9a/b9/4f/99b7c8c5c75355550b83e1fcfc02956fb40c35eb01e2262877
Successfully built mitdeeplearning
Installing collected packages: mitdeeplearning
Successfully installed mitdeeplearning-0.2.0
###Markdown
1.1 MNIST dataset Let's download and load the dataset and display a few random samples from it:
###Code
mnist = tf.keras.datasets.mnist
(train_images, train_labels), (test_images, test_labels) = mnist.load_data()
train_images = (np.expand_dims(train_images, axis=-1)/255.).astype(np.float32)
train_labels = (train_labels).astype(np.int64)
test_images = (np.expand_dims(test_images, axis=-1)/255.).astype(np.float32)
test_labels = (test_labels).astype(np.int64)
###Output
_____no_output_____
###Markdown
Our training set is made up of 28x28 grayscale images of handwritten digits. Let's visualize what some of these images and their corresponding training labels look like.
###Code
plt.figure(figsize=(10,10))
random_inds = np.random.choice(60000,36)
for i in range(36):
plt.subplot(6,6,i+1)
plt.xticks([])
plt.yticks([])
plt.grid(False)
image_ind = random_inds[i]
plt.imshow(np.squeeze(train_images[image_ind]), cmap=plt.cm.binary)
plt.xlabel(train_labels[image_ind])
###Output
_____no_output_____
###Markdown
1.2 Neural Network for Handwritten Digit ClassificationWe'll first build a simple neural network consisting of two fully connected layers and apply this to the digit classification task. Our network will ultimately output a probability distribution over the 10 digit classes (0-9). This first architecture we will be building is depicted below:![alt_text](https://raw.githubusercontent.com/aamini/introtodeeplearning/master/lab2/img/mnist_2layers_arch.png "CNN Architecture for MNIST Classification") Fully connected neural network architectureTo define the architecture of this first fully connected neural network, we'll once again use the Keras API and define the model using the [`Sequential`](https://www.tensorflow.org/api_docs/python/tf/keras/models/Sequential) class. Note how we first use a [`Flatten`](https://www.tensorflow.org/api_docs/python/tf/keras/layers/Flatten) layer, which flattens the input so that it can be fed into the model. In this next block, you'll define the fully connected layers of this simple network.
###Code
def build_fc_model():
fc_model = tf.keras.Sequential([
# First define a Flatten layer
tf.keras.layers.Flatten(),
# '''TODO: Define the activation function for the first fully connected (Dense) layer.'''
tf.keras.layers.Dense(128, activation='relu'),
# '''TODO: Define the second Dense layer to output the classification probabilities'''
tf.keras.layers.Dense(10, activation='softmax')
])
return fc_model
model = build_fc_model()
###Output
_____no_output_____
###Markdown
As we progress through this next portion, you may find that you'll want to make changes to the architecture defined above. **Note that in order to update the model later on, you'll need to re-run the above cell to re-initialize the model.** Let's take a step back and think about the network we've just created. The first layer in this network, `tf.keras.layers.Flatten`, transforms the format of the images from a 2d-array (28 x 28 pixels), to a 1d-array of 28 * 28 = 784 pixels. You can think of this layer as unstacking rows of pixels in the image and lining them up. There are no learned parameters in this layer; it only reformats the data.After the pixels are flattened, the network consists of a sequence of two `tf.keras.layers.Dense` layers. These are fully-connected neural layers. The first `Dense` layer has 128 nodes (or neurons). The second (and last) layer (which you've defined!) should return an array of probability scores that sum to 1. Each node contains a score that indicates the probability that the current image belongs to one of the handwritten digit classes.That defines our fully connected model! Compile the modelBefore training the model, we need to define a few more settings. These are added during the model's [`compile`](https://www.tensorflow.org/api_docs/python/tf/keras/models/Sequentialcompile) step:* *Loss function* — This defines how we measure how accurate the model is during training. As was covered in lecture, during training we want to minimize this function, which will "steer" the model in the right direction.* *Optimizer* — This defines how the model is updated based on the data it sees and its loss function.* *Metrics* — Here we can define metrics used to monitor the training and testing steps. In this example, we'll look at the *accuracy*, the fraction of the images that are correctly classified.We'll start out by using a stochastic gradient descent (SGD) optimizer initialized with a learning rate of 0.1. Since we are performing a categorical classification task, we'll want to use the [cross entropy loss](https://www.tensorflow.org/api_docs/python/tf/keras/metrics/sparse_categorical_crossentropy).You'll want to experiment with both the choice of optimizer and learning rate and evaluate how these affect the accuracy of the trained model.
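Before compiling, here is a quick shape check (an illustrative addition using a dummy batch) of the `Flatten` behavior described above:
###Code
import tensorflow as tf

# Flatten turns each 28x28x1 image into a 784-dimensional vector,
# leaving the batch dimension untouched.
dummy_batch = tf.zeros((2, 28, 28, 1))  # a fake batch of two MNIST-sized images
print(tf.keras.layers.Flatten()(dummy_batch).shape)  # (2, 784)
###Output
_____no_output_____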
###Code
'''TODO: Experiment with different optimizers and learning rates. How do these affect
the accuracy of the trained model? Which optimizers and/or learning rates yield
the best performance?'''
model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=1e-1),
loss='sparse_categorical_crossentropy',
metrics=['accuracy'])
###Output
_____no_output_____
###Markdown
Train the modelWe're now ready to train our model, which will involve feeding the training data (`train_images` and `train_labels`) into the model, and then asking it to learn the associations between images and labels. We'll also need to define the batch size and the number of epochs, or iterations over the MNIST dataset, to use during training. In Lab 1, we saw how we can use `GradientTape` to optimize losses and train models with stochastic gradient descent. After defining the model settings in the `compile` step, we can also accomplish training by calling the [`fit`](https://www.tensorflow.org/api_docs/python/tf/keras/models/Sequentialfit) method on an instance of the `Model` class. We will use this to train our fully connected model
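As a side note (a small addition), the number of optimization steps per epoch follows directly from the dataset size and batch size, which is where the "938/938" progress bars below come from:
###Code
import math

# 60,000 training images split into batches of 64 gives ceil(60000/64) steps.
print(math.ceil(60000 / 64))  # 938
###Output
_____no_output_____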
###Code
# Define the batch size and the number of epochs to use during training
BATCH_SIZE = 64
EPOCHS = 5
model.fit(train_images, train_labels, batch_size=BATCH_SIZE, epochs=EPOCHS)
###Output
Epoch 1/5
938/938 [==============================] - 3s 3ms/step - loss: 0.3673 - accuracy: 0.8978
Epoch 2/5
938/938 [==============================] - 3s 3ms/step - loss: 0.1957 - accuracy: 0.9452
Epoch 3/5
938/938 [==============================] - 3s 3ms/step - loss: 0.1483 - accuracy: 0.9579
Epoch 4/5
938/938 [==============================] - 3s 3ms/step - loss: 0.1205 - accuracy: 0.9658
Epoch 5/5
938/938 [==============================] - 3s 3ms/step - loss: 0.1019 - accuracy: 0.9711
###Markdown
As the model trains, the loss and accuracy metrics are displayed. With five epochs and a learning rate of 0.1, this fully connected model should achieve an accuracy of approximately 0.97 (or 97%) on the training data. Evaluate accuracy on the test datasetNow that we've trained the model, we can ask it to make predictions about a test set that it hasn't seen before. In this example, the `test_images` array comprises our test dataset. To evaluate accuracy, we can check to see if the model's predictions match the labels from the `test_labels` array. Use the [`evaluate`](https://www.tensorflow.org/api_docs/python/tf/keras/models/Sequential#evaluate) method to evaluate the model on the test dataset!
###Code
'''TODO: Use the evaluate method to test the model!'''
test_loss, test_acc = model.evaluate(x=test_images, y=test_labels)
print('Test accuracy:', test_acc)
###Output
313/313 [==============================] - 1s 3ms/step - loss: 0.1085 - accuracy: 0.9664
Test accuracy: 0.9664000272750854
###Markdown
You may observe that the accuracy on the test dataset is a little lower than the accuracy on the training dataset. This gap between training accuracy and test accuracy is an example of *overfitting*, when a machine learning model performs worse on new data than on its training data. What is the highest accuracy you can achieve with this first fully connected model? Since the handwritten digit classification task is pretty straightforward, you may be wondering how we can do better...![Deeper...](https://i.kym-cdn.com/photos/images/newsfeed/000/534/153/f87.jpg) 1.3 Convolutional Neural Network (CNN) for handwritten digit classification As we saw in lecture, convolutional neural networks (CNNs) are particularly well-suited for a variety of tasks in computer vision, and have achieved near-perfect accuracies on the MNIST dataset. We will now build a CNN composed of two convolutional layers and pooling layers, followed by two fully connected layers, and ultimately output a probability distribution over the 10 digit classes (0-9). The CNN we will be building is depicted below:![alt_text](https://raw.githubusercontent.com/aamini/introtodeeplearning/master/lab2/img/convnet_fig.png "CNN Architecture for MNIST Classification") Define the CNN modelWe'll use the same training and test datasets as before, and proceed similarly as our fully connected network to define and train our new CNN model. To do this we will explore two layers we have not encountered before: you can use [`keras.layers.Conv2D` ](https://www.tensorflow.org/api_docs/python/tf/keras/layers/Conv2D) to define convolutional layers and [`keras.layers.MaxPool2D`](https://www.tensorflow.org/api_docs/python/tf/keras/layers/MaxPool2D) to define the pooling layers. Use the parameters shown in the network architecture above to define these layers and build the CNN model.
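One detail worth checking before you fill in the layers: the padding mode controls the output size of a convolution. The sketch below (an illustrative addition on a dummy input) compares the two options:
###Code
import tensorflow as tf

# With padding='valid' (the default), a 3x3 kernel shrinks each spatial
# dimension by 2 (28 -> 26); padding='same' preserves the spatial size.
x = tf.zeros((1, 28, 28, 1))
print(tf.keras.layers.Conv2D(24, (3, 3), padding='valid')(x).shape)  # (1, 26, 26, 24)
print(tf.keras.layers.Conv2D(24, (3, 3), padding='same')(x).shape)   # (1, 28, 28, 24)
###Output
_____no_output_____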
###Code
def build_cnn_model():
cnn_model = tf.keras.Sequential([
# TODO: Define the first convolutional layer
      tf.keras.layers.Conv2D(filters=24, kernel_size=(3,3), activation='relu', input_shape=(28,28,1)),
# TODO: Define the first max pooling layer
tf.keras.layers.MaxPool2D((2,2)),
# TODO: Define the second convolutional layer
      tf.keras.layers.Conv2D(36, (3,3), activation='relu'),
# TODO: Define the second max pooling layer
tf.keras.layers.MaxPool2D((2,2)),
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(128, activation=tf.nn.relu),
# TODO: Define the last Dense layer to output the classification
      # probabilities. Pay attention to the activation needed for a probability
# output
tf.keras.layers.Dense(10,activation='softmax')
])
return cnn_model
cnn_model = build_cnn_model()
# Initialize the model by passing some data through
cnn_model.predict(train_images[[0]])
# Print the summary of the layers in the model.
print(cnn_model.summary())
###Output
Model: "sequential_7"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
conv2d_4 (Conv2D) (None, 26, 26, 24) 240
max_pooling2d_4 (MaxPooling (None, 13, 13, 24) 0
2D)
conv2d_5 (Conv2D) (None, 11, 11, 36) 7812
max_pooling2d_5 (MaxPooling (None, 5, 5, 36) 0
2D)
flatten_7 (Flatten) (None, 900) 0
dense_14 (Dense) (None, 128) 115328
dense_15 (Dense) (None, 10) 1290
=================================================================
Total params: 124,670
Trainable params: 124,670
Non-trainable params: 0
_________________________________________________________________
None
###Markdown
Train and test the CNN modelNow, as before, we can define the loss function, optimizer, and metrics through the `compile` method. Compile the CNN model with an optimizer and learning rate of choice:
###Code
'''TODO: Define the compile operation with your optimizer and learning rate of choice'''
cnn_model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=1e-1), loss=tf.keras.losses.sparse_categorical_crossentropy, metrics=['accuracy']) # TODO
###Output
_____no_output_____
###Markdown
As was the case with the fully connected model, we can train our CNN using the `fit` method via the Keras API.
###Code
'''TODO: Use model.fit to train the CNN model, with the same batch_size and number of epochs previously used.'''
cnn_model.fit(train_images, train_labels, batch_size=BATCH_SIZE, epochs=EPOCHS)
###Output
Epoch 1/5
938/938 [==============================] - 6s 6ms/step - loss: 0.2212 - accuracy: 0.9316
Epoch 2/5
938/938 [==============================] - 6s 6ms/step - loss: 0.0667 - accuracy: 0.9793
Epoch 3/5
938/938 [==============================] - 5s 6ms/step - loss: 0.0461 - accuracy: 0.9852
Epoch 4/5
938/938 [==============================] - 5s 6ms/step - loss: 0.0344 - accuracy: 0.9891
Epoch 5/5
938/938 [==============================] - 5s 6ms/step - loss: 0.0268 - accuracy: 0.9915
###Markdown
Great! Now that we've trained the model, let's evaluate it on the test dataset using the [`evaluate`](https://www.tensorflow.org/api_docs/python/tf/keras/models/Sequential#evaluate) method:
###Code
'''TODO: Use the evaluate method to test the model!'''
test_loss, test_acc = cnn_model.evaluate(test_images, test_labels)
print('Test accuracy:', test_acc)
###Output
313/313 [==============================] - 1s 3ms/step - loss: 0.1085 - accuracy: 0.9664
Test accuracy: 0.9664000272750854
###Markdown
What is the highest accuracy you're able to achieve using the CNN model, and how does the accuracy of the CNN model compare to the accuracy of the simple fully connected network? What optimizers and learning rates seem to be optimal for training the CNN model? Make predictions with the CNN modelWith the model trained, we can use it to make predictions about some images. The [`predict`](https://www.tensorflow.org/api_docs/python/tf/keras/models/Sequentialpredict) function call generates the output predictions given a set of input samples.
###Code
predictions = cnn_model.predict(test_images)
###Output
_____no_output_____
###Markdown
With this function call, the model has predicted the label for each image in the testing set. Let's take a look at the prediction for the first image in the test dataset:
###Code
predictions[0]
###Output
_____no_output_____
###Markdown
As you can see, a prediction is an array of 10 numbers. Recall that the output of our model is a probability distribution over the 10 digit classes. Thus, these numbers describe the model's "confidence" that the image corresponds to each of the 10 different digits. Let's look at the digit that has the highest confidence for the first image in the test dataset:
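Beyond the single most likely digit, we can also rank all 10 classes by their predicted probability. This short sketch (an addition, reusing the `predictions` array from above) inspects the top three classes for the first test image:
###Code
import numpy as np

# Sort class indices by predicted probability, most likely first.
top3 = np.argsort(predictions[0])[::-1][:3]
print(top3)                  # class indices
print(predictions[0][top3])  # their predicted probabilities
###Output
_____no_output_____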
###Code
'''TODO: identify the digit with the highest confidence prediction for the first
image in the test dataset. '''
prediction = tf.math.argmax(predictions[0])
print(prediction)
###Output
tf.Tensor(7, shape=(), dtype=int64)
###Markdown
So, the model is most confident that this image is a "???". We can check the test label (remember, this is the true identity of the digit) to see if this prediction is correct:
###Code
print("Label of this digit is:", test_labels[0])
plt.imshow(test_images[0,:,:,0], cmap=plt.cm.binary)
###Output
Label of this digit is: 7
###Markdown
It is! Let's visualize the classification results on the MNIST dataset. We will plot images from the test dataset along with their predicted label, as well as a histogram that provides the prediction probabilities for each of the digits:
###Code
#@title Change the slider to look at the model's predictions! { run: "auto" }
image_index = 87 #@param {type:"slider", min:0, max:100, step:1}
plt.subplot(1,2,1)
mdl.lab2.plot_image_prediction(image_index, predictions, test_labels, test_images)
plt.subplot(1,2,2)
mdl.lab2.plot_value_prediction(image_index, predictions, test_labels)
###Output
_____no_output_____
###Markdown
We can also plot several images along with their predictions, where correct prediction labels are blue and incorrect prediction labels are red. The number gives the percent confidence (out of 100) for the predicted label. Note the model can be very confident in an incorrect prediction!
###Code
# Plots the first X test images, their predicted label, and the true label
# Color correct predictions in blue, incorrect predictions in red
num_rows = 5
num_cols = 4
num_images = num_rows*num_cols
plt.figure(figsize=(2*2*num_cols, 2*num_rows))
for i in range(num_images):
plt.subplot(num_rows, 2*num_cols, 2*i+1)
mdl.lab2.plot_image_prediction(i, predictions, test_labels, test_images)
plt.subplot(num_rows, 2*num_cols, 2*i+2)
mdl.lab2.plot_value_prediction(i, predictions, test_labels)
###Output
_____no_output_____
###Markdown
1.4 Training the model 2.0Earlier in the lab, we used the [`fit`](https://www.tensorflow.org/api_docs/python/tf/keras/models/Sequential#fit) function call to train the model. This function is quite high-level and intuitive, which is really useful for simpler models. As you may be able to tell, this function abstracts away many details of the training call, leaving us with less control over how the model is trained; that finer control can be useful in other contexts. As an alternative, we can use the [`tf.GradientTape`](https://www.tensorflow.org/api_docs/python/tf/GradientTape) class to record differentiation operations during training, and then call the [`tf.GradientTape.gradient`](https://www.tensorflow.org/api_docs/python/tf/GradientTape#gradient) function to actually compute the gradients. You may recall seeing this in Lab 1 Part 1, but let's take another look at this here.We'll use this framework to train our `cnn_model` using stochastic gradient descent.
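The loop below also records a smoothed loss curve via `mdl.util.LossHistory(smoothing_factor=0.95)`. As background, exponential smoothing of this kind typically works as in the generic sketch below (an illustrative addition; `LossHistory`'s exact implementation may differ):
###Code
# Each new value is blended with the running average:
# smoothed = 0.95 * smoothed + 0.05 * new_value
def ema(values, smoothing=0.95):
    out, avg = [], None
    for v in values:
        avg = v if avg is None else smoothing * avg + (1 - smoothing) * v
        out.append(avg)
    return out

print(ema([1.0, 0.5, 0.25, 0.125]))
###Output
_____no_output_____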
###Code
# Rebuild the CNN model
cnn_model = build_cnn_model()
batch_size = 12
loss_history = mdl.util.LossHistory(smoothing_factor=0.95) # to record the evolution of the loss
plotter = mdl.util.PeriodicPlotter(sec=2, xlabel='Iterations', ylabel='Loss', scale='semilogy')
optimizer = tf.keras.optimizers.SGD(learning_rate=1e-2) # define our optimizer
if hasattr(tqdm, '_instances'): tqdm._instances.clear() # clear if it exists
for idx in tqdm(range(0, train_images.shape[0], batch_size)):
# First grab a batch of training data and convert the input images to tensors
(images, labels) = (train_images[idx:idx+batch_size], train_labels[idx:idx+batch_size])
images = tf.convert_to_tensor(images, dtype=tf.float32)
# GradientTape to record differentiation operations
with tf.GradientTape() as tape:
#'''TODO: feed the images into the model and obtain the predictions'''
logits = cnn_model(images)
#'''TODO: compute the categorical cross entropy loss
    # The model's final layer is a softmax, so its outputs are already
    # probabilities; from_logits must therefore be left False (the default).
    loss_value = tf.keras.backend.sparse_categorical_crossentropy(labels, logits) # TODO
loss_history.append(loss_value.numpy().mean()) # append the loss to the loss_history record
plotter.plot(loss_history.get())
# Backpropagation
'''TODO: Use the tape to compute the gradient against all parameters in the CNN model.
Use cnn_model.trainable_variables to access these parameters.'''
grads = tape.gradient(loss_value, cnn_model.trainable_variables)
optimizer.apply_gradients(zip(grads, cnn_model.trainable_variables))
###Output
_____no_output_____
###Markdown
Copyright Information
###Code
# Copyright 2021 MIT 6.S191 Introduction to Deep Learning. All Rights Reserved.
#
# Licensed under the MIT License. You may not use this file except in compliance
# with the License. Use and/or modification of this code outside of 6.S191 must
# reference:
#
# © MIT 6.S191: Introduction to Deep Learning
# http://introtodeeplearning.com
#
###Output
_____no_output_____
###Markdown
Laboratory 2: Computer Vision Part 1: MNIST Digit ClassificationIn the first portion of this lab, we will build and train a convolutional neural network (CNN) for classification of handwritten digits from the famous [MNIST](http://yann.lecun.com/exdb/mnist/) dataset. The MNIST dataset consists of 60,000 training images and 10,000 test images. Our classes are the digits 0-9.First, let's download the course repository, install dependencies, and import the relevant packages we'll need for this lab.
###Code
# Import Tensorflow 2.0
%tensorflow_version 2.x
import tensorflow as tf
!pip install mitdeeplearning
import mitdeeplearning as mdl
import matplotlib.pyplot as plt
import numpy as np
import random
from tqdm import tqdm
# Check that we are using a GPU, if not switch runtimes
# using Runtime > Change Runtime Type > GPU
assert len(tf.config.list_physical_devices('GPU')) > 0
###Output
Collecting mitdeeplearning
Downloading mitdeeplearning-0.2.0.tar.gz (2.1 MB)
[K |████████████████████████████████| 2.1 MB 4.1 MB/s
[?25hRequirement already satisfied: numpy in /usr/local/lib/python3.7/dist-packages (from mitdeeplearning) (1.19.5)
Requirement already satisfied: regex in /usr/local/lib/python3.7/dist-packages (from mitdeeplearning) (2019.12.20)
Requirement already satisfied: tqdm in /usr/local/lib/python3.7/dist-packages (from mitdeeplearning) (4.62.3)
Requirement already satisfied: gym in /usr/local/lib/python3.7/dist-packages (from mitdeeplearning) (0.17.3)
Requirement already satisfied: cloudpickle<1.7.0,>=1.2.0 in /usr/local/lib/python3.7/dist-packages (from gym->mitdeeplearning) (1.3.0)
Requirement already satisfied: scipy in /usr/local/lib/python3.7/dist-packages (from gym->mitdeeplearning) (1.4.1)
Requirement already satisfied: pyglet<=1.5.0,>=1.4.0 in /usr/local/lib/python3.7/dist-packages (from gym->mitdeeplearning) (1.5.0)
Requirement already satisfied: future in /usr/local/lib/python3.7/dist-packages (from pyglet<=1.5.0,>=1.4.0->gym->mitdeeplearning) (0.16.0)
Building wheels for collected packages: mitdeeplearning
Building wheel for mitdeeplearning (setup.py) ... [?25l[?25hdone
Created wheel for mitdeeplearning: filename=mitdeeplearning-0.2.0-py3-none-any.whl size=2115442 sha256=0e9f6b41af138df346e55d93b4b27f4fb6fbb6efc89937c6aa998ce3c5b80fb1
Stored in directory: /root/.cache/pip/wheels/9a/b9/4f/99b7c8c5c75355550b83e1fcfc02956fb40c35eb01e2262877
Successfully built mitdeeplearning
Installing collected packages: mitdeeplearning
Successfully installed mitdeeplearning-0.2.0
###Markdown
1.1 MNIST dataset Let's download and load the dataset and display a few random samples from it:
###Code
mnist = tf.keras.datasets.mnist
(train_images, train_labels), (test_images, test_labels) = mnist.load_data()
train_images = (np.expand_dims(train_images, axis=-1)/255.).astype(np.float32)
train_labels = (train_labels).astype(np.int64)
test_images = (np.expand_dims(test_images, axis=-1)/255.).astype(np.float32)
test_labels = (test_labels).astype(np.int64)
###Output
Downloading data from https://storage.googleapis.com/tensorflow/tf-keras-datasets/mnist.npz
11493376/11490434 [==============================] - 0s 0us/step
11501568/11490434 [==============================] - 0s 0us/step
###Markdown
Our training set is made up of 28x28 grayscale images of handwritten digits. Let's visualize what some of these images and their corresponding training labels look like.
###Code
plt.figure(figsize=(10,10))
random_inds = np.random.choice(60000,36)
for i in range(36):
plt.subplot(6,6,i+1)
plt.xticks([])
plt.yticks([])
plt.grid(False)
image_ind = random_inds[i]
plt.imshow(np.squeeze(train_images[image_ind]), cmap=plt.cm.binary)
plt.xlabel(train_labels[image_ind])
###Output
_____no_output_____
###Markdown
1.2 Neural Network for Handwritten Digit ClassificationWe'll first build a simple neural network consisting of two fully connected layers and apply this to the digit classification task. Our network will ultimately output a probability distribution over the 10 digit classes (0-9). This first architecture we will be building is depicted below:![alt_text](https://raw.githubusercontent.com/aamini/introtodeeplearning/master/lab2/img/mnist_2layers_arch.png "CNN Architecture for MNIST Classification") Fully connected neural network architectureTo define the architecture of this first fully connected neural network, we'll once again use the Keras API and define the model using the [`Sequential`](https://www.tensorflow.org/api_docs/python/tf/keras/models/Sequential) class. Note how we first use a [`Flatten`](https://www.tensorflow.org/api_docs/python/tf/keras/layers/Flatten) layer, which flattens the input so that it can be fed into the model. In this next block, you'll define the fully connected layers of this simple network.
###Code
def build_fc_model():
fc_model = tf.keras.Sequential([
# First define a Flatten layer
tf.keras.layers.Flatten(),
# '''TODO: Define the activation function for the first fully connected (Dense) layer.'''
tf.keras.layers.Dense(128, activation= "relu"),
# '''TODO: Define the second Dense layer to output the classification probabilities'''
tf.keras.layers.Dense(10, activation="softmax")
])
return fc_model
model = build_fc_model()
###Output
_____no_output_____
###Markdown
As we progress through this next portion, you may find that you'll want to make changes to the architecture defined above. **Note that in order to update the model later on, you'll need to re-run the above cell to re-initialize the model.** Let's take a step back and think about the network we've just created. The first layer in this network, `tf.keras.layers.Flatten`, transforms the format of the images from a 2d-array (28 x 28 pixels), to a 1d-array of 28 * 28 = 784 pixels. You can think of this layer as unstacking rows of pixels in the image and lining them up. There are no learned parameters in this layer; it only reformats the data.After the pixels are flattened, the network consists of a sequence of two `tf.keras.layers.Dense` layers. These are fully-connected neural layers. The first `Dense` layer has 128 nodes (or neurons). The second (and last) layer (which you've defined!) should return an array of probability scores that sum to 1. Each node contains a score that indicates the probability that the current image belongs to one of the handwritten digit classes.That defines our fully connected model! Compile the modelBefore training the model, we need to define a few more settings. These are added during the model's [`compile`](https://www.tensorflow.org/api_docs/python/tf/keras/models/Sequentialcompile) step:* *Loss function* — This defines how we measure how accurate the model is during training. As was covered in lecture, during training we want to minimize this function, which will "steer" the model in the right direction.* *Optimizer* — This defines how the model is updated based on the data it sees and its loss function.* *Metrics* — Here we can define metrics used to monitor the training and testing steps. In this example, we'll look at the *accuracy*, the fraction of the images that are correctly classified.We'll start out by using a stochastic gradient descent (SGD) optimizer initialized with a learning rate of 0.1. Since we are performing a categorical classification task, we'll want to use the [cross entropy loss](https://www.tensorflow.org/api_docs/python/tf/keras/metrics/sparse_categorical_crossentropy).You'll want to experiment with both the choice of optimizer and learning rate and evaluate how these affect the accuracy of the trained model.
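To make the loss concrete, here is a hand computation of sparse categorical cross entropy for a single hypothetical example (an illustrative addition; the probabilities below are made up):
###Code
import numpy as np
import tensorflow as tf

# For one example, the loss is -log(probability assigned to the true class).
probs = np.array([[0.1, 0.7, 0.2]])  # hypothetical model output over 3 classes
label = np.array([1])                # true class index
print(-np.log(probs[0, label[0]]))   # ~0.3567
print(tf.keras.losses.sparse_categorical_crossentropy(label, probs).numpy())
###Output
_____no_output_____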
###Code
'''TODO: Experiment with different optimizers and learning rates. How do these affect
the accuracy of the trained model? Which optimizers and/or learning rates yield
the best performance?'''
model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=1e-1),
loss='sparse_categorical_crossentropy',
metrics=['accuracy'])
###Output
_____no_output_____
###Markdown
Train the modelWe're now ready to train our model, which will involve feeding the training data (`train_images` and `train_labels`) into the model, and then asking it to learn the associations between images and labels. We'll also need to define the batch size and the number of epochs, or iterations over the MNIST dataset, to use during training. In Lab 1, we saw how we can use `GradientTape` to optimize losses and train models with stochastic gradient descent. After defining the model settings in the `compile` step, we can also accomplish training by calling the [`fit`](https://www.tensorflow.org/api_docs/python/tf/keras/models/Sequentialfit) method on an instance of the `Model` class. We will use this to train our fully connected model
###Code
# Define the batch size and the number of epochs to use during training
BATCH_SIZE = 64
EPOCHS = 5
model.fit(train_images, train_labels, batch_size=BATCH_SIZE, epochs=EPOCHS)
###Output
Epoch 1/5
938/938 [==============================] - 5s 3ms/step - loss: 0.3645 - accuracy: 0.8974
Epoch 2/5
938/938 [==============================] - 2s 3ms/step - loss: 0.1992 - accuracy: 0.9433
Epoch 3/5
938/938 [==============================] - 2s 3ms/step - loss: 0.1518 - accuracy: 0.9568
Epoch 4/5
938/938 [==============================] - 2s 3ms/step - loss: 0.1231 - accuracy: 0.9647
Epoch 5/5
938/938 [==============================] - 2s 3ms/step - loss: 0.1030 - accuracy: 0.9702
###Markdown
As the model trains, the loss and accuracy metrics are displayed. With five epochs and a learning rate of 0.1, this fully connected model should achieve an accuracy of approximately 0.97 (or 97%) on the training data. Evaluate accuracy on the test datasetNow that we've trained the model, we can ask it to make predictions about a test set that it hasn't seen before. In this example, the `test_images` array comprises our test dataset. To evaluate accuracy, we can check to see if the model's predictions match the labels from the `test_labels` array. Use the [`evaluate`](https://www.tensorflow.org/api_docs/python/tf/keras/models/Sequential#evaluate) method to evaluate the model on the test dataset!
###Code
'''TODO: Use the evaluate method to test the model!'''
test_loss, test_acc = model.evaluate(test_images, test_labels)
print('Test accuracy:', test_acc)
###Output
313/313 [==============================] - 1s 2ms/step - loss: 0.1023 - accuracy: 0.9676
Test accuracy: 0.9675999879837036
###Markdown
You may observe that the accuracy on the test dataset is a little lower than the accuracy on the training dataset. This gap between training accuracy and test accuracy is an example of *overfitting*, when a machine learning model performs worse on new data than on its training data. What is the highest accuracy you can achieve with this first fully connected model? Since the handwritten digit classification task is pretty straightforward, you may be wondering how we can do better...![Deeper...](https://i.kym-cdn.com/photos/images/newsfeed/000/534/153/f87.jpg) 1.3 Convolutional Neural Network (CNN) for handwritten digit classification As we saw in lecture, convolutional neural networks (CNNs) are particularly well-suited for a variety of tasks in computer vision, and have achieved near-perfect accuracies on the MNIST dataset. We will now build a CNN composed of two convolutional layers and pooling layers, followed by two fully connected layers, and ultimately output a probability distribution over the 10 digit classes (0-9). The CNN we will be building is depicted below:![alt_text](https://raw.githubusercontent.com/aamini/introtodeeplearning/master/lab2/img/convnet_fig.png "CNN Architecture for MNIST Classification") Define the CNN modelWe'll use the same training and test datasets as before, and proceed similarly as our fully connected network to define and train our new CNN model. To do this we will explore two layers we have not encountered before: you can use [`keras.layers.Conv2D` ](https://www.tensorflow.org/api_docs/python/tf/keras/layers/Conv2D) to define convolutional layers and [`keras.layers.MaxPool2D`](https://www.tensorflow.org/api_docs/python/tf/keras/layers/MaxPool2D) to define the pooling layers. Use the parameters shown in the network architecture above to define these layers and build the CNN model.
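As a quick shape check before defining the model (an illustrative addition on a dummy tensor), note that a 2x2 max pool with its default stride of 2 halves each spatial dimension, rounding down:
###Code
import tensorflow as tf

# 26x26 feature maps become 13x13 after pooling, matching the summary below.
x = tf.zeros((1, 26, 26, 24))
print(tf.keras.layers.MaxPool2D((2, 2))(x).shape)  # (1, 13, 13, 24)
###Output
_____no_output_____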
###Code
def build_cnn_model():
cnn_model = tf.keras.Sequential([
# TODO: Define the first convolutional layer
tf.keras.layers.Conv2D(24, 3, activation="relu"),
# TODO: Define the first max pooling layer
tf.keras.layers.MaxPool2D((2, 2)),
# TODO: Define the second convolutional layer
tf.keras.layers.Conv2D(24, 3, activation="relu"),
# TODO: Define the second max pooling layer
tf.keras.layers.MaxPool2D((2,2)),
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(128, activation=tf.nn.relu),
# TODO: Define the last Dense layer to output the classification
      # probabilities. Pay attention to the activation needed for a probability
# output
tf.keras.layers.Dense(10, activation="softmax")
])
return cnn_model
cnn_model = build_cnn_model()
# Initialize the model by passing some data through
cnn_model.predict(train_images[[0]])
# Print the summary of the layers in the model.
print(cnn_model.summary())
###Output
Model: "sequential_6"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
conv2d_10 (Conv2D) (None, 26, 26, 24) 240
_________________________________________________________________
max_pooling2d_10 (MaxPooling (None, 13, 13, 24) 0
_________________________________________________________________
conv2d_11 (Conv2D) (None, 11, 11, 24) 5208
_________________________________________________________________
max_pooling2d_11 (MaxPooling (None, 5, 5, 24) 0
_________________________________________________________________
flatten_6 (Flatten) (None, 600) 0
_________________________________________________________________
dense_12 (Dense) (None, 128) 76928
_________________________________________________________________
dense_13 (Dense) (None, 10) 1290
=================================================================
Total params: 83,666
Trainable params: 83,666
Non-trainable params: 0
_________________________________________________________________
None
###Markdown
Train and test the CNN modelNow, as before, we can define the loss function, optimizer, and metrics through the `compile` method. Compile the CNN model with an optimizer and learning rate of choice:
###Code
'''TODO: Define the compile operation with your optimizer and learning rate of choice'''
cnn_model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=1e-1), loss='sparse_categorical_crossentropy', metrics=['accuracy']) # TODO
###Output
_____no_output_____
###Markdown
As was the case with the fully connected model, we can train our CNN using the `fit` method via the Keras API.
###Code
'''TODO: Use model.fit to train the CNN model, with the same batch_size and number of epochs previously used.'''
cnn_model.fit(train_images, train_labels, batch_size=BATCH_SIZE, epochs=EPOCHS)
###Output
Epoch 1/5
938/938 [==============================] - 5s 5ms/step - loss: 0.0057 - accuracy: 0.9983
Epoch 2/5
938/938 [==============================] - 5s 5ms/step - loss: 0.0029 - accuracy: 0.9992
Epoch 3/5
938/938 [==============================] - 5s 5ms/step - loss: 0.0023 - accuracy: 0.9995
Epoch 4/5
938/938 [==============================] - 5s 5ms/step - loss: 0.0017 - accuracy: 0.9997
Epoch 5/5
938/938 [==============================] - 5s 5ms/step - loss: 0.0014 - accuracy: 0.9996
###Markdown
Great! Now that we've trained the model, let's evaluate it on the test dataset using the [`evaluate`](https://www.tensorflow.org/api_docs/python/tf/keras/models/Sequential#evaluate) method:
###Code
'''TODO: Use the evaluate method to test the model!'''
test_loss, test_acc = cnn_model.evaluate(test_images, test_labels)
print('Test accuracy:', test_acc)
###Output
313/313 [==============================] - 1s 3ms/step - loss: 0.0313 - accuracy: 0.9917
Test accuracy: 0.9916999936103821
###Markdown
What is the highest accuracy you're able to achieve using the CNN model, and how does the accuracy of the CNN model compare to the accuracy of the simple fully connected network? What optimizers and learning rates seem to be optimal for training the CNN model? Make predictions with the CNN modelWith the model trained, we can use it to make predictions about some images. The [`predict`](https://www.tensorflow.org/api_docs/python/tf/keras/models/Sequentialpredict) function call generates the output predictions given a set of input samples.
###Code
predictions = cnn_model.predict(test_images)
###Output
_____no_output_____
###Markdown
With this function call, the model has predicted the label for each image in the testing set. Let's take a look at the prediction for the first image in the test dataset:
###Code
predictions[0]
###Output
_____no_output_____
###Markdown
As you can see, a prediction is an array of 10 numbers. Recall that the output of our model is a probability distribution over the 10 digit classes. Thus, these numbers describe the model's "confidence" that the image corresponds to each of the 10 different digits. Let's look at the digit that has the highest confidence for the first image in the test dataset:
###Code
'''TODO: identify the digit with the highest confidence prediction for the first
image in the test dataset. '''
prediction = np.argmax(predictions[0])
print(prediction)
###Output
7
###Markdown
So, the model is most confident that this image is a "???". We can check the test label (remember, this is the true identity of the digit) to see if this prediction is correct:
###Code
print("Label of this digit is:", test_labels[0])
plt.imshow(test_images[0,:,:,0], cmap=plt.cm.binary)
###Output
Label of this digit is: 7
###Markdown
It is! Let's visualize the classification results on the MNIST dataset. We will plot images from the test dataset along with their predicted label, as well as a histogram that provides the prediction probabilities for each of the digits:
###Code
#@title Change the slider to look at the model's predictions! { run: "auto" }
image_index = 79 #@param {type:"slider", min:0, max:100, step:1}
plt.subplot(1,2,1)
mdl.lab2.plot_image_prediction(image_index, predictions, test_labels, test_images)
plt.subplot(1,2,2)
mdl.lab2.plot_value_prediction(image_index, predictions, test_labels)
###Output
_____no_output_____
###Markdown
We can also plot several images along with their predictions, where correct prediction labels are blue and incorrect prediction labels are red. The number gives the percent confidence (out of 100) for the predicted label. Note the model can be very confident in an incorrect prediction!
###Code
# Plots the first X test images, their predicted label, and the true label
# Color correct predictions in blue, incorrect predictions in red
num_rows = 5
num_cols = 4
num_images = num_rows*num_cols
plt.figure(figsize=(2*2*num_cols, 2*num_rows))
for i in range(num_images):
plt.subplot(num_rows, 2*num_cols, 2*i+1)
mdl.lab2.plot_image_prediction(i, predictions, test_labels, test_images)
plt.subplot(num_rows, 2*num_cols, 2*i+2)
mdl.lab2.plot_value_prediction(i, predictions, test_labels)
###Output
_____no_output_____
###Markdown
1.4 Training the model 2.0Earlier in the lab, we used the [`fit`](https://www.tensorflow.org/api_docs/python/tf/keras/models/Sequential#fit) function call to train the model. This function is quite high-level and intuitive, which is really useful for simpler models. As you may be able to tell, this function abstracts away many details of the training call, leaving us with less control over how the model is trained; that finer control can be useful in other contexts. As an alternative, we can use the [`tf.GradientTape`](https://www.tensorflow.org/api_docs/python/tf/GradientTape) class to record differentiation operations during training, and then call the [`tf.GradientTape.gradient`](https://www.tensorflow.org/api_docs/python/tf/GradientTape#gradient) function to actually compute the gradients. You may recall seeing this in Lab 1 Part 1, but let's take another look at this here.We'll use this framework to train our `cnn_model` using stochastic gradient descent.
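To see what `optimizer.apply_gradients` does for plain SGD, here is a one-variable sketch of the update rule (an illustrative addition):
###Code
import tensorflow as tf

# Each trainable variable is updated in place: w <- w - learning_rate * gradient.
w = tf.Variable(2.0)
grad = tf.constant(0.5)
learning_rate = 1e-2
w.assign_sub(learning_rate * grad)
print(w.numpy())  # 2.0 - 0.01 * 0.5 = 1.995
###Output
_____no_output_____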
###Code
# Rebuild the CNN model
cnn_model = build_cnn_model()
batch_size = 12
loss_history = mdl.util.LossHistory(smoothing_factor=0.95) # to record the evolution of the loss
plotter = mdl.util.PeriodicPlotter(sec=2, xlabel='Iterations', ylabel='Loss', scale='semilogy')
optimizer = tf.keras.optimizers.SGD(learning_rate=1e-2) # define our optimizer
if hasattr(tqdm, '_instances'): tqdm._instances.clear() # clear if it exists
for idx in tqdm(range(0, train_images.shape[0], batch_size)):
# First grab a batch of training data and convert the input images to tensors
(images, labels) = (train_images[idx:idx+batch_size], train_labels[idx:idx+batch_size])
images = tf.convert_to_tensor(images, dtype=tf.float32)
# GradientTape to record differentiation operations
with tf.GradientTape() as tape:
#'''TODO: feed the images into the model and obtain the predictions'''
    logits = cnn_model(images)  # call the model directly rather than via .call()
#'''TODO: compute the categorical cross entropy loss
loss_value = tf.keras.backend.sparse_categorical_crossentropy(labels, logits) # TODO
loss_history.append(loss_value.numpy().mean()) # append the loss to the loss_history record
plotter.plot(loss_history.get())
# Backpropagation
'''TODO: Use the tape to compute the gradient against all parameters in the CNN model.
Use cnn_model.trainable_variables to access these parameters.'''
grads = tape.gradient(loss_value, cnn_model.trainable_variables)
optimizer.apply_gradients(zip(grads, cnn_model.trainable_variables))
###Output
_____no_output_____
###Markdown
Copyright Information
###Code
#
# © MIT 6.S191: Introduction to Deep Learning
# http://introtodeeplearning.com
#
###Output
_____no_output_____
###Markdown
Laboratory 2: Computer Vision Part 1: MNIST Digit ClassificationIn the first portion of this lab, we will build and train a convolutional neural network (CNN) for classification of handwritten digits from the famous [MNIST](http://yann.lecun.com/exdb/mnist/) dataset. The MNIST dataset consists of 60,000 training images and 10,000 test images. Our classes are the digits 0-9.First, let's download the course repository, install dependencies, and import the relevant packages we'll need for this lab.
###Code
# Import Tensorflow 2.0
%tensorflow_version 2.x
import tensorflow as tf
!pip install mitdeeplearning
import mitdeeplearning as mdl
import matplotlib.pyplot as plt
import numpy as np
import random
from tqdm import tqdm
# Check that we are using a GPU, if not switch runtimes
# using Runtime > Change Runtime Type > GPU
assert len(tf.config.list_physical_devices('GPU')) > 0
###Output
Collecting mitdeeplearning
Downloading mitdeeplearning-0.2.0.tar.gz (2.1 MB)
[K |████████████████████████████████| 2.1 MB 5.3 MB/s
[?25hRequirement already satisfied: numpy in /usr/local/lib/python3.7/dist-packages (from mitdeeplearning) (1.19.5)
Requirement already satisfied: regex in /usr/local/lib/python3.7/dist-packages (from mitdeeplearning) (2019.12.20)
Requirement already satisfied: tqdm in /usr/local/lib/python3.7/dist-packages (from mitdeeplearning) (4.62.3)
Requirement already satisfied: gym in /usr/local/lib/python3.7/dist-packages (from mitdeeplearning) (0.17.3)
Requirement already satisfied: cloudpickle<1.7.0,>=1.2.0 in /usr/local/lib/python3.7/dist-packages (from gym->mitdeeplearning) (1.3.0)
Requirement already satisfied: pyglet<=1.5.0,>=1.4.0 in /usr/local/lib/python3.7/dist-packages (from gym->mitdeeplearning) (1.5.0)
Requirement already satisfied: scipy in /usr/local/lib/python3.7/dist-packages (from gym->mitdeeplearning) (1.4.1)
Requirement already satisfied: future in /usr/local/lib/python3.7/dist-packages (from pyglet<=1.5.0,>=1.4.0->gym->mitdeeplearning) (0.16.0)
Building wheels for collected packages: mitdeeplearning
Building wheel for mitdeeplearning (setup.py) ... [?25l[?25hdone
Created wheel for mitdeeplearning: filename=mitdeeplearning-0.2.0-py3-none-any.whl size=2115442 sha256=203fd8da218480bd5e7fe624befb9e8e60f778fbef3c567f23943fdf9e588ab6
Stored in directory: /root/.cache/pip/wheels/9a/b9/4f/99b7c8c5c75355550b83e1fcfc02956fb40c35eb01e2262877
Successfully built mitdeeplearning
Installing collected packages: mitdeeplearning
Successfully installed mitdeeplearning-0.2.0
###Markdown
1.1 MNIST dataset Let's download and load the dataset and display a few random samples from it:
###Code
mnist = tf.keras.datasets.mnist
(train_images, train_labels), (test_images, test_labels) = mnist.load_data()
train_images = (np.expand_dims(train_images, axis=-1)/255.).astype(np.float32)
train_labels = (train_labels).astype(np.int64)
test_images = (np.expand_dims(test_images, axis=-1)/255.).astype(np.float32)
test_labels = (test_labels).astype(np.int64)
###Output
Downloading data from https://storage.googleapis.com/tensorflow/tf-keras-datasets/mnist.npz
11493376/11490434 [==============================] - 0s 0us/step
11501568/11490434 [==============================] - 0s 0us/step
###Markdown
Our training set is made up of 28x28 grayscale images of handwritten digits. Let's visualize what some of these images and their corresponding training labels look like.
###Code
plt.figure(figsize=(10,10))
random_inds = np.random.choice(60000,36)
for i in range(36):
plt.subplot(6,6,i+1)
plt.xticks([])
plt.yticks([])
plt.grid(False)
image_ind = random_inds[i]
plt.imshow(np.squeeze(train_images[image_ind]), cmap=plt.cm.binary)
plt.xlabel(train_labels[image_ind])
###Output
_____no_output_____
###Markdown
1.2 Neural Network for Handwritten Digit ClassificationWe'll first build a simple neural network consisting of two fully connected layers and apply this to the digit classification task. Our network will ultimately output a probability distribution over the 10 digit classes (0-9). This first architecture we will be building is depicted below:![alt_text](https://raw.githubusercontent.com/aamini/introtodeeplearning/master/lab2/img/mnist_2layers_arch.png "CNN Architecture for MNIST Classification") Fully connected neural network architectureTo define the architecture of this first fully connected neural network, we'll once again use the Keras API and define the model using the [`Sequential`](https://www.tensorflow.org/api_docs/python/tf/keras/models/Sequential) class. Note how we first use a [`Flatten`](https://www.tensorflow.org/api_docs/python/tf/keras/layers/Flatten) layer, which flattens the input so that it can be fed into the model. In this next block, you'll define the fully connected layers of this simple network.
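As a quick arithmetic check (a small addition) of the sizes involved in the model you are about to define, each `Dense` layer holds `inputs * units + units` parameters, i.e. a weight matrix plus one bias per unit:
###Code
# These match the Dense parameter counts reported by the model summaries above.
print(784 * 128 + 128)  # first Dense layer: 100,480 parameters
print(128 * 10 + 10)    # output Dense layer: 1,290 parameters
###Output
_____no_output_____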
###Code
def build_fc_model():
fc_model = tf.keras.Sequential([
# First define a Flatten layer
tf.keras.layers.Flatten(),
# '''TODO: Define the activation function for the first fully connected (Dense) layer.'''
tf.keras.layers.Dense(128, activation=tf.nn.relu),
# '''TODO: Define the second Dense layer to output the classification probabilities'''
tf.keras.layers.Dense(units=10, activation=tf.nn.softmax)
])
return fc_model
model = build_fc_model()
###Output
_____no_output_____
###Markdown
As we progress through this next portion, you may find that you'll want to make changes to the architecture defined above. **Note that in order to update the model later on, you'll need to re-run the above cell to re-initialize the model.** Let's take a step back and think about the network we've just created. The first layer in this network, `tf.keras.layers.Flatten`, transforms the format of the images from a 2d-array (28 x 28 pixels), to a 1d-array of 28 * 28 = 784 pixels. You can think of this layer as unstacking rows of pixels in the image and lining them up. There are no learned parameters in this layer; it only reformats the data.After the pixels are flattened, the network consists of a sequence of two `tf.keras.layers.Dense` layers. These are fully-connected neural layers. The first `Dense` layer has 128 nodes (or neurons). The second (and last) layer (which you've defined!) should return an array of probability scores that sum to 1. Each node contains a score that indicates the probability that the current image belongs to one of the handwritten digit classes.That defines our fully connected model! Compile the modelBefore training the model, we need to define a few more settings. These are added during the model's [`compile`](https://www.tensorflow.org/api_docs/python/tf/keras/models/Sequentialcompile) step:* *Loss function* — This defines how we measure how accurate the model is during training. As was covered in lecture, during training we want to minimize this function, which will "steer" the model in the right direction.* *Optimizer* — This defines how the model is updated based on the data it sees and its loss function.* *Metrics* — Here we can define metrics used to monitor the training and testing steps. In this example, we'll look at the *accuracy*, the fraction of the images that are correctly classified.We'll start out by using a stochastic gradient descent (SGD) optimizer initialized with a learning rate of 0.1. Since we are performing a categorical classification task, we'll want to use the [cross entropy loss](https://www.tensorflow.org/api_docs/python/tf/keras/metrics/sparse_categorical_crossentropy).You'll want to experiment with both the choice of optimizer and learning rate and evaluate how these affect the accuracy of the trained model.
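If you want to try other optimizers, they can be swapped into `compile` in exactly the same way. The sketch below (an illustrative addition) just constructs a few of them and reads back their configured learning rates:
###Code
import tensorflow as tf

# Each optimizer records its learning rate in its configuration.
for opt in [tf.keras.optimizers.SGD(learning_rate=1e-1),
            tf.keras.optimizers.Adam(learning_rate=1e-3),
            tf.keras.optimizers.Adagrad(learning_rate=1e-1)]:
    print(type(opt).__name__, opt.get_config()['learning_rate'])
###Output
_____no_output_____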
###Code
'''TODO: Experiment with different optimizers and learning rates. How do these affect
the accuracy of the trained model? Which optimizers and/or learning rates yield
the best performance?'''
model.compile(optimizer=tf.keras.optimizers.Adagrad(learning_rate=1e-1),
loss='sparse_categorical_crossentropy',
metrics=['accuracy'])
###Output
_____no_output_____
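###Markdown
As a sketch of the optimizer experiment suggested above, one could re-build the model and compile it with a different optimizer before fitting. The `model_adam` name and the 1e-3 learning rate below are illustrative choices, not tuned values:
###Code
# Re-building gives fresh weights, so a comparison between optimizers is fair.
model_adam = build_fc_model()
model_adam.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3),
                   loss='sparse_categorical_crossentropy',
                   metrics=['accuracy'])
###Output
_____no_output_____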
###Markdown
Train the model

We're now ready to train our model, which will involve feeding the training data (`train_images` and `train_labels`) into the model, and then asking it to learn the associations between images and labels. We'll also need to define the batch size and the number of epochs, or iterations over the MNIST dataset, to use during training.

In Lab 1, we saw how we can use `GradientTape` to optimize losses and train models with stochastic gradient descent. After defining the model settings in the `compile` step, we can also accomplish training by calling the [`fit`](https://www.tensorflow.org/api_docs/python/tf/keras/models/Sequential#fit) method on an instance of the `Model` class. We will use this to train our fully connected model.
###Code
# Define the batch size and the number of epochs to use during training
BATCH_SIZE = 64
EPOCHS = 10
model.fit(train_images, train_labels, batch_size=BATCH_SIZE, epochs=EPOCHS)
###Output
Epoch 1/10
938/938 [==============================] - 3s 3ms/step - loss: 0.2633 - accuracy: 0.9226
Epoch 2/10
938/938 [==============================] - 3s 3ms/step - loss: 0.1194 - accuracy: 0.9651
Epoch 3/10
938/938 [==============================] - 3s 3ms/step - loss: 0.0854 - accuracy: 0.9756
Epoch 4/10
938/938 [==============================] - 3s 3ms/step - loss: 0.0653 - accuracy: 0.9812
Epoch 5/10
938/938 [==============================] - 3s 3ms/step - loss: 0.0536 - accuracy: 0.9846
Epoch 6/10
938/938 [==============================] - 3s 3ms/step - loss: 0.0446 - accuracy: 0.9870
Epoch 7/10
938/938 [==============================] - 3s 3ms/step - loss: 0.0373 - accuracy: 0.9895
Epoch 8/10
938/938 [==============================] - 3s 3ms/step - loss: 0.0311 - accuracy: 0.9918
Epoch 9/10
938/938 [==============================] - 3s 3ms/step - loss: 0.0270 - accuracy: 0.9930
Epoch 10/10
938/938 [==============================] - 3s 3ms/step - loss: 0.0227 - accuracy: 0.9948
###Markdown
As the model trains, the loss and accuracy metrics are displayed. With five epochs and a learning rate of 0.1, this fully connected model should achieve an accuracy of approximately 0.97 (or 97%) on the training data.

Evaluate accuracy on the test dataset

Now that we've trained the model, we can ask it to make predictions about a test set that it hasn't seen before. In this example, the `test_images` array comprises our test dataset. To evaluate accuracy, we can check to see if the model's predictions match the labels from the `test_labels` array.

Use the [`evaluate`](https://www.tensorflow.org/api_docs/python/tf/keras/models/Sequential#evaluate) method to evaluate the model on the test dataset!
###Code
'''TODO: Use the evaluate method to test the model!'''
test_loss, test_acc = model.evaluate(test_images, test_labels)
print('Test accuracy:', test_acc)
###Output
313/313 [==============================] - 1s 3ms/step - loss: 0.0598 - accuracy: 0.9810
Test accuracy: 0.9810000061988831
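###Markdown
To make the reported metric concrete, the accuracy from `evaluate` can be reproduced by hand. A minimal sketch, assuming the cells above have run; `preds` and `manual_acc` are illustrative names:
###Code
import numpy as np
# predict returns per-class probabilities; argmax picks the most likely digit.
preds = model.predict(test_images)
manual_acc = np.mean(np.argmax(preds, axis=1) == test_labels)
print('Manual test accuracy:', manual_acc)  # should match test_acc above
###Output
_____no_output_____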
###Markdown
You may observe that the accuracy on the test dataset is a little lower than the accuracy on the training dataset. This gap between training accuracy and test accuracy is an example of *overfitting*, when a machine learning model performs worse on new data than on its training data.

What is the highest accuracy you can achieve with this first fully connected model? Since the handwritten digit classification task is pretty straightforward, you may be wondering how we can do better...

![Deeper...](https://i.kym-cdn.com/photos/images/newsfeed/000/534/153/f87.jpg)

1.3 Convolutional Neural Network (CNN) for handwritten digit classification

As we saw in lecture, convolutional neural networks (CNNs) are particularly well-suited for a variety of tasks in computer vision, and have achieved near-perfect accuracies on the MNIST dataset. We will now build a CNN composed of two convolutional layers and pooling layers, followed by two fully connected layers, and ultimately output a probability distribution over the 10 digit classes (0-9). The CNN we will be building is depicted below:

![alt_text](https://raw.githubusercontent.com/aamini/introtodeeplearning/master/lab2/img/convnet_fig.png "CNN Architecture for MNIST Classification")

Define the CNN model

We'll use the same training and test datasets as before, and proceed similarly as our fully connected network to define and train our new CNN model. To do this we will explore two layers we have not encountered before: you can use [`keras.layers.Conv2D`](https://www.tensorflow.org/api_docs/python/tf/keras/layers/Conv2D) to define convolutional layers and [`keras.layers.MaxPool2D`](https://www.tensorflow.org/api_docs/python/tf/keras/layers/MaxPool2D) to define the pooling layers. Use the parameters shown in the network architecture above to define these layers and build the CNN model.
###Code
def build_cnn_model():
cnn_model = tf.keras.Sequential([
# TODO: Define the first convolutional layer
tf.keras.layers.Conv2D(filters=24, kernel_size=(3,3), strides=(1,1), padding='valid', activation=tf.nn.relu),
# TODO: Define the first max pooling layer
tf.keras.layers.MaxPool2D(pool_size=(2,2), strides=(2,2), padding='valid'),
# TODO: Define the second convolutional layer
tf.keras.layers.Conv2D(filters=36, kernel_size=(3,3), strides=(1,1), padding='valid', activation=tf.nn.relu),
# TODO: Define the second max pooling layer
tf.keras.layers.MaxPool2D(pool_size=(2,2), strides=(2,2)),
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(128, activation=tf.nn.relu),
# TODO: Define the last Dense layer to output the classification
      # probabilities. Pay attention to the activation needed for a probability
# output
tf.keras.layers.Dense(10, activation=tf.nn.softmax)
])
return cnn_model
cnn_model = build_cnn_model()
# Initialize the model by passing some data through
cnn_model.predict(train_images[[0]])
# Print the summary of the layers in the model.
print(cnn_model.summary())
###Output
Model: "sequential_17"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
conv2d_8 (Conv2D) (None, 26, 26, 24) 240
max_pooling2d_6 (MaxPooling (None, 13, 13, 24) 0
2D)
conv2d_9 (Conv2D) (None, 11, 11, 36) 7812
max_pooling2d_7 (MaxPooling (None, 5, 5, 36) 0
2D)
flatten_25 (Flatten) (None, 900) 0
dense_34 (Dense) (None, 128) 115328
dense_35 (Dense) (None, 10) 1290
=================================================================
Total params: 124,670
Trainable params: 124,670
Non-trainable params: 0
_________________________________________________________________
None
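###Markdown
The parameter counts in the summary above can be checked by hand: a `Conv2D` layer has kernel_height * kernel_width * in_channels * filters weights plus one bias per filter, and a `Dense` layer has in_features * units weights plus one bias per unit. A sketch of the arithmetic:
###Code
conv1  = 3*3*1*24 + 24       # -> 240
conv2  = 3*3*24*36 + 36      # -> 7812
dense1 = 5*5*36*128 + 128    # 5*5*36 = 900 flattened features -> 115328
dense2 = 128*10 + 10         # -> 1290
print(conv1, conv2, dense1, dense2, conv1 + conv2 + dense1 + dense2)  # total 124670
###Output
_____no_output_____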
###Markdown
Train and test the CNN model

Now, as before, we can define the loss function, optimizer, and metrics through the `compile` method. Compile the CNN model with an optimizer and learning rate of choice:
###Code
'''TODO: Define the compile operation with your optimizer and learning rate of choice'''
cnn_model.compile(optimizer=tf.keras.optimizers.Adagrad(learning_rate=1e-1),
loss=tf.keras.losses.sparse_categorical_crossentropy, metrics=['accuracy'])
###Output
_____no_output_____
###Markdown
As was the case with the fully connected model, we can train our CNN using the `fit` method via the Keras API.
###Code
'''TODO: Use model.fit to train the CNN model, with the same batch_size and number of epochs previously used.'''
cnn_model.fit(train_images, train_labels, BATCH_SIZE, EPOCHS)
###Output
Epoch 1/10
938/938 [==============================] - 8s 7ms/step - loss: 0.1845 - accuracy: 0.9424
Epoch 2/10
938/938 [==============================] - 6s 7ms/step - loss: 0.0455 - accuracy: 0.9859
Epoch 3/10
938/938 [==============================] - 6s 7ms/step - loss: 0.0303 - accuracy: 0.9907
Epoch 4/10
938/938 [==============================] - 6s 7ms/step - loss: 0.0233 - accuracy: 0.9924
Epoch 5/10
938/938 [==============================] - 6s 7ms/step - loss: 0.0169 - accuracy: 0.9947
Epoch 6/10
938/938 [==============================] - 6s 7ms/step - loss: 0.0127 - accuracy: 0.9961
Epoch 7/10
938/938 [==============================] - 6s 7ms/step - loss: 0.0098 - accuracy: 0.9971
Epoch 8/10
938/938 [==============================] - 6s 7ms/step - loss: 0.0078 - accuracy: 0.9977
Epoch 9/10
938/938 [==============================] - 6s 7ms/step - loss: 0.0062 - accuracy: 0.9983
Epoch 10/10
938/938 [==============================] - 6s 7ms/step - loss: 0.0044 - accuracy: 0.9990
###Markdown
Great! Now that we've trained the model, let's evaluate it on the test dataset using the [`evaluate`](https://www.tensorflow.org/api_docs/python/tf/keras/models/Sequential#evaluate) method:
###Code
'''TODO: Use the evaluate method to test the model!'''
test_loss, test_acc = cnn_model.evaluate(test_images, test_labels)
print('Test accuracy:', test_acc)
###Output
313/313 [==============================] - 1s 4ms/step - loss: 0.0464 - accuracy: 0.9871
Test accuracy: 0.9871000051498413
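###Markdown
A useful follow-up when comparing models is to look at which test images are still misclassified. A sketch; the `cnn_preds` and `wrong` names are illustrative:
###Code
import numpy as np
# Indices of test images whose predicted digit disagrees with the true label.
cnn_preds = np.argmax(cnn_model.predict(test_images), axis=1)
wrong = np.where(cnn_preds != test_labels)[0]
print(len(wrong), 'misclassified out of', len(test_labels))
print('First few indices:', wrong[:5])
###Output
_____no_output_____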
###Markdown
What is the highest accuracy you're able to achieve using the CNN model, and how does the accuracy of the CNN model compare to the accuracy of the simple fully connected network? What optimizers and learning rates seem to be optimal for training the CNN model?

Make predictions with the CNN model

With the model trained, we can use it to make predictions about some images. The [`predict`](https://www.tensorflow.org/api_docs/python/tf/keras/models/Sequential#predict) function call generates the output predictions given a set of input samples.
###Code
predictions = cnn_model.predict(test_images)
###Output
_____no_output_____
###Markdown
With this function call, the model has predicted the label for each image in the testing set. Let's take a look at the prediction for some images in the test dataset:
###Code
predictions[:10]
###Output
_____no_output_____
###Markdown
As you can see, a prediction is an array of 10 numbers. Recall that the output of our model is a probability distribution over the 10 digit classes. Thus, these numbers describe the model's "confidence" that the image corresponds to each of the 10 different digits. Let's look at the digit that has the highest confidence for the first image in the test dataset:
###Code
'''TODO: identify the digit with the highest confidence prediction for the first
image in the test dataset. '''
prediction = tf.argmax(predictions[0])
print(prediction)
###Output
tf.Tensor(7, shape=(), dtype=int64)
###Markdown
So, the model is most confident that this image is a "???". We can check the test label (remember, this is the true identity of the digit) to see if this prediction is correct:
###Code
print("Label of this digit is:", test_labels[0])
plt.imshow(test_images[0,:,:,0], cmap=plt.cm.binary)
###Output
Label of this digit is: 7
###Markdown
It is! Let's visualize the classification results on the MNIST dataset. We will plot images from the test dataset along with their predicted label, as well as a histogram that provides the prediction probabilities for each of the digits:
###Code
#@title Change the slider to look at the model's predictions! { run: "auto" }
image_index = 79 #@param {type:"slider", min:0, max:100, step:1}
plt.subplot(1,2,1)
mdl.lab2.plot_image_prediction(image_index, predictions, test_labels, test_images)
plt.subplot(1,2,2)
mdl.lab2.plot_value_prediction(image_index, predictions, test_labels)
###Output
_____no_output_____
###Markdown
We can also plot several images along with their predictions, where correct prediction labels are blue and incorrect prediction labels are red. The number gives the percent confidence (out of 100) for the predicted label. Note the model can be very confident in an incorrect prediction!
###Code
# Plots the first X test images, their predicted label, and the true label
# Color correct predictions in blue, incorrect predictions in red
num_rows = 5
num_cols = 4
num_images = num_rows*num_cols
plt.figure(figsize=(2*2*num_cols, 2*num_rows))
for i in range(num_images):
plt.subplot(num_rows, 2*num_cols, 2*i+1)
mdl.lab2.plot_image_prediction(i, predictions, test_labels, test_images)
plt.subplot(num_rows, 2*num_cols, 2*i+2)
mdl.lab2.plot_value_prediction(i, predictions, test_labels)
###Output
_____no_output_____
###Markdown
1.4 Training the model 2.0

Earlier in the lab, we used the [`fit`](https://www.tensorflow.org/api_docs/python/tf/keras/models/Sequential#fit) function call to train the model. This function is quite high-level and intuitive, which is really useful for simpler models. As you may be able to tell, this function abstracts away many details of the training call, and gives us less control over training, control which could be useful in other contexts.

As an alternative to this, we can use the [`tf.GradientTape`](https://www.tensorflow.org/api_docs/python/tf/GradientTape) class to record differentiation operations during training, and then call the [`tf.GradientTape.gradient`](https://www.tensorflow.org/api_docs/python/tf/GradientTape#gradient) function to actually compute the gradients. You may recall seeing this in Lab 1 Part 1, but let's take another look at this here.

We'll use this framework to train our `cnn_model` using stochastic gradient descent.
###Code
# Rebuild the CNN model
cnn_model = build_cnn_model()
batch_size = 12
loss_history = mdl.util.LossHistory(smoothing_factor=0.95) # to record the evolution of the loss
plotter = mdl.util.PeriodicPlotter(sec=2, xlabel='Iterations', ylabel='Loss', scale='semilogy')
optimizer = tf.keras.optimizers.SGD(learning_rate=1e-2) # define our optimizer
if hasattr(tqdm, '_instances'): tqdm._instances.clear() # clear if it exists
for idx in tqdm(range(0, train_images.shape[0], batch_size)):
# First grab a batch of training data and convert the input images to tensors
(images, labels) = (train_images[idx:idx+batch_size], train_labels[idx:idx+batch_size])
images = tf.convert_to_tensor(images, dtype=tf.float32)
# GradientTape to record differentiation operations
with tf.GradientTape() as tape:
#'''TODO: feed the images into the model and obtain the predictions'''
logits = cnn_model(images)
#'''TODO: compute the categorical cross entropy loss
    loss_value = tf.keras.backend.sparse_categorical_crossentropy(labels, logits)  # the model ends in softmax, so its outputs are probabilities and from_logits must stay False
loss_history.append(loss_value.numpy().mean()) # append the loss to the loss_history record
plotter.plot(loss_history.get())
# Backpropagation
'''TODO: Use the tape to compute the gradient against all parameters in the CNN model.
Use cnn_model.trainable_variables to access these parameters.'''
grads = tape.gradient(loss_value, cnn_model.trainable_variables)
optimizer.apply_gradients(zip(grads, cnn_model.trainable_variables))
###Output
_____no_output_____
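###Markdown
A model trained with the custom `GradientTape` loop is still an ordinary Keras model. As a sketch, compiling afterwards just attaches a loss and metric so `evaluate` can report them; the choices below are illustrative:
###Code
cnn_model.compile(optimizer='sgd',
                  loss='sparse_categorical_crossentropy',
                  metrics=['accuracy'])
# Report loss and accuracy of the custom-trained model on the test set.
print(cnn_model.evaluate(test_images, test_labels, verbose=2))
###Output
_____no_output_____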
###Markdown
Copyright Information
###Code
# Copyright 2020 MIT 6.S191 Introduction to Deep Learning. All Rights Reserved.
#
# Licensed under the MIT License. You may not use this file except in compliance
# with the License. Use and/or modification of this code outside of 6.S191 must
# reference:
#
# © MIT 6.S191: Introduction to Deep Learning
# http://introtodeeplearning.com
#
###Output
_____no_output_____
###Markdown
Laboratory 2: Computer Vision

Part 1: MNIST Digit Classification

In the first portion of this lab, we will build and train a convolutional neural network (CNN) for classification of handwritten digits from the famous [MNIST](http://yann.lecun.com/exdb/mnist/) dataset. The MNIST dataset consists of 60,000 training images and 10,000 test images. Our classes are the digits 0-9.

First, let's download the course repository, install dependencies, and import the relevant packages we'll need for this lab.
###Code
# Import Tensorflow 2.0
%tensorflow_version 2.x
import tensorflow as tf
!pip install mitdeeplearning
import mitdeeplearning as mdl
import matplotlib.pyplot as plt
import numpy as np
import random
from tqdm import tqdm
# Check that we are using a GPU, if not switch runtimes
# using Runtime > Change Runtime Type > GPU
assert len(tf.config.list_physical_devices('GPU')) > 0
###Output
TensorFlow 2.x selected.
Collecting mitdeeplearning
  Downloading https://files.pythonhosted.org/packages/8b/3b/b9174b68dc10832356d02a2d83a64b43a24f1762c172754407d22fc8f960/mitdeeplearning-0.1.2.tar.gz (2.1MB)
     |████████████████████████████████| 2.1MB 110kB/s
Requirement already satisfied: numpy in /usr/local/lib/python3.6/dist-packages (from mitdeeplearning) (1.18.2)
Requirement already satisfied: regex in /usr/local/lib/python3.6/dist-packages (from mitdeeplearning) (2019.12.20)
Requirement already satisfied: tqdm in /usr/local/lib/python3.6/dist-packages (from mitdeeplearning) (4.38.0)
Requirement already satisfied: gym in /usr/local/lib/python3.6/dist-packages (from mitdeeplearning) (0.17.1)
Requirement already satisfied: six in /usr/local/lib/python3.6/dist-packages (from gym->mitdeeplearning) (1.12.0)
Requirement already satisfied: scipy in /usr/local/lib/python3.6/dist-packages (from gym->mitdeeplearning) (1.4.1)
Requirement already satisfied: pyglet<=1.5.0,>=1.4.0 in /usr/local/lib/python3.6/dist-packages (from gym->mitdeeplearning) (1.5.0)
Requirement already satisfied: cloudpickle<1.4.0,>=1.2.0 in /usr/local/lib/python3.6/dist-packages (from gym->mitdeeplearning) (1.3.0)
Requirement already satisfied: future in /usr/local/lib/python3.6/dist-packages (from pyglet<=1.5.0,>=1.4.0->gym->mitdeeplearning) (0.16.0)
Building wheels for collected packages: mitdeeplearning
Building wheel for mitdeeplearning (setup.py) ... done
Created wheel for mitdeeplearning: filename=mitdeeplearning-0.1.2-cp36-none-any.whl size=2114586 sha256=51d5d6174a6c9f5f3ecb5a59410742dd8c68759a33bf401ca7e19c9eba7ddc0b
Stored in directory: /root/.cache/pip/wheels/27/e1/73/5f01c787621d8a3c857f59876c79e304b9b64db9ff5bd61b74
Successfully built mitdeeplearning
Installing collected packages: mitdeeplearning
Successfully installed mitdeeplearning-0.1.2
###Markdown
1.1 MNIST dataset

Let's download and load the dataset and display a few random samples from it:
###Code
mnist = tf.keras.datasets.mnist
(train_images, train_labels), (test_images, test_labels) = mnist.load_data()
train_images = (np.expand_dims(train_images, axis=-1)/255.).astype(np.float32)
train_labels = (train_labels).astype(np.int64)
test_images = (np.expand_dims(test_images, axis=-1)/255.).astype(np.float32)
test_labels = (test_labels).astype(np.int64)
###Output
Downloading data from https://storage.googleapis.com/tensorflow/tf-keras-datasets/mnist.npz
11493376/11490434 [==============================] - 0s 0us/step
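###Markdown
A quick check of the preprocessing above, as a sketch: a trailing channel dimension was added and the pixel values were rescaled from [0, 255] to [0, 1].
###Code
print(train_images.shape, test_images.shape)   # (60000, 28, 28, 1) (10000, 28, 28, 1)
print(train_images.min(), train_images.max())  # 0.0 1.0
###Output
_____no_output_____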
###Markdown
Our training set is made up of 28x28 grayscale images of handwritten digits. Let's visualize what some of these images and their corresponding training labels look like.
###Code
plt.figure(figsize=(10,10))
random_inds = np.random.choice(60000,36)
for i in range(36):
plt.subplot(6,6,i+1)
plt.xticks([])
plt.yticks([])
plt.grid(False)
image_ind = random_inds[i]
plt.imshow(np.squeeze(train_images[image_ind]), cmap=plt.cm.binary)
plt.xlabel(train_labels[image_ind])
###Output
_____no_output_____
###Markdown
1.2 Neural Network for Handwritten Digit Classification

We'll first build a simple neural network consisting of two fully connected layers and apply this to the digit classification task. Our network will ultimately output a probability distribution over the 10 digit classes (0-9). This first architecture we will be building is depicted below:

![alt_text](https://raw.githubusercontent.com/aamini/introtodeeplearning/master/lab2/img/mnist_2layers_arch.png "CNN Architecture for MNIST Classification")

Fully connected neural network architecture

To define the architecture of this first fully connected neural network, we'll once again use the Keras API and define the model using the [`Sequential`](https://www.tensorflow.org/api_docs/python/tf/keras/models/Sequential) class. Note how we first use a [`Flatten`](https://www.tensorflow.org/api_docs/python/tf/keras/layers/Flatten) layer, which flattens the input so that it can be fed into the model. In this next block, you'll define the fully connected layers of this simple network.
###Code
def build_fc_model():
fc_model = tf.keras.Sequential([
# First define a Flatten layer
tf.keras.layers.Flatten(),
# '''TODO: Define the activation function for the first fully connected (Dense) layer.'''
tf.keras.layers.Dense(128, activation= tf.nn.relu),
# '''TODO: Define the second Dense layer to output the classification probabilities'''
tf.keras.layers.Dense(10, activation=tf.nn.softmax)
])
return fc_model
model = build_fc_model()
###Output
_____no_output_____
###Markdown
As we progress through this next portion, you may find that you'll want to make changes to the architecture defined above. **Note that in order to update the model later on, you'll need to re-run the above cell to re-initialize the model.**

Let's take a step back and think about the network we've just created. The first layer in this network, `tf.keras.layers.Flatten`, transforms the format of the images from a 2d-array (28 x 28 pixels) to a 1d-array of 28 * 28 = 784 pixels. You can think of this layer as unstacking rows of pixels in the image and lining them up. There are no learned parameters in this layer; it only reformats the data.

After the pixels are flattened, the network consists of a sequence of two `tf.keras.layers.Dense` layers. These are fully-connected neural layers. The first `Dense` layer has 128 nodes (or neurons). The second (and last) layer (which you've defined!) should return an array of probability scores that sum to 1. Each node contains a score that indicates the probability that the current image belongs to one of the handwritten digit classes.

That defines our fully connected model!

Compile the model

Before training the model, we need to define a few more settings. These are added during the model's [`compile`](https://www.tensorflow.org/api_docs/python/tf/keras/models/Sequential#compile) step:

* *Loss function* — This defines how we measure how accurate the model is during training. As was covered in lecture, during training we want to minimize this function, which will "steer" the model in the right direction.
* *Optimizer* — This defines how the model is updated based on the data it sees and its loss function.
* *Metrics* — Here we can define metrics used to monitor the training and testing steps. In this example, we'll look at the *accuracy*, the fraction of the images that are correctly classified.

We'll start out by using a stochastic gradient descent (SGD) optimizer initialized with a learning rate of 0.1. Since we are performing a categorical classification task, we'll want to use the [cross entropy loss](https://www.tensorflow.org/api_docs/python/tf/keras/metrics/sparse_categorical_crossentropy).

You'll want to experiment with both the choice of optimizer and learning rate and evaluate how these affect the accuracy of the trained model.
###Code
'''TODO: Experiment with different optimizers and learning rates. How do these affect
the accuracy of the trained model? Which optimizers and/or learning rates yield
the best performance?'''
model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=1e-1),
loss='sparse_categorical_crossentropy',
metrics=['accuracy'])
###Output
_____no_output_____
###Markdown
Train the model

We're now ready to train our model, which will involve feeding the training data (`train_images` and `train_labels`) into the model, and then asking it to learn the associations between images and labels. We'll also need to define the batch size and the number of epochs, or iterations over the MNIST dataset, to use during training.

In Lab 1, we saw how we can use `GradientTape` to optimize losses and train models with stochastic gradient descent. After defining the model settings in the `compile` step, we can also accomplish training by calling the [`fit`](https://www.tensorflow.org/api_docs/python/tf/keras/models/Sequential#fit) method on an instance of the `Model` class. We will use this to train our fully connected model.
###Code
# Define the batch size and the number of epochs to use during training
BATCH_SIZE = 64
EPOCHS = 5
model.fit(train_images, train_labels, batch_size=BATCH_SIZE, epochs=EPOCHS)
###Output
Train on 60000 samples
Epoch 1/5
60000/60000 [==============================] - 4s 68us/sample - loss: 0.3668 - accuracy: 0.8981
Epoch 2/5
60000/60000 [==============================] - 2s 39us/sample - loss: 0.1962 - accuracy: 0.9444
Epoch 3/5
60000/60000 [==============================] - 2s 37us/sample - loss: 0.1485 - accuracy: 0.9570
Epoch 4/5
60000/60000 [==============================] - 2s 36us/sample - loss: 0.1216 - accuracy: 0.9657
Epoch 5/5
60000/60000 [==============================] - 2s 37us/sample - loss: 0.1026 - accuracy: 0.9703
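###Markdown
`fit` also returns a History object, which makes it easy to plot the training curves. A sketch (`history` is an illustrative name; note this continues training the model above for another five epochs, and `verbose=0` suppresses the per-epoch log):
###Code
history = model.fit(train_images, train_labels,
                    batch_size=BATCH_SIZE, epochs=EPOCHS, verbose=0)
plt.plot(history.history['loss'], label='loss')
plt.plot(history.history['accuracy'], label='accuracy')
plt.xlabel('Epoch')
plt.legend()
plt.show()
###Output
_____no_output_____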
###Markdown
As the model trains, the loss and accuracy metrics are displayed. With five epochs and a learning rate of 0.1, this fully connected model should achieve an accuracy of approximately 0.97 (or 97%) on the training data.

Evaluate accuracy on the test dataset

Now that we've trained the model, we can ask it to make predictions about a test set that it hasn't seen before. In this example, the `test_images` array comprises our test dataset. To evaluate accuracy, we can check to see if the model's predictions match the labels from the `test_labels` array.

Use the [`evaluate`](https://www.tensorflow.org/api_docs/python/tf/keras/models/Sequential#evaluate) method to evaluate the model on the test dataset!
###Code
'''TODO: Use the evaluate method to test the model!'''
test_loss, test_acc = model.evaluate(x=test_images,y=test_labels,batch_size=BATCH_SIZE)
print('Test accuracy:', test_acc)
###Output
10000/10000 [==============================] - 0s 37us/sample - loss: 0.1055 - accuracy: 0.9681
Test accuracy: 0.9681
###Markdown
You may observe that the accuracy on the test dataset is a little lower than the accuracy on the training dataset. This gap between training accuracy and test accuracy is an example of *overfitting*, when a machine learning model performs worse on new data than on its training data.

What is the highest accuracy you can achieve with this first fully connected model? Since the handwritten digit classification task is pretty straightforward, you may be wondering how we can do better...

![Deeper...](https://i.kym-cdn.com/photos/images/newsfeed/000/534/153/f87.jpg)

1.3 Convolutional Neural Network (CNN) for handwritten digit classification

As we saw in lecture, convolutional neural networks (CNNs) are particularly well-suited for a variety of tasks in computer vision, and have achieved near-perfect accuracies on the MNIST dataset. We will now build a CNN composed of two convolutional layers and pooling layers, followed by two fully connected layers, and ultimately output a probability distribution over the 10 digit classes (0-9). The CNN we will be building is depicted below:

![alt_text](https://raw.githubusercontent.com/aamini/introtodeeplearning/master/lab2/img/convnet_fig.png "CNN Architecture for MNIST Classification")

Define the CNN model

We'll use the same training and test datasets as before, and proceed similarly as our fully connected network to define and train our new CNN model. To do this we will explore two layers we have not encountered before: you can use [`keras.layers.Conv2D`](https://www.tensorflow.org/api_docs/python/tf/keras/layers/Conv2D) to define convolutional layers and [`keras.layers.MaxPool2D`](https://www.tensorflow.org/api_docs/python/tf/keras/layers/MaxPool2D) to define the pooling layers. Use the parameters shown in the network architecture above to define these layers and build the CNN model.
###Code
def build_cnn_model():
cnn_model = tf.keras.Sequential([
# TODO: Define the first convolutional layer
tf.keras.layers.Conv2D(filters = 24,kernel_size=(3,3),activation = tf.nn.relu),
# TODO: Define the first max pooling layer
tf.keras.layers.MaxPool2D(pool_size = (2,2)),
# TODO: Define the second convolutional layer
tf.keras.layers.Conv2D(filters = 36, kernel_size = (3,3), activation=tf.nn.relu),
# TODO: Define the second max pooling layer
tf.keras.layers.MaxPool2D(),
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(128, activation=tf.nn.relu),
# TODO: Define the last Dense layer to output the classification
      # probabilities. Pay attention to the activation needed for a probability
# output
tf.keras.layers.Dense(10, activation=tf.nn.softmax)
])
return cnn_model
cnn_model = build_cnn_model()
# Initialize the model by passing some data through
cnn_model.predict(train_images[[0]])
# Print the summary of the layers in the model.
print(cnn_model.summary())
###Output
Model: "sequential_2"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
conv2d_2 (Conv2D) multiple 240
_________________________________________________________________
max_pooling2d_2 (MaxPooling2 multiple 0
_________________________________________________________________
conv2d_3 (Conv2D) multiple 7812
_________________________________________________________________
max_pooling2d_3 (MaxPooling2 multiple 0
_________________________________________________________________
flatten_2 (Flatten) multiple 0
_________________________________________________________________
dense_4 (Dense) multiple 115328
_________________________________________________________________
dense_5 (Dense) multiple 1290
=================================================================
Total params: 124,670
Trainable params: 124,670
Non-trainable params: 0
_________________________________________________________________
None
###Markdown
Train and test the CNN model

Now, as before, we can define the loss function, optimizer, and metrics through the `compile` method. Compile the CNN model with an optimizer and learning rate of choice:
###Code
'''TODO: Define the compile operation with your optimizer and learning rate of choice'''
cnn_model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=0.01) , loss='sparse_categorical_crossentropy', metrics=['accuracy']) # TODO
###Output
_____no_output_____
###Markdown
As was the case with the fully connected model, we can train our CNN using the `fit` method via the Keras API.
###Code
'''TODO: Use model.fit to train the CNN model, with the same batch_size and number of epochs previously used.'''
cnn_model.fit(x = train_images, y= train_labels,batch_size=BATCH_SIZE,epochs=EPOCHS)
###Output
Train on 60000 samples
Epoch 1/5
60000/60000 [==============================] - 4s 62us/sample - loss: 0.7445 - accuracy: 0.7860
Epoch 2/5
60000/60000 [==============================] - 3s 51us/sample - loss: 0.2026 - accuracy: 0.9384
Epoch 3/5
60000/60000 [==============================] - 3s 50us/sample - loss: 0.1426 - accuracy: 0.9563
Epoch 4/5
60000/60000 [==============================] - 3s 52us/sample - loss: 0.1145 - accuracy: 0.9650
Epoch 5/5
60000/60000 [==============================] - 3s 53us/sample - loss: 0.0978 - accuracy: 0.9704
###Markdown
Great! Now that we've trained the model, let's evaluate it on the test dataset using the [`evaluate`](https://www.tensorflow.org/api_docs/python/tf/keras/models/Sequential#evaluate) method:
###Code
'''TODO: Use the evaluate method to test the model!'''
test_loss, test_acc = cnn_model.evaluate(x=test_images,y=test_labels,batch_size=BATCH_SIZE)
print('Test accuracy:', test_acc)
###Output
10000/10000 [==============================] - 0s 42us/sample - loss: 0.0852 - accuracy: 0.9717
Test accuracy: 0.9717
###Markdown
What is the highest accuracy you're able to achieve using the CNN model, and how does the accuracy of the CNN model compare to the accuracy of the simple fully connected network? What optimizers and learning rates seem to be optimal for training the CNN model?

Make predictions with the CNN model

With the model trained, we can use it to make predictions about some images. The [`predict`](https://www.tensorflow.org/api_docs/python/tf/keras/models/Sequential#predict) function call generates the output predictions given a set of input samples.
###Code
predictions = cnn_model.predict(test_images)
###Output
_____no_output_____
###Markdown
With this function call, the model has predicted the label for each image in the testing set. Let's take a look at the prediction for the first image in the test dataset:
###Code
predictions[0]
###Output
_____no_output_____
###Markdown
As you can see, a prediction is an array of 10 numbers. Recall that the output of our model is a probability distribution over the 10 digit classes. Thus, these numbers describe the model's "confidence" that the image corresponds to each of the 10 different digits. Let's look at the digit that has the highest confidence for the first image in the test dataset:
###Code
'''TODO: identify the digit with the highest confidence prediction for the first
image in the test dataset. '''
nums = np.array(range(0,len(predictions[0])))
prediction = nums[predictions[0]==max(predictions[0])]
print(prediction)
###Output
[7]
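###Markdown
The boolean-masking approach above works, but `np.argmax` is the idiomatic one-liner, and it extends directly to whole batches. A quick sketch:
###Code
import numpy as np
print(np.argmax(predictions[0]))           # digit with the highest confidence
print(np.argmax(predictions[:5], axis=1))  # first five test images at once
###Output
_____no_output_____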
###Markdown
So, the model is most confident that this image is a "???". We can check the test label (remember, this is the true identity of the digit) to see if this prediction is correct:
###Code
print("Label of this digit is:", test_labels[0])
plt.imshow(test_images[0,:,:,0], cmap=plt.cm.binary)
###Output
Label of this digit is: 7
###Markdown
It is! Let's visualize the classification results on the MNIST dataset. We will plot images from the test dataset along with their predicted label, as well as a histogram that provides the prediction probabilities for each of the digits:
###Code
#@title Change the slider to look at the model's predictions! { run: "auto" }
image_index = 100 #@param {type:"slider", min:0, max:100, step:1}
plt.subplot(1,2,1)
mdl.lab2.plot_image_prediction(image_index, predictions, test_labels, test_images)
plt.subplot(1,2,2)
mdl.lab2.plot_value_prediction(image_index, predictions, test_labels)
###Output
_____no_output_____
###Markdown
We can also plot several images along with their predictions, where correct prediction labels are blue and incorrect prediction labels are red. The number gives the percent confidence (out of 100) for the predicted label. Note the model can be very confident in an incorrect prediction!
###Code
# Plots the first X test images, their predicted label, and the true label
# Color correct predictions in blue, incorrect predictions in red
num_rows = 5
num_cols = 4
num_images = num_rows*num_cols
plt.figure(figsize=(2*2*num_cols, 2*num_rows))
for i in range(num_images):
plt.subplot(num_rows, 2*num_cols, 2*i+1)
mdl.lab2.plot_image_prediction(i, predictions, test_labels, test_images)
plt.subplot(num_rows, 2*num_cols, 2*i+2)
mdl.lab2.plot_value_prediction(i, predictions, test_labels)
###Output
_____no_output_____
###Markdown
1.4 Training the model 2.0

Earlier in the lab, we used the [`fit`](https://www.tensorflow.org/api_docs/python/tf/keras/models/Sequential#fit) function call to train the model. This function is quite high-level and intuitive, which is really useful for simpler models. As you may be able to tell, this function abstracts away many details of the training call, and gives us less control over training, control which could be useful in other contexts.

As an alternative to this, we can use the [`tf.GradientTape`](https://www.tensorflow.org/api_docs/python/tf/GradientTape) class to record differentiation operations during training, and then call the [`tf.GradientTape.gradient`](https://www.tensorflow.org/api_docs/python/tf/GradientTape#gradient) function to actually compute the gradients. You may recall seeing this in Lab 1 Part 1, but let's take another look at this here.

We'll use this framework to train our `cnn_model` using stochastic gradient descent.
###Code
# Rebuild the CNN model
cnn_model = build_cnn_model()
batch_size = 12
loss_history = mdl.util.LossHistory(smoothing_factor=0.95) # to record the evolution of the loss
plotter = mdl.util.PeriodicPlotter(sec=2, xlabel='Iterations', ylabel='Loss', scale='semilogy')
optimizer = tf.keras.optimizers.SGD(learning_rate=1e-2) # define our optimizer
if hasattr(tqdm, '_instances'): tqdm._instances.clear() # clear if it exists
for idx in tqdm(range(0, train_images.shape[0], batch_size)):
# First grab a batch of training data and convert the input images to tensors
(images, labels) = (train_images[idx:idx+batch_size], train_labels[idx:idx+batch_size])
images = tf.convert_to_tensor(images, dtype=tf.float32)
# GradientTape to record differentiation operations
with tf.GradientTape() as tape:
#'''TODO: feed the images into the model and obtain the predictions'''
logits = cnn_model(images)
#'''TODO: compute the categorical cross entropy loss
    loss_value = tf.keras.backend.sparse_categorical_crossentropy(target=labels, output=logits)  # the model ends in softmax, so its outputs are probabilities and from_logits must stay False
loss_history.append(loss_value.numpy().mean()) # append the loss to the loss_history record
plotter.plot(loss_history.get())
# Backpropagation
'''TODO: Use the tape to compute the gradient against all parameters in the CNN model.
Use cnn_model.trainable_variables to access these parameters.'''
grads = tape.gradient(loss_value,cnn_model.trainable_variables)
optimizer.apply_gradients(zip(grads, cnn_model.trainable_variables))
###Output
_____no_output_____
###Markdown
Copyright Information
###Code
# Copyright 2021 MIT 6.S191 Introduction to Deep Learning. All Rights Reserved.
#
# Licensed under the MIT License. You may not use this file except in compliance
# with the License. Use and/or modification of this code outside of 6.S191 must
# reference:
#
# © MIT 6.S191: Introduction to Deep Learning
# http://introtodeeplearning.com
#
###Output
_____no_output_____
###Markdown
Laboratory 2: Computer Vision

Part 1: MNIST Digit Classification

In the first portion of this lab, we will build and train a convolutional neural network (CNN) for classification of handwritten digits from the famous [MNIST](http://yann.lecun.com/exdb/mnist/) dataset. The MNIST dataset consists of 60,000 training images and 10,000 test images. Our classes are the digits 0-9.

First, let's download the course repository, install dependencies, and import the relevant packages we'll need for this lab.
###Code
# Import Tensorflow 2.0
%tensorflow_version 2.x
import tensorflow as tf
!pip install mitdeeplearning
import mitdeeplearning as mdl
import matplotlib.pyplot as plt
import numpy as np
import random
from tqdm import tqdm
# Check that we are using a GPU, if not switch runtimes
# using Runtime > Change Runtime Type > GPU
assert len(tf.config.list_physical_devices('GPU')) > 0
###Output
_____no_output_____
###Markdown
1.1 MNIST dataset

Let's download and load the dataset and display a few random samples from it:
###Code
mnist = tf.keras.datasets.mnist
(train_images, train_labels), (test_images, test_labels) = mnist.load_data()
train_images = (np.expand_dims(train_images, axis=-1)/255.).astype(np.float32)
train_labels = (train_labels).astype(np.int64)
test_images = (np.expand_dims(test_images, axis=-1)/255.).astype(np.float32)
test_labels = (test_labels).astype(np.int64)
###Output
_____no_output_____
###Markdown
Our training set is made up of 28x28 grayscale images of handwritten digits. Let's visualize what some of these images and their corresponding training labels look like.
###Code
plt.figure(figsize=(10,10))
random_inds = np.random.choice(60000,36)
for i in range(36):
plt.subplot(6,6,i+1)
plt.xticks([])
plt.yticks([])
plt.grid(False)
image_ind = random_inds[i]
plt.imshow(np.squeeze(train_images[image_ind]), cmap=plt.cm.binary)
plt.xlabel(train_labels[image_ind])
###Output
_____no_output_____
###Markdown
1.2 Neural Network for Handwritten Digit Classification

We'll first build a simple neural network consisting of two fully connected layers and apply this to the digit classification task. Our network will ultimately output a probability distribution over the 10 digit classes (0-9). This first architecture we will be building is depicted below:

![alt_text](https://raw.githubusercontent.com/aamini/introtodeeplearning/master/lab2/img/mnist_2layers_arch.png "CNN Architecture for MNIST Classification")

Fully connected neural network architecture

To define the architecture of this first fully connected neural network, we'll once again use the Keras API and define the model using the [`Sequential`](https://www.tensorflow.org/api_docs/python/tf/keras/models/Sequential) class. Note how we first use a [`Flatten`](https://www.tensorflow.org/api_docs/python/tf/keras/layers/Flatten) layer, which flattens the input so that it can be fed into the model. In this next block, you'll define the fully connected layers of this simple network.
###Code
def build_fc_model():
fc_model = tf.keras.Sequential([
# First define a Flatten layer
tf.keras.layers.Flatten(),
# '''TODO: Define the activation function for the first fully connected (Dense) layer.'''
    tf.keras.layers.Dense(128, activation=tf.nn.relu),
    # '''TODO: Define the second Dense layer to output the classification probabilities'''
    tf.keras.layers.Dense(10, activation=tf.nn.softmax)
])
return fc_model
model = build_fc_model()
###Output
_____no_output_____
###Markdown
As we progress through this next portion, you may find that you'll want to make changes to the architecture defined above. **Note that in order to update the model later on, you'll need to re-run the above cell to re-initialize the model.**

Let's take a step back and think about the network we've just created. The first layer in this network, `tf.keras.layers.Flatten`, transforms the format of the images from a 2d-array (28 x 28 pixels) to a 1d-array of 28 * 28 = 784 pixels. You can think of this layer as unstacking rows of pixels in the image and lining them up. There are no learned parameters in this layer; it only reformats the data.

After the pixels are flattened, the network consists of a sequence of two `tf.keras.layers.Dense` layers. These are fully-connected neural layers. The first `Dense` layer has 128 nodes (or neurons). The second (and last) layer (which you've defined!) should return an array of probability scores that sum to 1. Each node contains a score that indicates the probability that the current image belongs to one of the handwritten digit classes.

That defines our fully connected model!

Compile the model

Before training the model, we need to define a few more settings. These are added during the model's [`compile`](https://www.tensorflow.org/api_docs/python/tf/keras/models/Sequential#compile) step:

* *Loss function* — This defines how we measure how accurate the model is during training. As was covered in lecture, during training we want to minimize this function, which will "steer" the model in the right direction.
* *Optimizer* — This defines how the model is updated based on the data it sees and its loss function.
* *Metrics* — Here we can define metrics used to monitor the training and testing steps. In this example, we'll look at the *accuracy*, the fraction of the images that are correctly classified.

We'll start out by using a stochastic gradient descent (SGD) optimizer initialized with a learning rate of 0.1. Since we are performing a categorical classification task, we'll want to use the [cross entropy loss](https://www.tensorflow.org/api_docs/python/tf/keras/metrics/sparse_categorical_crossentropy).

You'll want to experiment with both the choice of optimizer and learning rate and evaluate how these affect the accuracy of the trained model.
###Code
'''TODO: Experiment with different optimizers and learning rates. How do these affect
the accuracy of the trained model? Which optimizers and/or learning rates yield
the best performance?'''
model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=1e-1),
loss='sparse_categorical_crossentropy',
metrics=['accuracy'])
###Output
_____no_output_____
###Markdown
Train the model

We're now ready to train our model, which will involve feeding the training data (`train_images` and `train_labels`) into the model, and then asking it to learn the associations between images and labels. We'll also need to define the batch size and the number of epochs, or iterations over the MNIST dataset, to use during training.

In Lab 1, we saw how we can use `GradientTape` to optimize losses and train models with stochastic gradient descent. After defining the model settings in the `compile` step, we can also accomplish training by calling the [`fit`](https://www.tensorflow.org/api_docs/python/tf/keras/models/Sequential#fit) method on an instance of the `Model` class. We will use this to train our fully connected model.
###Code
# Define the batch size and the number of epochs to use during training
BATCH_SIZE = 64
EPOCHS = 5
model.fit(train_images, train_labels, batch_size=BATCH_SIZE, epochs=EPOCHS)
###Output
_____no_output_____
###Markdown
As the model trains, the loss and accuracy metrics are displayed. With five epochs and a learning rate of 0.1, this fully connected model should achieve an accuracy of approximately 0.97 (or 97%) on the training data.

Evaluate accuracy on the test dataset

Now that we've trained the model, we can ask it to make predictions about a test set that it hasn't seen before. In this example, the `test_images` array comprises our test dataset. To evaluate accuracy, we can check to see if the model's predictions match the labels from the `test_labels` array.

Use the [`evaluate`](https://www.tensorflow.org/api_docs/python/tf/keras/models/Sequential#evaluate) method to evaluate the model on the test dataset!
###Code
'''TODO: Use the evaluate method to test the model!'''
test_loss, test_acc = model.evaluate(test_images, test_labels)
print('Test accuracy:', test_acc)
###Output
_____no_output_____
###Markdown
You may observe that the accuracy on the test dataset is a little lower than the accuracy on the training dataset. This gap between training accuracy and test accuracy is an example of *overfitting*, when a machine learning model performs worse on new data than on its training data.

What is the highest accuracy you can achieve with this first fully connected model? Since the handwritten digit classification task is pretty straightforward, you may be wondering how we can do better...

![Deeper...](https://i.kym-cdn.com/photos/images/newsfeed/000/534/153/f87.jpg)

1.3 Convolutional Neural Network (CNN) for handwritten digit classification

As we saw in lecture, convolutional neural networks (CNNs) are particularly well-suited for a variety of tasks in computer vision, and have achieved near-perfect accuracies on the MNIST dataset. We will now build a CNN composed of two convolutional layers and pooling layers, followed by two fully connected layers, and ultimately output a probability distribution over the 10 digit classes (0-9). The CNN we will be building is depicted below:

![alt_text](https://raw.githubusercontent.com/aamini/introtodeeplearning/master/lab2/img/convnet_fig.png "CNN Architecture for MNIST Classification")

Define the CNN model

We'll use the same training and test datasets as before, and proceed similarly as our fully connected network to define and train our new CNN model. To do this we will explore two layers we have not encountered before: you can use [`keras.layers.Conv2D`](https://www.tensorflow.org/api_docs/python/tf/keras/layers/Conv2D) to define convolutional layers and [`keras.layers.MaxPool2D`](https://www.tensorflow.org/api_docs/python/tf/keras/layers/MaxPool2D) to define the pooling layers. Use the parameters shown in the network architecture above to define these layers and build the CNN model.
###Code
def build_cnn_model():
cnn_model = tf.keras.Sequential([
# TODO: Define the first convolutional layer
      tf.keras.layers.Conv2D(filters=24, kernel_size=(3,3), activation=tf.nn.relu),
      # TODO: Define the first max pooling layer
      tf.keras.layers.MaxPool2D(pool_size=(2,2)),
      # TODO: Define the second convolutional layer
      tf.keras.layers.Conv2D(filters=36, kernel_size=(3,3), activation=tf.nn.relu),
      # TODO: Define the second max pooling layer
      tf.keras.layers.MaxPool2D(pool_size=(2,2)),
      tf.keras.layers.Flatten(),
      tf.keras.layers.Dense(128, activation=tf.nn.relu),
      # TODO: Define the last Dense layer to output the classification
      # probabilities. Pay attention to the activation needed for a probability
      # output
      tf.keras.layers.Dense(10, activation=tf.nn.softmax)
])
return cnn_model
cnn_model = build_cnn_model()
# Initialize the model by passing some data through
cnn_model.predict(train_images[[0]])
# Print the summary of the layers in the model.
print(cnn_model.summary())
###Output
_____no_output_____
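###Markdown
The layer shapes follow from simple spatial bookkeeping: a 3x3 convolution with 'valid' padding shrinks each spatial dimension by 2, and each 2x2 max pool halves it (with integer division). A sketch of the arithmetic:
###Code
h = 28
h = h - 3 + 1   # first Conv2D     -> 26
h = h // 2      # first MaxPool2D  -> 13
h = h - 3 + 1   # second Conv2D    -> 11
h = h // 2      # second MaxPool2D -> 5
print(h, h * h * 36)  # 5, and the 900 features flattened into the Dense layer
###Output
_____no_output_____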
###Markdown
Train and test the CNN model

Now, as before, we can define the loss function, optimizer, and metrics through the `compile` method. Compile the CNN model with an optimizer and learning rate of choice:
###Code
'''TODO: Define the compile operation with your optimizer and learning rate of choice'''
cnn_model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=1e-1), loss='sparse_categorical_crossentropy', metrics=['accuracy'])  # SGD at 0.1 is the starting point suggested earlier; experiment from here
###Output
_____no_output_____
###Markdown
As was the case with the fully connected model, we can train our CNN using the `fit` method via the Keras API.
###Code
'''TODO: Use model.fit to train the CNN model, with the same batch_size and number of epochs previously used.'''
cnn_model.fit(train_images, train_labels, batch_size=BATCH_SIZE, epochs=EPOCHS)
###Output
_____no_output_____
###Markdown
Great! Now that we've trained the model, let's evaluate it on the test dataset using the [`evaluate`](https://www.tensorflow.org/api_docs/python/tf/keras/models/Sequential#evaluate) method:
###Code
'''TODO: Use the evaluate method to test the model!'''
test_loss, test_acc = cnn_model.evaluate(test_images, test_labels)
print('Test accuracy:', test_acc)
###Output
_____no_output_____
###Markdown
What is the highest accuracy you're able to achieve using the CNN model, and how does the accuracy of the CNN model compare to the accuracy of the simple fully connected network? What optimizers and learning rates seem to be optimal for training the CNN model?

Make predictions with the CNN model

With the model trained, we can use it to make predictions about some images. The [`predict`](https://www.tensorflow.org/api_docs/python/tf/keras/models/Sequential#predict) function call generates the output predictions given a set of input samples.
###Code
predictions = cnn_model.predict(test_images)
###Output
_____no_output_____
###Markdown
With this function call, the model has predicted the label for each image in the testing set. Let's take a look at the prediction for the first image in the test dataset:
###Code
predictions[0]
###Output
_____no_output_____
###Markdown
As you can see, a prediction is an array of 10 numbers. Recall that the output of our model is a probability distribution over the 10 digit classes. Thus, these numbers describe the model's "confidence" that the image corresponds to each of the 10 different digits. Let's look at the digit that has the highest confidence for the first image in the test dataset:
###Code
'''TODO: identify the digit with the highest confidence prediction for the first
image in the test dataset. '''
prediction = np.argmax(predictions[0])
print(prediction)
###Output
_____no_output_____
###Markdown
So, the model is most confident that this image is a "???". We can check the test label (remember, this is the true identity of the digit) to see if this prediction is correct:
###Code
print("Label of this digit is:", test_labels[0])
plt.imshow(test_images[0,:,:,0], cmap=plt.cm.binary)
###Output
_____no_output_____
###Markdown
It is! Let's visualize the classification results on the MNIST dataset. We will plot images from the test dataset along with their predicted label, as well as a histogram that provides the prediction probabilities for each of the digits:
###Code
#@title Change the slider to look at the model's predictions! { run: "auto" }
image_index = 79 #@param {type:"slider", min:0, max:100, step:1}
plt.subplot(1,2,1)
mdl.lab2.plot_image_prediction(image_index, predictions, test_labels, test_images)
plt.subplot(1,2,2)
mdl.lab2.plot_value_prediction(image_index, predictions, test_labels)
###Output
_____no_output_____
###Markdown
We can also plot several images along with their predictions, where correct prediction labels are blue and incorrect prediction labels are red. The number gives the percent confidence (out of 100) for the predicted label. Note the model can be very confident in an incorrect prediction!
###Code
# Plots the first X test images, their predicted label, and the true label
# Color correct predictions in blue, incorrect predictions in red
num_rows = 5
num_cols = 4
num_images = num_rows*num_cols
plt.figure(figsize=(2*2*num_cols, 2*num_rows))
for i in range(num_images):
plt.subplot(num_rows, 2*num_cols, 2*i+1)
mdl.lab2.plot_image_prediction(i, predictions, test_labels, test_images)
plt.subplot(num_rows, 2*num_cols, 2*i+2)
mdl.lab2.plot_value_prediction(i, predictions, test_labels)
###Output
_____no_output_____
###Markdown
1.4 Training the model 2.0

Earlier in the lab, we used the [`fit`](https://www.tensorflow.org/api_docs/python/tf/keras/models/Sequential#fit) function call to train the model. This function is quite high-level and intuitive, which is really useful for simpler models. As you may be able to tell, this function abstracts away many details of the training call, and gives us less control over training, control which could be useful in other contexts.

As an alternative to this, we can use the [`tf.GradientTape`](https://www.tensorflow.org/api_docs/python/tf/GradientTape) class to record differentiation operations during training, and then call the [`tf.GradientTape.gradient`](https://www.tensorflow.org/api_docs/python/tf/GradientTape#gradient) function to actually compute the gradients. You may recall seeing this in Lab 1 Part 1, but let's take another look at this here.

We'll use this framework to train our `cnn_model` using stochastic gradient descent.
###Code
# Rebuild the CNN model
cnn_model = build_cnn_model()
batch_size = 12
loss_history = mdl.util.LossHistory(smoothing_factor=0.95) # to record the evolution of the loss
plotter = mdl.util.PeriodicPlotter(sec=2, xlabel='Iterations', ylabel='Loss', scale='semilogy')
optimizer = tf.keras.optimizers.SGD(learning_rate=1e-2) # define our optimizer
if hasattr(tqdm, '_instances'): tqdm._instances.clear() # clear if it exists
for idx in tqdm(range(0, train_images.shape[0], batch_size)):
# First grab a batch of training data and convert the input images to tensors
(images, labels) = (train_images[idx:idx+batch_size], train_labels[idx:idx+batch_size])
images = tf.convert_to_tensor(images, dtype=tf.float32)
# GradientTape to record differentiation operations
with tf.GradientTape() as tape:
#'''TODO: feed the images into the model and obtain the predictions'''
    logits = cnn_model(images)
#'''TODO: compute the categorical cross entropy loss
    loss_value = tf.keras.backend.sparse_categorical_crossentropy(labels, logits)  # the model ends in softmax, so its outputs are probabilities and from_logits stays False
loss_history.append(loss_value.numpy().mean()) # append the loss to the loss_history record
plotter.plot(loss_history.get())
# Backpropagation
'''TODO: Use the tape to compute the gradient against all parameters in the CNN model.
Use cnn_model.trainable_variables to access these parameters.'''
  grads = tape.gradient(loss_value, cnn_model.trainable_variables)
optimizer.apply_gradients(zip(grads, cnn_model.trainable_variables))
###Output
_____no_output_____
###Markdown
Copyright Information
###Code
# Copyright 2022 MIT 6.S191 Introduction to Deep Learning. All Rights Reserved.
#
# Licensed under the MIT License. You may not use this file except in compliance
# with the License. Use and/or modification of this code outside of 6.S191 must
# reference:
#
# © MIT 6.S191: Introduction to Deep Learning
# http://introtodeeplearning.com
#
###Output
_____no_output_____
###Markdown
Laboratory 2: Computer Vision

Part 1: MNIST Digit Classification

In the first portion of this lab, we will build and train a convolutional neural network (CNN) for classification of handwritten digits from the famous [MNIST](http://yann.lecun.com/exdb/mnist/) dataset. The MNIST dataset consists of 60,000 training images and 10,000 test images. Our classes are the digits 0-9.

First, let's download the course repository, install dependencies, and import the relevant packages we'll need for this lab.
###Code
# Import Tensorflow 2.0
#%tensorflow_version 2.x
import tensorflow as tf
#!pip install mitdeeplearning
import mitdeeplearning as mdl
import matplotlib.pyplot as plt
import numpy as np
import random
from tqdm import tqdm
# Check that we are using a GPU, if not switch runtimes
# using Runtime > Change Runtime Type > GPU
assert len(tf.config.list_physical_devices('GPU')) > 0
###Output
_____no_output_____
###Markdown
1.1 MNIST dataset Let's download and load the dataset and display a few random samples from it:
###Code
mnist = tf.keras.datasets.mnist
(train_images, train_labels), (test_images, test_labels) = mnist.load_data()
train_images = (np.expand_dims(train_images, axis=-1)/255.).astype(np.float32)
train_labels = (train_labels).astype(np.int64)
test_images = (np.expand_dims(test_images, axis=-1)/255.).astype(np.float32)
test_labels = (test_labels).astype(np.int64)
###Output
_____no_output_____
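###Markdown
A quick way to confirm the preprocessing did what we expect is to print the array shapes, dtypes, and value range. A minimal sketch, not part of the original lab:
###Code
# Sketch: inspect shapes, dtypes, and value range after preprocessing.
print('train_images:', train_images.shape, train_images.dtype)
print('test_images: ', test_images.shape, test_images.dtype)
print('pixel range: ', train_images.min(), '-', train_images.max())
print('first labels:', train_labels[:10])
###Output
_____no_output_____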
###Markdown
Our training set is made up of 28x28 grayscale images of handwritten digits. Let's visualize what some of these images and their corresponding training labels look like.
###Code
plt.figure(figsize=(10,10))
random_inds = np.random.choice(60000,36)
for i in range(36):
plt.subplot(6,6,i+1)
plt.xticks([])
plt.yticks([])
plt.grid(False)
image_ind = random_inds[i]
plt.imshow(np.squeeze(train_images[image_ind]), cmap=plt.cm.binary)
plt.xlabel(train_labels[image_ind])
###Output
_____no_output_____
###Markdown
1.2 Neural Network for Handwritten Digit Classification We'll first build a simple neural network consisting of two fully connected layers and apply this to the digit classification task. Our network will ultimately output a probability distribution over the 10 digit classes (0-9). This first architecture we will be building is depicted below:![alt_text](https://raw.githubusercontent.com/aamini/introtodeeplearning/master/lab2/img/mnist_2layers_arch.png "CNN Architecture for MNIST Classification") Fully connected neural network architecture To define the architecture of this first fully connected neural network, we'll once again use the Keras API and define the model using the [`Sequential`](https://www.tensorflow.org/api_docs/python/tf/keras/models/Sequential) class. Note how we first use a [`Flatten`](https://www.tensorflow.org/api_docs/python/tf/keras/layers/Flatten) layer, which flattens the input so that it can be fed into the model. In this next block, you'll define the fully connected layers of this simple network.
###Code
def build_fc_model():
fc_model = tf.keras.Sequential([
# First define a Flatten layer
tf.keras.layers.Flatten(),
# '''TODO: Define the activation function for the first fully connected (Dense) layer.'''
tf.keras.layers.Dense(128, activation=tf.nn.relu),
# '''TODO: Define the second Dense layer to output the classification probabilities'''
tf.keras.layers.Dense(10, activation=tf.nn.softmax)
])
return fc_model
model = build_fc_model()
###Output
_____no_output_____
###Markdown
As we progress through this next portion, you may find that you'll want to make changes to the architecture defined above. **Note that in order to update the model later on, you'll need to re-run the above cell to re-initialize the model.** Let's take a step back and think about the network we've just created. The first layer in this network, `tf.keras.layers.Flatten`, transforms the format of the images from a 2d-array (28 x 28 pixels), to a 1d-array of 28 * 28 = 784 pixels. You can think of this layer as unstacking rows of pixels in the image and lining them up. There are no learned parameters in this layer; it only reformats the data.After the pixels are flattened, the network consists of a sequence of two `tf.keras.layers.Dense` layers. These are fully-connected neural layers. The first `Dense` layer has 128 nodes (or neurons). The second (and last) layer (which you've defined!) should return an array of probability scores that sum to 1. Each node contains a score that indicates the probability that the current image belongs to one of the handwritten digit classes.That defines our fully connected model! Compile the modelBefore training the model, we need to define a few more settings. These are added during the model's [`compile`](https://www.tensorflow.org/api_docs/python/tf/keras/models/Sequentialcompile) step:* *Loss function* — This defines how we measure how accurate the model is during training. As was covered in lecture, during training we want to minimize this function, which will "steer" the model in the right direction.* *Optimizer* — This defines how the model is updated based on the data it sees and its loss function.* *Metrics* — Here we can define metrics used to monitor the training and testing steps. In this example, we'll look at the *accuracy*, the fraction of the images that are correctly classified.We'll start out by using a stochastic gradient descent (SGD) optimizer initialized with a learning rate of 0.1. Since we are performing a categorical classification task, we'll want to use the [cross entropy loss](https://www.tensorflow.org/api_docs/python/tf/keras/metrics/sparse_categorical_crossentropy).You'll want to experiment with both the choice of optimizer and learning rate and evaluate how these affect the accuracy of the trained model.
###Code
'''TODO: Experiment with different optimizers and learning rates. How do these affect
the accuracy of the trained model? Which optimizers and/or learning rates yield
the best performance?'''
model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=1e-1),
loss='sparse_categorical_crossentropy',
metrics=['accuracy'])
###Output
_____no_output_____
###Markdown
Train the model We're now ready to train our model, which will involve feeding the training data (`train_images` and `train_labels`) into the model, and then asking it to learn the associations between images and labels. We'll also need to define the batch size and the number of epochs, or iterations over the MNIST dataset, to use during training. In Lab 1, we saw how we can use `GradientTape` to optimize losses and train models with stochastic gradient descent. After defining the model settings in the `compile` step, we can also accomplish training by calling the [`fit`](https://www.tensorflow.org/api_docs/python/tf/keras/models/Sequential#fit) method on an instance of the `Model` class. We will use this to train our fully connected model.
###Code
# Define the batch size and the number of epochs to use during training
BATCH_SIZE = 64
EPOCHS = 5
model.fit(train_images, train_labels, batch_size=BATCH_SIZE, epochs=EPOCHS)
###Output
Epoch 1/5
938/938 [==============================] - 5s 6ms/step - loss: 0.4299 - accuracy: 0.8817
Epoch 2/5
938/938 [==============================] - 5s 5ms/step - loss: 0.2194 - accuracy: 0.9376
Epoch 3/5
938/938 [==============================] - 5s 5ms/step - loss: 0.1639 - accuracy: 0.9537
Epoch 4/5
938/938 [==============================] - 5s 5ms/step - loss: 0.1322 - accuracy: 0.9625
Epoch 5/5
938/938 [==============================] - 5s 5ms/step - loss: 0.1107 - accuracy: 0.9682
###Markdown
As the model trains, the loss and accuracy metrics are displayed. With five epochs and a learning rate of 0.1, this fully connected model should achieve an accuracy of approximately 0.97 (or 97%) on the training data. Evaluate accuracy on the test dataset Now that we've trained the model, we can ask it to make predictions about a test set that it hasn't seen before. In this example, the `test_images` array comprises our test dataset. To evaluate accuracy, we can check to see if the model's predictions match the labels from the `test_labels` array. Use the [`evaluate`](https://www.tensorflow.org/api_docs/python/tf/keras/models/Sequential#evaluate) method to evaluate the model on the test dataset!
###Code
'''TODO: Use the evaluate method to test the model!'''
test_loss, test_acc = model.evaluate(
x=test_images,
y=test_labels,
batch_size=BATCH_SIZE)#,
#verbose=1,
#sample_weight=None,
#steps=None,
#callbacks=None,
#max_queue_size=10,
#workers=1,
#use_multiprocessing=False,
#return_dict=False,
#**kwargs
#)
print('Test accuracy:', test_acc)
###Output
157/157 [==============================] - 1s 5ms/step - loss: 0.1066 - accuracy: 0.9694
Test accuracy: 0.9693999886512756
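###Markdown
Before moving on, it can help to quantify the gap discussed in the next cell. This is a minimal sketch, not part of the original lab, that re-evaluates the model on the training set and prints the train/test accuracy difference.
###Code
# Sketch: quantify the train/test accuracy gap discussed below.
# Assumes model, the data arrays, BATCH_SIZE, and test_acc from the cells above.
train_loss, train_acc = model.evaluate(x=train_images, y=train_labels, batch_size=BATCH_SIZE)
print('Train accuracy:', train_acc)
print('Train - test accuracy gap:', train_acc - test_acc)
###Output
_____no_output_____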
###Markdown
You may observe that the accuracy on the test dataset is a little lower than the accuracy on the training dataset. This gap between training accuracy and test accuracy is an example of *overfitting*, when a machine learning model performs worse on new data than on its training data. What is the highest accuracy you can achieve with this first fully connected model? Since the handwritten digit classification task is pretty straightforward, you may be wondering how we can do better...![Deeper...](https://i.kym-cdn.com/photos/images/newsfeed/000/534/153/f87.jpg) 1.3 Convolutional Neural Network (CNN) for handwritten digit classification As we saw in lecture, convolutional neural networks (CNNs) are particularly well-suited for a variety of tasks in computer vision, and have achieved near-perfect accuracies on the MNIST dataset. We will now build a CNN composed of two convolutional layers and pooling layers, followed by two fully connected layers, and ultimately output a probability distribution over the 10 digit classes (0-9). The CNN we will be building is depicted below:![alt_text](https://raw.githubusercontent.com/aamini/introtodeeplearning/master/lab2/img/convnet_fig.png "CNN Architecture for MNIST Classification") Define the CNN modelWe'll use the same training and test datasets as before, and proceed similarly as our fully connected network to define and train our new CNN model. To do this we will explore two layers we have not encountered before: you can use [`keras.layers.Conv2D` ](https://www.tensorflow.org/api_docs/python/tf/keras/layers/Conv2D) to define convolutional layers and [`keras.layers.MaxPool2D`](https://www.tensorflow.org/api_docs/python/tf/keras/layers/MaxPool2D) to define the pooling layers. Use the parameters shown in the network architecture above to define these layers and build the CNN model.
###Code
def build_cnn_model():
cnn_model = tf.keras.Sequential([
# TODO: Define the first convolutional layer
tf.keras.layers.Conv2D(filters=24, kernel_size=(3,3), activation=tf.nn.relu),
# TODO: Define the first max pooling layer
##tf.keras.layers.MaxPool2D('''TODO'''),
tf.keras.layers.MaxPooling2D(pool_size=(2, 2)),
# TODO: Define the second convolutional layer
##tf.keras.layers.Conv2D('''TODO'''),
tf.keras.layers.Conv2D(filters=36, kernel_size=(3,3), activation=tf.nn.relu),
# TODO: Define the second max pooling layer
##tf.keras.layers.MaxPool2D('''TODO'''),
tf.keras.layers.MaxPooling2D(pool_size=(2, 2)),
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(128, activation=tf.nn.relu),
# TODO: Define the last Dense layer to output the classification
# probabilities. Pay attention to the activation needed for a probability
# output
#'''TODO: Dense layer to output classification probabilities'''
tf.keras.layers.Dense(10, activation=tf.keras.activations.softmax)
])
return cnn_model
cnn_model = build_cnn_model()
# Initialize the model by passing some data through
cnn_model.predict(train_images[[0]])
# Print the summary of the layers in the model.
print(cnn_model.summary())
###Output
2022-03-28 14:34:43.418149: I tensorflow/stream_executor/cuda/cuda_dnn.cc:368] Loaded cuDNN version 8303
###Markdown
Train and test the CNN modelNow, as before, we can define the loss function, optimizer, and metrics through the `compile` method. Compile the CNN model with an optimizer and learning rate of choice:
###Code
'''TODO: Define the compile operation with your optimizer and learning rate of choice'''
cnn_model.compile(optimizer=tf.keras.optimizers.Adam(), loss='sparse_categorical_crossentropy', metrics=['accuracy']) # TODO
###Output
_____no_output_____
###Markdown
As was the case with the fully connected model, we can train our CNN using the `fit` method via the Keras API.
###Code
'''TODO: Use model.fit to train the CNN model, with the same batch_size and number of epochs previously used.'''
cnn_model.fit(train_images, train_labels, batch_size=BATCH_SIZE, epochs=EPOCHS)
###Output
Epoch 1/5
938/938 [==============================] - 7s 7ms/step - loss: 0.1806 - accuracy: 0.9467
Epoch 2/5
938/938 [==============================] - 6s 7ms/step - loss: 0.0578 - accuracy: 0.9819
Epoch 3/5
938/938 [==============================] - 6s 7ms/step - loss: 0.0395 - accuracy: 0.9878
Epoch 4/5
938/938 [==============================] - 7s 7ms/step - loss: 0.0300 - accuracy: 0.9906
Epoch 5/5
938/938 [==============================] - 7s 7ms/step - loss: 0.0232 - accuracy: 0.9924
###Markdown
Great! Now that we've trained the model, let's evaluate it on the test dataset using the [`evaluate`](https://www.tensorflow.org/api_docs/python/tf/keras/models/Sequentialevaluate) method:
###Code
'''TODO: Use the evaluate method to test the model!'''
test_loss, test_acc = cnn_model.evaluate(
x=test_images,
y=test_labels,
batch_size=BATCH_SIZE)
print('Test accuracy:', test_acc)
###Output
157/157 [==============================] - 1s 5ms/step - loss: 0.1066 - accuracy: 0.9694
Test accuracy: 0.9693999886512756
###Markdown
What is the highest accuracy you're able to achieve using the CNN model, and how does the accuracy of the CNN model compare to the accuracy of the simple fully connected network? What optimizers and learning rates seem to be optimal for training the CNN model? Make predictions with the CNN modelWith the model trained, we can use it to make predictions about some images. The [`predict`](https://www.tensorflow.org/api_docs/python/tf/keras/models/Sequentialpredict) function call generates the output predictions given a set of input samples.
###Code
predictions = cnn_model.predict(test_images)
###Output
_____no_output_____
###Markdown
With this function call, the model has predicted the label for each image in the testing set. Let's take a look at the prediction for the first image in the test dataset:
###Code
predictions[0]
###Output
_____no_output_____
###Markdown
As you can see, a prediction is an array of 10 numbers. Recall that the output of our model is a probability distribution over the 10 digit classes. Thus, these numbers describe the model's "confidence" that the image corresponds to each of the 10 different digits. Let's look at the digit that has the highest confidence for the first image in the test dataset:
###Code
'''TODO: identify the digit with the highest confidence prediction for the first
image in the test dataset. '''
prediction = np.argmax(predictions[0])
print(prediction)
###Output
7
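###Markdown
Beyond the single argmax, we can also look at the runner-up digits. A small sketch, not part of the original lab, printing the three highest-probability classes for the first test image:
###Code
# Sketch: top-3 most likely digits for the first test image.
top3 = np.argsort(predictions[0])[::-1][:3]
for digit in top3:
    print(f'digit {digit}: probability {predictions[0][digit]:.4f}')
###Output
_____no_output_____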
###Markdown
So, the model is most confident that this image is a "???". We can check the test label (remember, this is the true identity of the digit) to see if this prediction is correct:
###Code
print("Label of this digit is:", test_labels[0])
plt.imshow(test_images[0,:,:,0], cmap=plt.cm.binary)
###Output
Label of this digit is: 7
###Markdown
It is! Let's visualize the classification results on the MNIST dataset. We will plot images from the test dataset along with their predicted label, as well as a histogram that provides the prediction probabilities for each of the digits:
###Code
#@title Change the slider to look at the model's predictions! { run: "auto" }
image_index = 79 #@param {type:"slider", min:0, max:100, step:1}
plt.subplot(1,2,1)
mdl.lab2.plot_image_prediction(image_index, predictions, test_labels, test_images)
plt.subplot(1,2,2)
mdl.lab2.plot_value_prediction(image_index, predictions, test_labels)
###Output
_____no_output_____
###Markdown
We can also plot several images along with their predictions, where correct prediction labels are blue and incorrect prediction labels are red. The number gives the percent confidence (out of 100) for the predicted label. Note the model can be very confident in an incorrect prediction!
###Code
# Plots the first X test images, their predicted label, and the true label
# Color correct predictions in blue, incorrect predictions in red
num_rows = 5
num_cols = 4
num_images = num_rows*num_cols
plt.figure(figsize=(2*2*num_cols, 2*num_rows))
for i in range(num_images):
plt.subplot(num_rows, 2*num_cols, 2*i+1)
mdl.lab2.plot_image_prediction(i, predictions, test_labels, test_images)
plt.subplot(num_rows, 2*num_cols, 2*i+2)
mdl.lab2.plot_value_prediction(i, predictions, test_labels)
###Output
_____no_output_____
###Markdown
1.4 Training the model 2.0 Earlier in the lab, we used the [`fit`](https://www.tensorflow.org/api_docs/python/tf/keras/models/Sequential#fit) function call to train the model. This function is quite high-level and intuitive, which is really useful for simpler models. As you may be able to tell, this function abstracts away many details in the training call, and we have less control over the training process; that finer control could be useful in other contexts. As an alternative, we can use the [`tf.GradientTape`](https://www.tensorflow.org/api_docs/python/tf/GradientTape) class to record differentiation operations during training, and then call the [`tf.GradientTape.gradient`](https://www.tensorflow.org/api_docs/python/tf/GradientTape#gradient) function to actually compute the gradients. You may recall seeing this in Lab 1 Part 1, but let's take another look at it here. We'll use this framework to train our `cnn_model` using stochastic gradient descent.
###Code
# Rebuild the CNN model
cnn_model = build_cnn_model()
batch_size = 12
loss_history = mdl.util.LossHistory(smoothing_factor=0.95) # to record the evolution of the loss
plotter = mdl.util.PeriodicPlotter(sec=2, xlabel='Iterations', ylabel='Loss', scale='semilogy')
optimizer = tf.keras.optimizers.SGD(learning_rate=1e-2) # define our optimizer
if hasattr(tqdm, '_instances'): tqdm._instances.clear() # clear if it exists
for idx in tqdm(range(0, train_images.shape[0], batch_size)):
# First grab a batch of training data and convert the input images to tensors
(images, labels) = (train_images[idx:idx+batch_size], train_labels[idx:idx+batch_size])
images = tf.convert_to_tensor(images, dtype=tf.float32)
# GradientTape to record differentiation operations
with tf.GradientTape() as tape:
#'''TODO: feed the images into the model and obtain the predictions'''
logits = cnn_model(images) # TODO
#'''TODO: compute the categorical cross entropy loss'''
loss_value = tf.keras.backend.sparse_categorical_crossentropy(labels, logits) # TODO
loss_history.append(loss_value.numpy().mean()) # append the loss to the loss_history record
plotter.plot(loss_history.get())
# Backpropagation
'''TODO: Use the tape to compute the gradient against all parameters in the CNN model.
Use cnn_model.trainable_variables to access these parameters.'''
grads = tape.gradient(loss_value, cnn_model.trainable_variables) # TODO
optimizer.apply_gradients(zip(grads, cnn_model.trainable_variables))
###Output
_____no_output_____
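###Markdown
If the loop above feels slow, one common variation is to wrap the update in `tf.function` so TensorFlow can compile it into a graph. The following is a hedged sketch, not part of the original lab, of the same SGD update as a compiled step.
###Code
# Sketch: the same SGD update compiled with tf.function for speed.
# Assumes cnn_model and optimizer as defined above.
@tf.function
def train_step(images, labels):
    with tf.GradientTape() as tape:
        logits = cnn_model(images)
        loss_value = tf.keras.backend.sparse_categorical_crossentropy(labels, logits)
    grads = tape.gradient(loss_value, cnn_model.trainable_variables)
    optimizer.apply_gradients(zip(grads, cnn_model.trainable_variables))
    return loss_value
###Output
_____no_output_____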
###Markdown
Copyright Information
###Code
# Copyright 2020 MIT 6.S191 Introduction to Deep Learning. All Rights Reserved.
#
# Licensed under the MIT License. You may not use this file except in compliance
# with the License. Use and/or modification of this code outside of 6.S191 must
# reference:
#
# © MIT 6.S191: Introduction to Deep Learning
# http://introtodeeplearning.com
#
###Output
_____no_output_____
###Markdown
Laboratory 2: Computer Vision Part 1: MNIST Digit ClassificationIn the first portion of this lab, we will build and train a convolutional neural network (CNN) for classification of handwritten digits from the famous [MNIST](http://yann.lecun.com/exdb/mnist/) dataset. The MNIST dataset consists of 60,000 training images and 10,000 test images. Our classes are the digits 0-9.First, let's download the course repository, install dependencies, and import the relevant packages we'll need for this lab.
###Code
# Import Tensorflow 2.0
%tensorflow_version 2.x
import tensorflow as tf
!pip install mitdeeplearning
import mitdeeplearning as mdl
import matplotlib.pyplot as plt
import numpy as np
import random
from tqdm import tqdm
# Check that we are using a GPU, if not switch runtimes
# using Runtime > Change Runtime Type > GPU
assert len(tf.config.list_physical_devices('GPU')) > 0
###Output
Requirement already satisfied: mitdeeplearning in /usr/local/lib/python3.6/dist-packages (0.1.2)
Requirement already satisfied: regex in /usr/local/lib/python3.6/dist-packages (from mitdeeplearning) (2019.12.20)
Requirement already satisfied: tqdm in /usr/local/lib/python3.6/dist-packages (from mitdeeplearning) (4.38.0)
Requirement already satisfied: numpy in /usr/local/lib/python3.6/dist-packages (from mitdeeplearning) (1.18.3)
Requirement already satisfied: gym in /usr/local/lib/python3.6/dist-packages (from mitdeeplearning) (0.17.1)
Requirement already satisfied: six in /usr/local/lib/python3.6/dist-packages (from gym->mitdeeplearning) (1.12.0)
Requirement already satisfied: cloudpickle<1.4.0,>=1.2.0 in /usr/local/lib/python3.6/dist-packages (from gym->mitdeeplearning) (1.3.0)
Requirement already satisfied: pyglet<=1.5.0,>=1.4.0 in /usr/local/lib/python3.6/dist-packages (from gym->mitdeeplearning) (1.5.0)
Requirement already satisfied: scipy in /usr/local/lib/python3.6/dist-packages (from gym->mitdeeplearning) (1.4.1)
Requirement already satisfied: future in /usr/local/lib/python3.6/dist-packages (from pyglet<=1.5.0,>=1.4.0->gym->mitdeeplearning) (0.16.0)
###Markdown
1.1 MNIST dataset Let's download and load the dataset and display a few random samples from it:
###Code
mnist = tf.keras.datasets.mnist
(train_images, train_labels), (test_images, test_labels) = mnist.load_data()
train_images = (np.expand_dims(train_images, axis=-1)/255.).astype(np.float32)
train_labels = (train_labels).astype(np.int64)
test_images = (np.expand_dims(test_images, axis=-1)/255.).astype(np.float32)
test_labels = (test_labels).astype(np.int64)
###Output
_____no_output_____
###Markdown
Our training set is made up of 28x28 grayscale images of handwritten digits. Let's visualize what some of these images and their corresponding training labels look like.
###Code
plt.figure(figsize=(10,10))
random_inds = np.random.choice(60000,36)
for i in range(36):
plt.subplot(6,6,i+1)
plt.xticks([])
plt.yticks([])
plt.grid(False)
image_ind = random_inds[i]
plt.imshow(np.squeeze(train_images[image_ind]), cmap=plt.cm.binary)
plt.xlabel(train_labels[image_ind])
###Output
_____no_output_____
###Markdown
1.2 Neural Network for Handwritten Digit Classification We'll first build a simple neural network consisting of two fully connected layers and apply this to the digit classification task. Our network will ultimately output a probability distribution over the 10 digit classes (0-9). This first architecture we will be building is depicted below:![alt_text](https://raw.githubusercontent.com/aamini/introtodeeplearning/master/lab2/img/mnist_2layers_arch.png "CNN Architecture for MNIST Classification") Fully connected neural network architecture To define the architecture of this first fully connected neural network, we'll once again use the Keras API and define the model using the [`Sequential`](https://www.tensorflow.org/api_docs/python/tf/keras/models/Sequential) class. Note how we first use a [`Flatten`](https://www.tensorflow.org/api_docs/python/tf/keras/layers/Flatten) layer, which flattens the input so that it can be fed into the model. In this next block, you'll define the fully connected layers of this simple network.
###Code
def build_fc_model():
fc_model = tf.keras.Sequential([
# First define a Flatten layer
tf.keras.layers.Flatten(),
# '''TODO: Define the activation function for the first fully connected (Dense) layer.'''
tf.keras.layers.Dense(128, activation=tf.nn.relu),
# '''TODO: Define the second Dense layer to output the classification probabilities'''
tf.keras.layers.Dense(10,activation=tf.nn.softmax)
])
return fc_model
model = build_fc_model()
###Output
_____no_output_____
###Markdown
As we progress through this next portion, you may find that you'll want to make changes to the architecture defined above. **Note that in order to update the model later on, you'll need to re-run the above cell to re-initialize the model.** Let's take a step back and think about the network we've just created. The first layer in this network, `tf.keras.layers.Flatten`, transforms the format of the images from a 2d-array (28 x 28 pixels), to a 1d-array of 28 * 28 = 784 pixels. You can think of this layer as unstacking rows of pixels in the image and lining them up. There are no learned parameters in this layer; it only reformats the data. After the pixels are flattened, the network consists of a sequence of two `tf.keras.layers.Dense` layers. These are fully-connected neural layers. The first `Dense` layer has 128 nodes (or neurons). The second (and last) layer (which you've defined!) should return an array of probability scores that sum to 1. Each node contains a score that indicates the probability that the current image belongs to one of the handwritten digit classes. That defines our fully connected model! Compile the model Before training the model, we need to define a few more settings. These are added during the model's [`compile`](https://www.tensorflow.org/api_docs/python/tf/keras/models/Sequential#compile) step:* *Loss function* — This defines how we measure how accurate the model is during training. As was covered in lecture, during training we want to minimize this function, which will "steer" the model in the right direction.* *Optimizer* — This defines how the model is updated based on the data it sees and its loss function.* *Metrics* — Here we can define metrics used to monitor the training and testing steps. In this example, we'll look at the *accuracy*, the fraction of the images that are correctly classified. We'll start out by using a stochastic gradient descent (SGD) optimizer initialized with a learning rate of 0.1. Since we are performing a categorical classification task, we'll want to use the [cross entropy loss](https://www.tensorflow.org/api_docs/python/tf/keras/metrics/sparse_categorical_crossentropy). You'll want to experiment with both the choice of optimizer and learning rate and evaluate how these affect the accuracy of the trained model.
###Code
'''TODO: Experiment with different optimizers and learning rates. How do these affect
the accuracy of the trained model? Which optimizers and/or learning rates yield
the best performance?'''
model.compile(optimizer=tf.keras.optimizers.SGD(learning_rate=1e-1),
loss='sparse_categorical_crossentropy',
metrics=['accuracy'])
###Output
_____no_output_____
###Markdown
Train the model We're now ready to train our model, which will involve feeding the training data (`train_images` and `train_labels`) into the model, and then asking it to learn the associations between images and labels. We'll also need to define the batch size and the number of epochs, or iterations over the MNIST dataset, to use during training. In Lab 1, we saw how we can use `GradientTape` to optimize losses and train models with stochastic gradient descent. After defining the model settings in the `compile` step, we can also accomplish training by calling the [`fit`](https://www.tensorflow.org/api_docs/python/tf/keras/models/Sequential#fit) method on an instance of the `Model` class. We will use this to train our fully connected model.
###Code
# Define the batch size and the number of epochs to use during training
BATCH_SIZE = 64
EPOCHS = 5
model.fit(train_images, train_labels, batch_size=BATCH_SIZE, epochs=EPOCHS)
###Output
Epoch 1/5
938/938 [==============================] - 2s 2ms/step - loss: 0.0888 - accuracy: 0.9753
Epoch 2/5
938/938 [==============================] - 2s 2ms/step - loss: 0.0784 - accuracy: 0.9780
Epoch 3/5
938/938 [==============================] - 2s 2ms/step - loss: 0.0697 - accuracy: 0.9806
Epoch 4/5
938/938 [==============================] - 2s 2ms/step - loss: 0.0633 - accuracy: 0.9822
Epoch 5/5
938/938 [==============================] - 2s 2ms/step - loss: 0.0574 - accuracy: 0.9839
###Markdown
As the model trains, the loss and accuracy metrics are displayed. With five epochs and a learning rate of 0.1, this fully connected model should achieve an accuracy of approximately 0.97 (or 97%) on the training data. Evaluate accuracy on the test dataset Now that we've trained the model, we can ask it to make predictions about a test set that it hasn't seen before. In this example, the `test_images` array comprises our test dataset. To evaluate accuracy, we can check to see if the model's predictions match the labels from the `test_labels` array. Use the [`evaluate`](https://www.tensorflow.org/api_docs/python/tf/keras/models/Sequential#evaluate) method to evaluate the model on the test dataset!
###Code
'''TODO: Use the evaluate method to test the model!'''
test_loss, test_acc = model.evaluate(test_images,test_labels)
print('Test accuracy:', test_acc)
###Output
313/313 [==============================] - 1s 2ms/step - loss: 0.0815 - accuracy: 0.9750
Test accuracy: 0.9750000238418579
###Markdown
You may observe that the accuracy on the test dataset is a little lower than the accuracy on the training dataset. This gap between training accuracy and test accuracy is an example of *overfitting*, when a machine learning model performs worse on new data than on its training data. What is the highest accuracy you can achieve with this first fully connected model? Since the handwritten digit classification task is pretty straightforward, you may be wondering how we can do better...![Deeper...](https://i.kym-cdn.com/photos/images/newsfeed/000/534/153/f87.jpg) 1.3 Convolutional Neural Network (CNN) for handwritten digit classification As we saw in lecture, convolutional neural networks (CNNs) are particularly well-suited for a variety of tasks in computer vision, and have achieved near-perfect accuracies on the MNIST dataset. We will now build a CNN composed of two convolutional layers and pooling layers, followed by two fully connected layers, and ultimately output a probability distribution over the 10 digit classes (0-9). The CNN we will be building is depicted below:![alt_text](https://raw.githubusercontent.com/aamini/introtodeeplearning/master/lab2/img/convnet_fig.png "CNN Architecture for MNIST Classification") Define the CNN modelWe'll use the same training and test datasets as before, and proceed similarly as our fully connected network to define and train our new CNN model. To do this we will explore two layers we have not encountered before: you can use [`keras.layers.Conv2D` ](https://www.tensorflow.org/api_docs/python/tf/keras/layers/Conv2D) to define convolutional layers and [`keras.layers.MaxPool2D`](https://www.tensorflow.org/api_docs/python/tf/keras/layers/MaxPool2D) to define the pooling layers. Use the parameters shown in the network architecture above to define these layers and build the CNN model.
###Code
def build_cnn_model():
cnn_model = tf.keras.Sequential([
# TODO: Define the first convolutional layer
tf.keras.layers.Conv2D(filters=24, kernel_size=(3,3), activation=tf.nn.relu),
# TODO: Define the first max pooling layer
tf.keras.layers.MaxPool2D(pool_size=(2,2)),
# TODO: Define the second convolutional layer
tf.keras.layers.Conv2D(filters=36, kernel_size=(3,3), activation=tf.nn.relu),
# TODO: Define the second max pooling layer
tf.keras.layers.MaxPool2D(pool_size=(2,2)),
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(128, activation=tf.nn.relu),
# TODO: Define the last Dense layer to output the classification
# probabilities. Pay attention to the activation needed for a probability
# output
tf.keras.layers.Dense(10, activation=tf.nn.softmax)
])
return cnn_model
cnn_model = build_cnn_model()
# Initialize the model by passing some data through
cnn_model.predict(train_images[[0]])
# Print the summary of the layers in the model.
print(cnn_model.summary())
###Output
Model: "sequential_2"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
conv2d (Conv2D) multiple 240
_________________________________________________________________
max_pooling2d (MaxPooling2D) multiple 0
_________________________________________________________________
conv2d_1 (Conv2D) multiple 7812
_________________________________________________________________
max_pooling2d_1 (MaxPooling2 multiple 0
_________________________________________________________________
flatten_2 (Flatten) multiple 0
_________________________________________________________________
dense_4 (Dense) multiple 115328
_________________________________________________________________
dense_5 (Dense) multiple 1290
=================================================================
Total params: 124,670
Trainable params: 124,670
Non-trainable params: 0
_________________________________________________________________
None
###Markdown
Train and test the CNN modelNow, as before, we can define the loss function, optimizer, and metrics through the `compile` method. Compile the CNN model with an optimizer and learning rate of choice:
###Code
'''TODO: Define the compile operation with your optimizer and learning rate of choice'''
cnn_model.compile(optimizer=tf.keras.optimizers.Adam(learning_rate=1e-3), loss='sparse_categorical_crossentropy', metrics=['accuracy']) # TODO
###Output
_____no_output_____
###Markdown
As was the case with the fully connected model, we can train our CNN using the `fit` method via the Keras API.
###Code
'''TODO: Use model.fit to train the CNN model, with the same batch_size and number of epochs previously used.'''
cnn_model.fit(train_images,train_labels,batch_size=BATCH_SIZE,epochs=EPOCHS)
###Output
Epoch 1/5
938/938 [==============================] - 3s 3ms/step - loss: 0.1904 - accuracy: 0.9430
Epoch 2/5
938/938 [==============================] - 3s 3ms/step - loss: 0.0529 - accuracy: 0.9838
Epoch 3/5
938/938 [==============================] - 3s 3ms/step - loss: 0.0366 - accuracy: 0.9891
Epoch 4/5
938/938 [==============================] - 3s 3ms/step - loss: 0.0276 - accuracy: 0.9912
Epoch 5/5
938/938 [==============================] - 3s 3ms/step - loss: 0.0212 - accuracy: 0.9936
###Markdown
Great! Now that we've trained the model, let's evaluate it on the test dataset using the [`evaluate`](https://www.tensorflow.org/api_docs/python/tf/keras/models/Sequentialevaluate) method:
###Code
'''TODO: Use the evaluate method to test the model!'''
test_loss, test_acc = cnn_model.evaluate(test_images, test_labels)
print('Test accuracy:', test_acc)
###Output
313/313 [==============================] - 1s 2ms/step - loss: 0.0815 - accuracy: 0.9750
Test accuracy: 0.9750000238418579
###Markdown
What is the highest accuracy you're able to achieve using the CNN model, and how does the accuracy of the CNN model compare to the accuracy of the simple fully connected network? What optimizers and learning rates seem to be optimal for training the CNN model? Make predictions with the CNN modelWith the model trained, we can use it to make predictions about some images. The [`predict`](https://www.tensorflow.org/api_docs/python/tf/keras/models/Sequentialpredict) function call generates the output predictions given a set of input samples.
###Code
predictions = cnn_model.predict(test_images)
###Output
_____no_output_____
###Markdown
With this function call, the model has predicted the label for each image in the testing set. Let's take a look at the prediction for the first image in the test dataset:
###Code
predictions[0]
###Output
_____no_output_____
###Markdown
As you can see, a prediction is an array of 10 numbers. Recall that the output of our model is a probability distribution over the 10 digit classes. Thus, these numbers describe the model's "confidence" that the image corresponds to each of the 10 different digits. Let's look at the digit that has the highest confidence for the first image in the test dataset:
###Code
'''TODO: identify the digit with the highest confidence prediction for the first
image in the test dataset. '''
prediction = np.argmax(predictions[0])
print(prediction)
###Output
7
###Markdown
So, the model is most confident that this image is a "???". We can check the test label (remember, this is the true identity of the digit) to see if this prediction is correct:
###Code
print("Label of this digit is:", test_labels[0])
plt.imshow(test_images[0,:,:,0], cmap=plt.cm.binary)
###Output
Label of this digit is: 7
###Markdown
It is! Let's visualize the classification results on the MNIST dataset. We will plot images from the test dataset along with their predicted label, as well as a histogram that provides the prediction probabilities for each of the digits:
###Code
#@title Change the slider to look at the model's predictions! { run: "auto" }
image_index = 99 #@param {type:"slider", min:0, max:100, step:1}
plt.subplot(1,2,1)
mdl.lab2.plot_image_prediction(image_index, predictions, test_labels, test_images)
plt.subplot(1,2,2)
mdl.lab2.plot_value_prediction(image_index, predictions, test_labels)
###Output
_____no_output_____
###Markdown
We can also plot several images along with their predictions, where correct prediction labels are blue and incorrect prediction labels are red. The number gives the percent confidence (out of 100) for the predicted label. Note the model can be very confident in an incorrect prediction!
###Code
# Plots the first X test images, their predicted label, and the true label
# Color correct predictions in blue, incorrect predictions in red
num_rows = 5
num_cols = 4
num_images = num_rows*num_cols
plt.figure(figsize=(2*2*num_cols, 2*num_rows))
for i in range(num_images):
plt.subplot(num_rows, 2*num_cols, 2*i+1)
mdl.lab2.plot_image_prediction(i, predictions, test_labels, test_images)
plt.subplot(num_rows, 2*num_cols, 2*i+2)
mdl.lab2.plot_value_prediction(i, predictions, test_labels)
###Output
_____no_output_____
###Markdown
1.4 Training the model 2.0 Earlier in the lab, we used the [`fit`](https://www.tensorflow.org/api_docs/python/tf/keras/models/Sequential#fit) function call to train the model. This function is quite high-level and intuitive, which is really useful for simpler models. As you may be able to tell, this function abstracts away many details in the training call, and we have less control over the training process; that finer control could be useful in other contexts. As an alternative, we can use the [`tf.GradientTape`](https://www.tensorflow.org/api_docs/python/tf/GradientTape) class to record differentiation operations during training, and then call the [`tf.GradientTape.gradient`](https://www.tensorflow.org/api_docs/python/tf/GradientTape#gradient) function to actually compute the gradients. You may recall seeing this in Lab 1 Part 1, but let's take another look at it here. We'll use this framework to train our `cnn_model` using stochastic gradient descent.
###Code
# Rebuild the CNN model
cnn_model = build_cnn_model()
batch_size = 12
loss_history = mdl.util.LossHistory(smoothing_factor=0.95) # to record the evolution of the loss
plotter = mdl.util.PeriodicPlotter(sec=2, xlabel='Iterations', ylabel='Loss', scale='semilogy')
optimizer = tf.keras.optimizers.SGD(learning_rate=1e-2) # define our optimizer
if hasattr(tqdm, '_instances'): tqdm._instances.clear() # clear if it exists
for idx in tqdm(range(0, train_images.shape[0], batch_size)):
# First grab a batch of training data and convert the input images to tensors
(images, labels) = (train_images[idx:idx+batch_size], train_labels[idx:idx+batch_size])
images = tf.convert_to_tensor(images, dtype=tf.float32)
# GradientTape to record differentiation operations
with tf.GradientTape() as tape:
#'''TODO: feed the images into the model and obtain the predictions'''
logits = cnn_model(images)
#'''TODO: compute the categorical cross entropy loss
loss_value = tf.keras.backend.sparse_categorical_crossentropy(labels,logits) # TODO
loss_history.append(loss_value.numpy().mean()) # append the loss to the loss_history record
plotter.plot(loss_history.get())
# Backpropagation
'''TODO: Use the tape to compute the gradient against all parameters in the CNN model.
Use cnn_model.trainable_variables to access these parameters.'''
grads = tape.gradient(loss_value,cnn_model.trainable_variables)
optimizer.apply_gradients(zip(grads, cnn_model.trainable_variables))
###Output
_____no_output_____ |
notebooks/zipcodes_import.ipynb | ###Markdown
zipcodes_import.ipynb README As noted in the master README, running this notebook is generally not required unless geo_zipcodes.db has to be created from scratch. Assuming that the data folder structure is still in place, this notebook should run as is, without alteration of the config.yml file.
###Code
import json
import os
import sqlite3
import sys
from datetime import datetime
import logzero
import numpy as np
import pandas as pd
import yaml
from logzero import logger
sys.path.append("../source")
import queries
log_path = "logs/"
log_file = "zip_code_import.log"
logzero.logfile(log_path + log_file, maxBytes=1e6, backupCount=5, disableStderrLogger=True)
logger.info(f"{log_path}, {log_file}")
logger.info(sys.path)
try:
with open("../source/config.yml", "r") as config_in:
configs = yaml.load(config_in, Loader=yaml.SafeLoader)
logger.info(configs)
except:
logger.error(f"config file open failure.")
exit(1)
data_path = configs["file_paths"]["data_path_gaz"]
data_file = configs["file_names"]["data_file_gaz"]
db_path = configs["file_paths"]["downloads_path_db"]
db_file = configs["file_names"]["db_file_gzc"]
logger.info(f"{data_path}, {data_file}")
logger.info(f"{db_path}, {db_file}")
downloads_dir = os.path.isdir(configs["file_paths"]["downloads_path"])
if not downloads_dir:
os.makedirs(configs["file_paths"]["downloads_path"])
os.makedirs(configs["file_paths"]["downloads_path_db"])
logger.info(f"created downloads directory structure")
print(f"created downloads directory structure")
else:
logger.info(f"directory {configs['file_paths']['downloads_path']} present")
print(f"directory {configs['file_paths']['downloads_path']} present")
# original column names from the source file, kept for reference before renaming
original = ["GEOID", "ALAND", "AWATER", "ALAND_SQMI", "AWATER_SQMI", "INTPTLAT", "INTPTLONG"]
header = [
"ZIPCODE",
"LAND_AREA_MSQ",
"WATER_AREA_MSQ",
"LAND_AREA_SQMI",
"WATER_AREA_SQMI",
"LAT_ZC",
"LON_ZC",
]
names = [x.lower() for x in header]
logger.info("Dataframe and db column names")
logger.info(names)
dtypes = {
names[0]: object,
names[1]: np.float64,
names[2]: np.float64,
names[3]: np.float64,
names[4]: np.float64,
names[5]: np.float64,
names[6]: np.float64,
}
try:
df_raw = pd.read_csv(data_path + data_file, sep="\t", dtype=dtypes, names=names, header=0)
logger.info("CSV file successfully read.")
except:
logger.error("error reading CSV file.")
df_raw
# establish db connection and cursor
conn = sqlite3.connect(db_path + db_file)
cursor = conn.cursor()
df_raw.to_sql(
"geo_zipcodes",
conn,
if_exists="append",
index=False,
method="multi",
)
conn.commit()
conn.close()
###Output
_____no_output_____ |
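###Markdown
To verify the import, the table can be queried back out of SQLite. A minimal sketch, assuming the paths and the geo_zipcodes table/column names defined in the cells above:
###Code
# Sketch: sanity-check the freshly populated geo_zipcodes table.
conn = sqlite3.connect(db_path + db_file)
n_rows = conn.execute("SELECT COUNT(*) FROM geo_zipcodes").fetchone()[0]
sample = conn.execute(
    "SELECT zipcode, lat_zc, lon_zc FROM geo_zipcodes LIMIT 5"
).fetchall()
conn.close()
print(f"{n_rows} zip codes imported")
print(sample)
###Output
_____no_output_____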
scripts/data/process_ecmwf.ipynb | ###Markdown
Load IFS data
###Code
import xarray as xr  # needed for the open_mfdataset/to_zarr calls below
ds_ifs_pl = xr.open_mfdataset('../data/raw/ifs/pl_*.nc', parallel=True, chunks={'time': 2})
ds_ifs_sfc = xr.open_mfdataset('../data/raw/ifs/sfc_*.nc', parallel=True, chunks={'time': 2})
ds_ifs_t = ds_ifs_pl['t'].to_dataset('level')
ds_ifs_t = ds_ifs_t.rename({500: 't_500', 850: 't_850'})
ds_ifs_gh = ds_ifs_pl['gh'].to_dataset('level')
ds_ifs_gh = ds_ifs_gh.rename({500: 'gh_500', 850: 'gh_850'})
ds_ifs_merged = xr.merge([ds_ifs_sfc, ds_ifs_t, ds_ifs_gh])
ds_ifs_merged = ds_ifs_merged.isel(latitude=slice(None, None, -1))
ds_ifs_merged = ds_ifs_merged.rename({'number': 'ensemble'})
ds_ifs_train = ds_ifs_merged.sel(time=slice('2017-01-01 00:00', '2018-12-31 12:00'))
ds_ifs_train.to_zarr(
'../data/processed/ifs/ds_train',
encoding={
't2m': {'dtype': 'float32', 'scale_factor': 1.0, 'add_offset': 0.0},
't_500': {'dtype': 'float32', 'scale_factor': 1.0, 'add_offset': 0.0},
't_850': {'dtype': 'float32', 'scale_factor': 1.0, 'add_offset': 0.0},
'gh_500': {'dtype': 'float32', 'scale_factor': 1.0, 'add_offset': 0.0},
'gh_850': {'dtype': 'float32', 'scale_factor': 1.0, 'add_offset': 0.0},
}
)
ds_ifs_test = ds_ifs_merged.sel(time=slice('2019-01-01 00:00', '2019-12-31 12:00'))
ds_ifs_test.to_zarr(
'../data/processed/ifs/ds_test',
encoding={
't2m': {'dtype': 'float32', 'scale_factor': 1.0, 'add_offset': 0.0},
't_500': {'dtype': 'float32', 'scale_factor': 1.0, 'add_offset': 0.0},
't_850': {'dtype': 'float32', 'scale_factor': 1.0, 'add_offset': 0.0},
'gh_500': {'dtype': 'float32', 'scale_factor': 1.0, 'add_offset': 0.0},
'gh_850': {'dtype': 'float32', 'scale_factor': 1.0, 'add_offset': 0.0},
}
)
###Output
_____no_output_____ |
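###Markdown
To confirm the exports are readable, the Zarr stores can be re-opened lazily with xarray. A minimal sketch assuming the paths written above:
###Code
# Sketch: lazily re-open the processed Zarr stores and check their contents.
import xarray as xr

ds_train_check = xr.open_zarr('../data/processed/ifs/ds_train')
ds_test_check = xr.open_zarr('../data/processed/ifs/ds_test')
print(ds_train_check.data_vars)  # expect t2m, t_500, t_850, gh_500, gh_850
print(ds_train_check.time[0].values, ds_train_check.time[-1].values)
###Output
_____no_output_____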
examples/notebooks/46_local_rf_training.ipynb | ###Markdown
Uncomment the following line to install [geemap](https://geemap.org) if needed.
###Code
# !pip install geemap scikit-learn
###Output
_____no_output_____
###Markdown
How to use locally trained machine learning models with GEEThis notebook illustrates how to train a random forest (or any other ensemble tree estimator) locally using scikit-learn, convert the estimator into a string representation that Earth Engine can interpret, and how to apply the machine learning model with EE. **The notebook and the geemap machine learning module ([ml.py](https://geemap.org/ml/)) were contributed by [Kel Markert](https://github.com/KMarkert). A huge thank you to him.**
###Code
import ee
import geemap
import pandas as pd
from geemap import ml
from sklearn import ensemble
geemap.ee_initialize()
###Output
_____no_output_____
###Markdown
Train a model locally using scikit-learnIn this demo, we are going to use the training data from [here](https://github.com/giswqs/geemap/blob/master/examples/data/rf_example.csv).
###Code
# read the feature table to train our RandomForest model
# data taken from ee.FeatureCollection('GOOGLE/EE/DEMOS/demo_landcover_labels')
url = "https://raw.githubusercontent.com/giswqs/geemap/master/examples/data/rf_example.csv"
df = pd.read_csv(url)
df
# specify the names of the features (i.e. band names) and label
# feature names used to extract out features and define what bands
feature_names = ['B2', 'B3', 'B4', 'B5', 'B6', 'B7']
label = "landcover"
# get the features and labels into separate variables
X = df[feature_names]
y = df[label]
# create a classifier and fit
n_trees = 10
rf = ensemble.RandomForestClassifier(n_trees).fit(X,y)
###Output
_____no_output_____
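###Markdown
Before converting the estimator, it can be worth checking how well it fits the table locally. A small sketch, not part of the original notebook, using scikit-learn's built-in score:
###Code
# Sketch: training accuracy of the local random forest (optimistic, since
# it is evaluated on the same samples it was fit on).
print('training accuracy:', rf.score(X, y))
###Output
_____no_output_____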
###Markdown
Convert a sklearn classifier object to a list of strings
###Code
# convert the estimator into a list of strings
# this function also works with the ensemble.ExtraTrees estimator
trees = ml.rf_to_strings(rf,feature_names)
# print the first tree to see the result
print(trees[0])
print(trees[1])
# number of trees we converted should equal the number of trees we defined for the model
len(trees) == n_trees
###Output
_____no_output_____
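###Markdown
Since Earth Engine limits how much data can be passed from client to server (a constraint discussed further below), a quick local size estimate on the tree strings can flag oversized models early. A minimal sketch:
###Code
# Sketch: approximate payload size of the serialized trees.
total_bytes = sum(len(t.encode('utf-8')) for t in trees)
print(f'{total_bytes / 1e6:.2f} MB of tree strings (EE request limit is ~40 MB)')
###Output
_____no_output_____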
###Markdown
Convert sklearn classifier to GEE classifierAt this point you can take the list of strings and save them locally to avoid training again. However, we want to use the model with EE so we need to create an ee.Classifier and persist the data on ee for best results.
###Code
# create a ee classifier to use with ee objects from the trees
ee_classifier = ml.strings_to_classifier(trees)
# ee_classifier.getInfo()
###Output
_____no_output_____
###Markdown
Classify image using GEE classifier
###Code
# Make a cloud-free Landsat 8 TOA composite (from raw imagery).
l8 = ee.ImageCollection('LANDSAT/LC08/C01/T1');
image = ee.Algorithms.Landsat.simpleComposite(
collection= l8.filterDate('2018-01-01', '2018-12-31'),
asFloat= True
)
# classify the image using the classifier we created from the local training
# note: here we select the feature_names from the image that way the classifier knows which bands to use
classified = image.select(feature_names).classify(ee_classifier)
# display results
Map = geemap.Map(center=(37.75,-122.25), zoom=11)
Map.addLayer(image,{"bands": ['B7', 'B5', 'B3'], "min":0.05, "max": 0.55, "gamma":1.5}, 'image')
Map.addLayer(classified, {"min": 0, "max": 2, "palette": ['red', 'green', 'blue']},'classification')
Map
###Output
_____no_output_____
###Markdown
Yay!! 🎉 Looks like our example works. Don't party too much because there is a catch... This workflow has several limitations, particularly due to how much data you can pass from the client to the server and how large of a model ee can actually handle. EE can only handle 40MB of data passed to the server, so if you have a lot of large decision tree strings then this will not work. Also, creating a classifier from strings has limitations (see this ee-forum discussion: https://groups.google.com/g/google-earth-engine-developers/c/lFFU1GBPzi8/m/6MewQk1FBwAJ), as this is again limited by string lengths when ee creates a computation graph. So, you can use this, but know you will probably run into errors when training large models. Save trees to the cloud Now that we have the strings in a format that ee can use, we want to save them for later use. There is a function to export a list of tree strings to a feature collection. The feature collection will have a pro
###Code
user_id = geemap.ee_user_id()
user_id
# specify asset id where to save trees
# be sure to change <user_name> to your ee user name
asset_id = user_id + "/random_forest_strings_test"
asset_id
# kick off an export process so it will be saved to the ee asset
ml.export_trees_to_fc(trees,asset_id)
# this will kick off an export task, so wait a few minutes before moving on
# read the exported tree feature collection
rf_fc = ee.FeatureCollection(asset_id)
# convert it to a classifier, very similar to the `ml.trees_to_classifier` function
another_classifier = ml.fc_to_classifier(rf_fc)
# classify the image again but with the classifier from the persisted trees
classified = image.select(feature_names).classify(another_classifier)
# display results
# we should get the exact same results as before
Map = geemap.Map(center=(37.75,-122.25), zoom=11)
Map.addLayer(image,{"bands": ['B7', 'B5', 'B3'], "min":0.05, "max": 0.55, "gamma":1.5}, 'image')
Map.addLayer(classified, {"min": 0, "max": 2, "palette": ['red', 'green', 'blue']},'classification')
Map
###Output
_____no_output_____
###Markdown
Save trees locally
###Code
import os
out_csv = os.path.expanduser("~/Downloads/trees.csv")
ml.trees_to_csv(trees, out_csv)
another_classifier = ml.csv_to_classifier(out_csv)
classified = image.select(feature_names).classify(another_classifier)
# display results
# we should get the exact same results as before
Map = geemap.Map(center=(37.75,-122.25), zoom=11)
Map.addLayer(image,{"bands": ['B7', 'B5', 'B3'], "min":0.05, "max": 0.55, "gamma":1.5}, 'image')
Map.addLayer(classified, {"min": 0, "max": 2, "palette": ['red', 'green', 'blue']},'classification')
Map
###Output
_____no_output_____
###Markdown
Uncomment the following line to install [geemap](https://geemap.org) if needed.
###Code
# !pip install geemap scikit-learn
###Output
_____no_output_____
###Markdown
How to use locally trained machine learning models with GEEThis notebook illustrates how to train a random forest (or any other ensemble tree estimator) locally using scikit-learn, convert the estimator into a string representation that Earth Engine can interpret, and how to apply the machine learning model with EE. **The notebook and the geemap machine learning module ([ml.py](https://geemap.org/ml/)) were contributed by [Kel Markert](https://github.com/KMarkert). A huge thank you to him.**
###Code
import ee
import geemap
import pandas as pd
from geemap import ml
from sklearn import ensemble
geemap.ee_initialize()
###Output
_____no_output_____
###Markdown
Train a model locally using scikit-learnIn this demo, we are going to use the training data from [here](https://github.com/giswqs/geemap/blob/master/examples/data/rf_example.csv).
###Code
# read the feature table to train our RandomForest model
# data taken from ee.FeatureCollection('GOOGLE/EE/DEMOS/demo_landcover_labels')
url = "https://raw.githubusercontent.com/giswqs/geemap/master/examples/data/rf_example.csv"
df = pd.read_csv(url)
df
# specify the names of the features (i.e. band names) and label
# feature names used to extract out features and define what bands
feature_names = ['B2', 'B3', 'B4', 'B5', 'B6', 'B7']
label = "landcover"
# get the features and labels into seperate variables
X = df[feature_names]
y = df[label]
# create a classifier and fit
n_trees = 100
rf = ensemble.RandomForestClassifier(n_trees).fit(X,y)
###Output
_____no_output_____
###Markdown
Convert a sklearn classifier object to a list of strings
###Code
# convert the estimator into a list of strings
# this function also works with the ensemble.ExtraTrees estimator
trees = ml.rf_to_strings(rf,feature_names)
# print the first tree to see the result
print(trees[0])
print(trees[1])
# number of trees we converted should equal the number of trees we defined for the model
len(trees) == n_trees
###Output
_____no_output_____
###Markdown
Convert sklearn classifier to GEE classifierAt this point you can take the list of strings and save them locally to avoid training again. However, we want to use the model with EE so we need to create an ee.Classifier and persist the data on ee for best results.
###Code
# create a ee classifier to use with ee objects from the trees
ee_classifier = ml.strings_to_classifier(trees)
# ee_classifier.getInfo()
###Output
_____no_output_____
###Markdown
Classify image using GEE classifier
###Code
# Make a cloud-free Landsat 8 TOA composite (from raw imagery).
l8 = ee.ImageCollection('LANDSAT/LC08/C01/T1');
image = ee.Algorithms.Landsat.simpleComposite(
collection= l8.filterDate('2018-01-01', '2018-12-31'),
asFloat= True
)
# classify the image using the classifier we created from the local training
# note: here we select the feature_names from the image that way the classifier knows which bands to use
classified = image.select(feature_names).classify(ee_classifier)
# display results
Map = geemap.Map(center=(37.75,-122.25), zoom=11)
Map.addLayer(image,{"bands": ['B7', 'B5', 'B3'], "min":0.05, "max": 0.55, "gamma":1.5}, 'image')
Map.addLayer(classified, {"min": 0, "max": 2, "palette": ['red', 'green', 'blue']},'classification')
Map
###Output
_____no_output_____
###Markdown
Yay!! 🎉 Looks like our example works. Don't party too much because there is a catch... This workflow has several limitations, particularly due to how much data you can pass from the client to the server and how large of a model ee can actually handle. EE can only handle 40MB of data passed to the server, so if you have a lot of large decision tree strings then this will not work. Also, creating a classifier from strings has limitations (see this ee-forum discussion: https://groups.google.com/g/google-earth-engine-developers/c/lFFU1GBPzi8/m/6MewQk1FBwAJ), as this is again limited by string lengths when ee creates a computation graph. So, you can use this, but know you will probably run into errors when training large models. Save trees to the cloud Now that we have the strings in a format that ee can use, we want to save them for later use. There is a function to export a list of tree strings to a feature collection. The feature collection will have a pro
###Code
user_id = geemap.ee_user_id()
user_id
# specify asset id where to save trees
# be sure to change <user_name> to your ee user name
asset_id = user_id + "/random_forest_strings_test"
asset_id
# kick off an export process so it will be saved to the ee asset
# ml.export_trees_to_fc(trees,asset_id)
# this will kick off an export task, so wait a few minutes before moving on
# read the exported tree feature collection
rf_fc = ee.FeatureCollection(asset_id)
# convert it to a classifier, very similar to the `ml.trees_to_classifier` function
another_classifier = ml.fc_to_classifier(rf_fc)
# classify the image again but with the classifier from the persisted trees
classified = image.select(feature_names).classify(another_classifier)
# display results
# we should get the exact same results as before
Map = geemap.Map(center=(37.75,-122.25), zoom=11)
Map.addLayer(image,{"bands": ['B7', 'B5', 'B3'], "min":0.05, "max": 0.55, "gamma":1.5}, 'image')
Map.addLayer(classified, {"min": 0, "max": 2, "palette": ['red', 'green', 'blue']},'classification')
Map
###Output
_____no_output_____
###Markdown
Save trees locally
###Code
import os
out_csv = os.path.expanduser("~/Downloads/trees.csv")
ml.trees_to_csv(trees, out_csv)
another_classifier = ml.csv_to_classifier(out_csv)
classified = image.select(feature_names).classify(another_classifier)
# display results
# we should get the exact same results as before
Map = geemap.Map(center=(37.75,-122.25), zoom=11)
Map.addLayer(image,{"bands": ['B7', 'B5', 'B3'], "min":0.05, "max": 0.55, "gamma":1.5}, 'image')
Map.addLayer(classified, {"min": 0, "max": 2, "palette": ['red', 'green', 'blue']},'classification')
Map
###Output
_____no_output_____
###Markdown
Uncomment the following line to install [geemap](https://geemap.org) if needed.
###Code
# !pip install geemap scikit-learn
###Output
_____no_output_____
###Markdown
How to use locally trained machine learning models with GEEThis notebook illustrates how to train a random forest (or any other ensemble tree estimator) locally using scikit-learn, convert the estimator into a string representation that Earth Engine can interpret, and how to apply the machine learning model with EE. **The notebook and the geemap machine learning module ([ml.py](https://geemap.org/ml/)) were contributed by [Kel Markert](https://github.com/KMarkert). A huge thank you to him.**
###Code
import ee
import geemap
import pandas as pd
from geemap import ml
from sklearn import ensemble
geemap.ee_initialize()
###Output
_____no_output_____
###Markdown
Train a model locally using scikit-learnIn this demo, we are going to use the training data from [here](https://github.com/giswqs/geemap/blob/master/examples/data/rf_example.csv).
###Code
# read the feature table to train our RandomForest model
# data taken from ee.FeatureCollection('GOOGLE/EE/DEMOS/demo_landcover_labels')
url = "https://raw.githubusercontent.com/giswqs/geemap/master/examples/data/rf_example.csv"
df = pd.read_csv(url)
df
# specify the names of the features (i.e. band names) and label
# feature names used to extract out features and define what bands
feature_names = ['B2', 'B3', 'B4', 'B5', 'B6', 'B7']
label = "landcover"
# get the features and labels into separate variables
X = df[feature_names]
y = df[label]
# create a classifier and fit
n_trees = 10
rf = ensemble.RandomForestClassifier(n_trees).fit(X,y)
###Output
_____no_output_____
###Markdown
Convert a sklearn classifier object to a list of strings
###Code
# convert the estimator into a list of strings
# this function also works with the ensemble.ExtraTrees estimator
trees = ml.rf_to_strings(rf,feature_names)
# print the first tree to see the result
print(trees[0])
print(trees[1])
# number of trees we converted should equal the number of trees we defined for the model
len(trees) == n_trees
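# (Sketch) the note further below mentions a ~40MB cap on data passed from the
# client to the EE servers; a rough estimate of the tree-string payload size:
approx_mb = sum(len(t) for t in trees) / 1e6
print("approx. tree-string payload: %.2f MB" % approx_mb)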
###Output
_____no_output_____
###Markdown
Convert sklearn classifier to GEE classifierAt this point you can take the list of strings and save them locally to avoid training again. However, we want to use the model with EE so we need to create an ee.Classifier and persist the data on ee for best results.
###Code
# create a ee classifier to use with ee objects from the trees
ee_classifier = ml.strings_to_classifier(trees)
# ee_classifier.getInfo()
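# (Sketch) the tree strings are plain Python strings, so you could also stash
# them yourself before going further, e.g. with json (hypothetical file name):
# import json
# with open("trees_backup.json", "w") as f:
#     json.dump(trees, f)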
###Output
_____no_output_____
###Markdown
Classify image using GEE classifier
###Code
# Make a cloud-free Landsat 8 TOA composite (from raw imagery).
l8 = ee.ImageCollection('LANDSAT/LC08/C01/T1')
image = ee.Algorithms.Landsat.simpleComposite(
    collection=l8.filterDate('2018-01-01', '2018-12-31'),
    asFloat=True,
)
# classify the image using the classifier we created from the local training
# note: here we select the feature_names from the image that way the classifier knows which bands to use
classified = image.select(feature_names).classify(ee_classifier)
# display results
Map = geemap.Map(center=(37.75,-122.25), zoom=11)
Map.addLayer(image,{"bands": ['B7', 'B5', 'B3'], "min":0.05, "max": 0.55, "gamma":1.5}, 'image')
Map.addLayer(classified, {"min": 0, "max": 2, "palette": ['red', 'green', 'blue']},'classification')
Map
###Output
_____no_output_____
###Markdown
Yay!! 🎉 Looks like our example works. Don't party too much because there is a catch... This workflow has several limitations, particularly due to how much data you can pass from the client to the server and how large of a model ee can actually handle. EE can only handle 40MB of data passed to the server, so if you have a lot of large decision tree strings then this will not work. Also, creating a classifier from strings has limitations (see this ee-forum discussion: https://groups.google.com/g/google-earth-engine-developers/c/lFFU1GBPzi8/m/6MewQk1FBwAJ); this is again limited by string lengths when ee creates a computation graph. So, you can use this, but know that you will probably run into errors when training large models. Save trees to the cloudNow that we have the strings in a format that ee can use, we want to save them for later use. There is a function to export a list of tree strings to a feature collection. The feature collection will have a property storing each tree string.
###Code
user_id = geemap.ee_user_id()
user_id
# specify asset id where to save trees
# be sure to change <user_name> to your ee user name
asset_id = user_id + "/random_forest_strings_test"
asset_id
# kick off an export process so it will be saved to the ee asset
ml.export_trees_to_fc(trees,asset_id)
# this will kick off an export task, so wait a few minutes before moving on
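# (Sketch, assumption: the export above is the most recent task in your EE
# account) block until the export finishes instead of guessing how long to wait:
import time
task = ee.batch.Task.list()[0]
while task.active():
    time.sleep(10)
print(task.status())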
# read the exported tree feature collection
rf_fc = ee.FeatureCollection(asset_id)
# convert it to a classifier, very similar to the `ml.trees_to_classifier` function
another_classifier = ml.fc_to_classifier(rf_fc)
# classify the image again but with the classifier from the persisted trees
classified = image.select(feature_names).classify(another_classifier)
# display results
# we should get the exact same results as before
Map = geemap.Map(center=(37.75,-122.25), zoom=11)
Map.addLayer(image,{"bands": ['B7', 'B5', 'B3'], "min":0.05, "max": 0.55, "gamma":1.5}, 'image')
Map.addLayer(classified, {"min": 0, "max": 2, "palette": ['red', 'green', 'blue']},'classification')
Map
###Output
_____no_output_____
###Markdown
Save trees locally
###Code
import os
out_csv = os.path.expanduser("~/Downloads/trees.csv")
ml.trees_to_csv(trees, out_csv)
another_classifier = ml.csv_to_classifier(out_csv)
classified = image.select(feature_names).classify(another_classifier)
# display results
# we should get the exact same results as before
Map = geemap.Map(center=(37.75,-122.25), zoom=11)
Map.addLayer(image,{"bands": ['B7', 'B5', 'B3'], "min":0.05, "max": 0.55, "gamma":1.5}, 'image')
Map.addLayer(classified, {"min": 0, "max": 2, "palette": ['red', 'green', 'blue']},'classification')
Map
###Output
_____no_output_____
###Markdown
Uncomment the following line to install [geemap](https://geemap.org) if needed.
###Code
# !pip install geemap scikit-learn
###Output
_____no_output_____
###Markdown
How to use locally trained machine learning models with GEEThis notebook illustrates how to train a random forest (or any other ensemble tree estimator) locally using scikit-learn, convert the estimator into a string representation that Earth Engine can interpret, and how to apply the machine learning model with EE. **The notebook and the geemap machine learning module ([ml.py](https://geemap.org/ml/)) were contributed by [Kel Markert](https://github.com/KMarkert). A huge thank you to him.**
###Code
import ee
import geemap
import pandas as pd
from geemap import ml
from sklearn import ensemble
geemap.ee_initialize()
###Output
_____no_output_____
###Markdown
Train a model locally using scikit-learnIn this demo, we are going to use the training data from [here](https://github.com/giswqs/geemap/blob/master/examples/data/rf_example.csv).
###Code
# read the feature table to train our RandomForest model
# data taken from ee.FeatureCollection('GOOGLE/EE/DEMOS/demo_landcover_labels')
url = "https://raw.githubusercontent.com/giswqs/geemap/master/examples/data/rf_example.csv"
df = pd.read_csv(url)
df
# specify the names of the features (i.e. band names) and label
# feature names used to extract out features and define what bands
feature_names = ['B2', 'B3', 'B4', 'B5', 'B6', 'B7']
label = "landcover"
# get the features and labels into separate variables
X = df[feature_names]
y = df[label]
# create a classifier and fit
n_trees = 10
rf = ensemble.RandomForestClassifier(n_trees).fit(X, y)
###Output
_____no_output_____
###Markdown
Convert a sklearn classifier object to a list of strings
###Code
# convert the estimator into a list of strings
# this function also works with the ensemble.ExtraTrees estimator
trees = ml.rf_to_strings(rf, feature_names)
# print the first tree to see the result
print(trees[0])
print(trees[1])
# number of trees we converted should equal the number of trees we defined for the model
len(trees) == n_trees
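# (Sketch) quick size check -- the note below mentions EE only accepts ~40MB of
# data from the client, so very large forests will not fit:
print("approx. payload: %.2f MB" % (sum(len(t) for t in trees) / 1e6))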
###Output
_____no_output_____
###Markdown
Convert sklearn classifier to GEE classifierAt this point you can take the list of strings and save them locally to avoid training again. However, we want to use the model with EE so we need to create an ee.Classifier and persist the data on ee for best results.
###Code
# create a ee classifier to use with ee objects from the trees
ee_classifier = ml.strings_to_classifier(trees)
# ee_classifier.getInfo()
###Output
_____no_output_____
###Markdown
Classify image using GEE classifier
###Code
# Make a cloud-free Landsat 8 TOA composite (from raw imagery).
l8 = ee.ImageCollection('LANDSAT/LC08/C01/T1')
image = ee.Algorithms.Landsat.simpleComposite(
collection=l8.filterDate('2018-01-01', '2018-12-31'), asFloat=True
)
# classify the image using the classifier we created from the local training
# note: here we select the feature_names from the image that way the classifier knows which bands to use
classified = image.select(feature_names).classify(ee_classifier)
# display results
Map = geemap.Map(center=(37.75, -122.25), zoom=11)
Map.addLayer(
image,
{"bands": ['B7', 'B5', 'B3'], "min": 0.05, "max": 0.55, "gamma": 1.5},
'image',
)
Map.addLayer(
classified,
{"min": 0, "max": 2, "palette": ['red', 'green', 'blue']},
'classification',
)
Map
###Output
_____no_output_____
###Markdown
Yay!! 🎉 Looks like our example works. Don't party too much because there is a catch... This workflow has several limitations, particularly due to how much data you can pass from the client to the server and how large of a model ee can actually handle. EE can only handle 40MB of data passed to the server, so if you have a lot of large decision tree strings then this will not work. Also, creating a classifier from strings has limitations (see this ee-forum discussion: https://groups.google.com/g/google-earth-engine-developers/c/lFFU1GBPzi8/m/6MewQk1FBwAJ); this is again limited by string lengths when ee creates a computation graph. So, you can use this, but know that you will probably run into errors when training large models. Save trees to the cloudNow that we have the strings in a format that ee can use, we want to save them for later use. There is a function to export a list of tree strings to a feature collection. The feature collection will have a property storing each tree string.
###Code
user_id = geemap.ee_user_id()
user_id
# specify asset id where to save trees
# be sure to change <user_name> to your ee user name
asset_id = user_id + "/random_forest_strings_test"
asset_id
# kick off an export process so it will be saved to the ee asset
ml.export_trees_to_fc(trees, asset_id)
# this will kick off an export task, so wait a few minutes before moving on
# read the exported tree feature collection
rf_fc = ee.FeatureCollection(asset_id)
# convert it to a classifier, very similar to the `ml.trees_to_classifier` function
another_classifier = ml.fc_to_classifier(rf_fc)
# classify the image again but with the classifier from the persisted trees
classified = image.select(feature_names).classify(another_classifier)
# display results
# we should get the exact same results as before
Map = geemap.Map(center=(37.75, -122.25), zoom=11)
Map.addLayer(
image,
{"bands": ['B7', 'B5', 'B3'], "min": 0.05, "max": 0.55, "gamma": 1.5},
'image',
)
Map.addLayer(
classified,
{"min": 0, "max": 2, "palette": ['red', 'green', 'blue']},
'classification',
)
Map
###Output
_____no_output_____
###Markdown
Save trees locally
###Code
import os
out_csv = os.path.expanduser("~/Downloads/trees.csv")
ml.trees_to_csv(trees, out_csv)
another_classifier = ml.csv_to_classifier(out_csv)
classified = image.select(feature_names).classify(another_classifier)
# display results
# we should get the exact same results as before
Map = geemap.Map(center=(37.75, -122.25), zoom=11)
Map.addLayer(
image,
{"bands": ['B7', 'B5', 'B3'], "min": 0.05, "max": 0.55, "gamma": 1.5},
'image',
)
Map.addLayer(
classified,
{"min": 0, "max": 2, "palette": ['red', 'green', 'blue']},
'classification',
)
Map
###Output
_____no_output_____ |
week2_model_based/practice_vi.ipynb | ###Markdown
Markov decision processThis week's methods are all built to solve __M__arkov __D__ecision __P__rocesses. In the broadest sense, an MDP is defined by how it changes states and how rewards are computed.State transition is defined by $P(s' |s,a)$ - how likely are you to end at state $s'$ if you take action $a$ from state $s$. Now there's more than one way to define rewards, but we'll use the $r(s,a,s')$ function for convenience. For starters, let's define a simple MDP from this picture:_img by MistWiz (Own work) [Public domain], via Wikimedia Commons_
###Code
transition_probs = {
's0': {
'a0': {'s0': 0.5, 's2': 0.5},
'a1': {'s2': 1}
},
's1': {
'a0': {'s0': 0.7, 's1': 0.1, 's2': 0.2},
'a1': {'s1': 0.95, 's2': 0.05}
},
's2': {
'a0': {'s0': 0.4, 's1': 0.6},
'a1': {'s0': 0.3, 's1': 0.3, 's2': 0.4}
}
}
rewards = {
's1': {'a0': {'s0': +5}},
's2': {'a1': {'s0': -1}}
}
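# (Sketch) sanity check: for every (state, action) pair above, the outgoing
# transition probabilities should sum to 1
for s, actions in transition_probs.items():
    for a, outcomes in actions.items():
        assert abs(sum(outcomes.values()) - 1.0) < 1e-9, (s, a)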
from mdp import MDP
mdp = MDP(transition_probs, rewards, initial_state='s0')
###Output
_____no_output_____
###Markdown
We can now use MDP just as any other gym environment:
###Code
print('initial state =', mdp.reset())
next_state, reward, done, info = mdp.step('a1')
print('next_state = %s, reward = %s, done = %s' % (next_state, reward, done))
###Output
_____no_output_____
###Markdown
but it also has other methods that you'll need for Value Iteration
###Code
print("mdp.get_all_states =", mdp.get_all_states())
print("mdp.get_possible_actions('s1') = ", mdp.get_possible_actions('s1'))
print("mdp.get_next_states('s1', 'a0') = ", mdp.get_next_states('s1', 'a0'))
print("mdp.get_reward('s1', 'a0', 's0') = ", mdp.get_reward('s1', 'a0', 's0'))
print("mdp.get_transition_prob('s1', 'a0', 's0') = ", mdp.get_transition_prob('s1', 'a0', 's0'))
###Output
_____no_output_____
###Markdown
Value IterationNow let's build something to solve this MDP. The simplest algorithm so far is __V__alue __I__terationHere's the pseudo-code for VI:---`1.` Initialize $V^{(0)}(s)=0$, for all $s$`2.` For $i=0, 1, 2, \dots$ `3.` $ \quad V_{(i+1)}(s) = \max_a \sum_{s'} P(s' | s,a) \cdot [ r(s,a,s') + \gamma V_{i}(s')]$, for all $s$--- First, let's write a function to compute the state-action value function $Q^{\pi}$, defined as follows$$Q_i(s, a) = \sum_{s'} P(s' | s,a) \cdot [ r(s,a,s') + \gamma V_{i}(s')]$$
###Code
def get_action_value(mdp, state_values, state, action, gamma):
    """ Computes Q(s,a) as in formula above """
    Q = 0
    for next_state in mdp.get_all_states():
        Q += mdp.get_transition_prob(state, action, next_state) \
            * (mdp.get_reward(state, action, next_state)
               + gamma * state_values[next_state])
    return Q
import numpy as np
test_Vs = {s: i for i, s in enumerate(mdp.get_all_states())}
assert np.allclose(get_action_value(mdp, test_Vs, 's2', 'a1', 0.9), 0.69)
assert np.allclose(get_action_value(mdp, test_Vs, 's1', 'a0', 0.9), 3.95)
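# Worked arithmetic behind the first assert (assuming get_all_states() returns
# ('s0', 's1', 's2'), so test_Vs = {'s0': 0, 's1': 1, 's2': 2}):
# Q(s2, a1) = 0.3*(-1 + 0.9*0) + 0.3*(0 + 0.9*1) + 0.4*(0 + 0.9*2)
#           = -0.3 + 0.27 + 0.72 = 0.69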
###Output
_____no_output_____
###Markdown
Using $Q(s,a)$ we can now define the "next" V(s) for value iteration. $$V_{(i+1)}(s) = \max_a \sum_{s'} P(s' | s,a) \cdot [ r(s,a,s') + \gamma V_{i}(s')] = \max_a Q_i(s,a)$$
###Code
def get_new_state_value(mdp, state_values, state, gamma):
    """ Computes next V(s) as per formula above. Please do not change state_values in process. """
    if mdp.is_terminal(state):
        return 0
    return max(get_action_value(mdp, state_values, state, action, gamma)
               for action in mdp.get_possible_actions(state))
test_Vs_copy = dict(test_Vs)
assert np.allclose(get_new_state_value(mdp, test_Vs, 's0', 0.9), 1.8)
assert np.allclose(get_new_state_value(mdp, test_Vs, 's2', 0.9), 0.69)
assert test_Vs == test_Vs_copy, "please do not change state_values in get_new_state_value"
###Output
_____no_output_____
###Markdown
Finally, let's combine everything we wrote into a working value iteration algo.
###Code
# parameters
gamma = 0.9 # discount for MDP
num_iter = 100 # maximum iterations, excluding initialization
min_difference = 0.001 # stop VI if new values are this close to old values (or closer)
# initialize V(s)
state_values = {s: 0 for s in mdp.get_all_states()}
for i in range(num_iter):
# Compute new state values using the functions you defined above.
# It must be a dict {state : new_V(state)}
    new_state_values = {state: get_new_state_value(mdp, state_values, state, gamma)
                        for state in mdp.get_all_states()}
assert isinstance(new_state_values, dict)
# Compute difference
diff = max(abs(new_state_values[s] - state_values[s]) for s in mdp.get_all_states())
print("iter %4i | diff: %6.5f | " % (i, diff), end="")
print(' '.join("V(%s) = %.3f" % (s, v) for s, v in state_values.items()), end='\n\n')
state_values = new_state_values
if diff < min_difference:
print("Terminated")
break
print("Final state values:", state_values)
assert abs(state_values['s0'] - 8.032) < 0.01
assert abs(state_values['s1'] - 11.169) < 0.01
assert abs(state_values['s2'] - 8.921) < 0.01
###Output
_____no_output_____
###Markdown
Now let's use those $V^{*}(s)$ to find optimal actions in each state $$\pi^*(s) = argmax_a \sum_{s'} P(s' | s,a) \cdot [ r(s,a,s') + \gamma V_{i}(s')] = argmax_a Q_i(s,a)$$ The only difference vs V(s) is that here we take not max but argmax: find the action with the maximum Q(s,a).
###Code
def get_optimal_action(mdp, state_values, state, gamma=0.9):
    """ Finds optimal action using formula above. """
    if mdp.is_terminal(state):
        return None
    actions = mdp.get_possible_actions(state)
    q_values = [get_action_value(mdp, state_values, state, a, gamma) for a in actions]
    return actions[np.argmax(q_values)]
assert get_optimal_action(mdp, state_values, 's0', gamma) == 'a1'
assert get_optimal_action(mdp, state_values, 's1', gamma) == 'a0'
assert get_optimal_action(mdp, state_values, 's2', gamma) == 'a0'
# Measure agent's average reward
s = mdp.reset()
rewards = []
for _ in range(10000):
s, r, done, _ = mdp.step(get_optimal_action(mdp, state_values, s, gamma))
rewards.append(r)
print("average reward: ", np.mean(rewards))
assert 0.85 < np.mean(rewards) < 1.0
###Output
_____no_output_____
###Markdown
Frozen lake
###Code
from mdp import FrozenLakeEnv
mdp = FrozenLakeEnv(slip_chance=0)
mdp.render()
def value_iteration(mdp, state_values=None, gamma=0.9, num_iter=1000, min_difference=1e-5):
""" performs num_iter value iteration steps starting from state_values. Same as before but in a function """
state_values = state_values or {s: 0 for s in mdp.get_all_states()}
for i in range(num_iter):
# Compute new state values using the functions you defined above.
# It must be a dict {state : new_V(state)}
        new_state_values = {state: get_new_state_value(mdp, state_values, state, gamma)
                            for state in mdp.get_all_states()}
assert isinstance(new_state_values, dict)
# Compute difference
diff = max(abs(new_state_values[s] - state_values[s]) for s in mdp.get_all_states())
print("iter %4i | diff: %6.5f | V(start): %.3f " % (i, diff, new_state_values[mdp._initial_state]))
state_values = new_state_values
if diff < min_difference:
print("Terminated")
break
return state_values
state_values = value_iteration(mdp)
s = mdp.reset()
mdp.render()
for t in range(100):
a = get_optimal_action(mdp, state_values, s, gamma)
print(a, end='\n\n')
s, r, done, _ = mdp.step(a)
mdp.render()
if done: break
###Output
_____no_output_____
###Markdown
Let's visualize!It's usually interesting to see what your algorithm actually learned under the hood. To do so, we'll plot state value functions and optimal actions at each VI step.
###Code
import matplotlib.pyplot as plt
%matplotlib inline
def draw_policy(mdp, state_values):
plt.figure(figsize=(3, 3))
h, w = mdp.desc.shape
states = sorted(mdp.get_all_states())
V = np.array([state_values[s] for s in states])
Pi = {s: get_optimal_action(mdp, state_values, s, gamma) for s in states}
plt.imshow(V.reshape(w, h), cmap='gray', interpolation='none', clim=(0, 1))
ax = plt.gca()
ax.set_xticks(np.arange(h) - .5)
ax.set_yticks(np.arange(w) - .5)
ax.set_xticklabels([])
ax.set_yticklabels([])
Y, X = np.mgrid[0:4, 0:4]
    a2uv = {'left': (-1, 0), 'down': (0, -1), 'right': (1, 0), 'up': (0, 1)}
for y in range(h):
for x in range(w):
plt.text(x, y, str(mdp.desc[y, x].item()),
color='g', size=12, verticalalignment='center',
horizontalalignment='center', fontweight='bold')
a = Pi[y, x]
if a is None: continue
u, v = a2uv[a]
plt.arrow(x, y, u * .3, -v * .3, color='m', head_width=0.1, head_length=0.1)
plt.grid(color='b', lw=2, ls='-')
plt.show()
state_values = {s: 0 for s in mdp.get_all_states()}
for i in range(10):
print("after iteration %i" % i)
state_values = value_iteration(mdp, state_values, num_iter=1)
draw_policy(mdp, state_values)
# please ignore iter 0 at each step
from time import sleep
from IPython.display import clear_output
mdp = FrozenLakeEnv(map_name='8x8', slip_chance=0.1)
state_values = {s: 0 for s in mdp.get_all_states()}
for i in range(30):
clear_output(True)
print("after iteration %i" % i)
state_values = value_iteration(mdp, state_values, num_iter=1)
draw_policy(mdp, state_values)
sleep(0.5)
# please ignore iter 0 at each step
###Output
_____no_output_____
###Markdown
Massive tests
###Code
mdp = FrozenLakeEnv(slip_chance=0)
state_values = value_iteration(mdp)
total_rewards = []
for game_i in range(1000):
s = mdp.reset()
rewards = []
for t in range(100):
s, r, done, _ = mdp.step(get_optimal_action(mdp, state_values, s, gamma))
rewards.append(r)
if done: break
total_rewards.append(np.sum(rewards))
print("average reward: ", np.mean(total_rewards))
assert 1.0 <= np.mean(total_rewards) <= 1.0
print("Well done!")
# Measure agent's average reward
mdp = FrozenLakeEnv(slip_chance=0.1)
state_values = value_iteration(mdp)
total_rewards = []
for game_i in range(1000):
s = mdp.reset()
rewards = []
for t in range(100):
s, r, done, _ = mdp.step(get_optimal_action(mdp, state_values, s, gamma))
rewards.append(r)
if done: break
total_rewards.append(np.sum(rewards))
print("average reward: ", np.mean(total_rewards))
assert 0.8 <= np.mean(total_rewards) <= 0.95
print("Well done!")
# Measure agent's average reward
mdp = FrozenLakeEnv(slip_chance=0.25)
state_values = value_iteration(mdp)
total_rewards = []
for game_i in range(1000):
s = mdp.reset()
rewards = []
for t in range(100):
s, r, done, _ = mdp.step(get_optimal_action(mdp, state_values, s, gamma))
rewards.append(r)
if done: break
total_rewards.append(np.sum(rewards))
print("average reward: ", np.mean(total_rewards))
assert 0.6 <= np.mean(total_rewards) <= 0.7
print("Well done!")
# Measure agent's average reward
mdp = FrozenLakeEnv(slip_chance=0.2, map_name='8x8')
state_values = value_iteration(mdp)
total_rewards = []
for game_i in range(1000):
s = mdp.reset()
rewards = []
for t in range(100):
s, r, done, _ = mdp.step(get_optimal_action(mdp, state_values, s, gamma))
rewards.append(r)
if done: break
total_rewards.append(np.sum(rewards))
print("average reward: ", np.mean(total_rewards))
assert 0.6 <= np.mean(total_rewards) <= 0.8
print("Well done!")
###Output
_____no_output_____
###Markdown
Submit to coursera
###Code
from submit import submit_assigment
submit_assigment(
get_action_value,
get_new_state_value,
get_optimal_action,
value_iteration,
<EMAIL>,
<TOKEN>)
###Output
_____no_output_____
###Markdown
Markov decision processThis week's methods are all built to solve __M__arkov __D__ecision __P__rocesses. In the broadest sense, an MDP is defined by how it changes states and how rewards are computed.State transition is defined by $P(s' |s,a)$ - how likely are you to end at state $s'$ if you take action $a$ from state $s$. Now there's more than one way to define rewards, but we'll use $r(s,a,s')$ function for convenience._This notebook is inspired by the awesome_ [CS294](https://github.com/berkeleydeeprlcourse/homework/blob/36a0b58261acde756abd55306fbe63df226bf62b/hw2/HW2.ipynb) _by Berkeley_ For starters, let's define a simple MDP from this picture:
###Code
import sys, os
if 'google.colab' in sys.modules and not os.path.exists('.setup_complete'):
!wget -q https://raw.githubusercontent.com/yandexdataschool/Practical_RL/master/setup_colab.sh -O- | bash
!wget -q https://raw.githubusercontent.com/yandexdataschool/Practical_RL/coursera/grading.py -O ../grading.py
!wget -q https://raw.githubusercontent.com/yandexdataschool/Practical_RL/coursera/week2_model_based/submit.py
!wget -q https://raw.githubusercontent.com/yandexdataschool/Practical_RL/coursera/week2_model_based/mdp.py
!touch .setup_complete
# This code creates a virtual display to draw game images on.
# It will have no effect if your machine has a monitor.
if type(os.environ.get("DISPLAY")) is not str or len(os.environ.get("DISPLAY")) == 0:
!bash ../xvfb start
os.environ['DISPLAY'] = ':1'
transition_probs = {
's0': {
'a0': {'s0': 0.5, 's2': 0.5},
'a1': {'s2': 1}
},
's1': {
'a0': {'s0': 0.7, 's1': 0.1, 's2': 0.2},
'a1': {'s1': 0.95, 's2': 0.05}
},
's2': {
'a0': {'s0': 0.4, 's2': 0.6},
'a1': {'s0': 0.3, 's1': 0.3, 's2': 0.4}
}
}
rewards = {
's1': {'a0': {'s0': +5}},
's2': {'a1': {'s0': -1}}
}
from mdp import MDP
mdp = MDP(transition_probs, rewards, initial_state='s0')
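# (Sketch) empirically verify P(s' | s0, a0) by sampling many transitions;
# observed frequencies should approach {'s0': 0.5, 's2': 0.5}
from collections import Counter
counts = Counter()
for _ in range(10000):
    mdp.reset()
    next_s, _, _, _ = mdp.step('a0')
    counts[next_s] += 1
print({s: round(c / 10000, 3) for s, c in counts.items()})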
###Output
_____no_output_____
###Markdown
We can now use MDP just as any other gym environment:
###Code
print('initial state =', mdp.reset())
next_state, reward, done, info = mdp.step('a1')
print('next_state = %s, reward = %s, done = %s' % (next_state, reward, done))
###Output
initial state = s0
next_state = s2, reward = 0.0, done = False
###Markdown
but it also has other methods that you'll need for Value Iteration
###Code
print("mdp.get_all_states =", mdp.get_all_states())
print("mdp.get_possible_actions('s1') = ", mdp.get_possible_actions('s1'))
print("mdp.get_next_states('s1', 'a0') = ", mdp.get_next_states('s1', 'a0'))
print("mdp.get_reward('s1', 'a0', 's0') = ", mdp.get_reward('s1', 'a0', 's0'))
print("mdp.get_transition_prob('s1', 'a0', 's0') = ", mdp.get_transition_prob('s1', 'a0', 's0'))
###Output
mdp.get_all_states = ('s0', 's1', 's2')
mdp.get_possible_actions('s1') = ('a0', 'a1')
mdp.get_next_states('s1', 'a0') = {'s0': 0.7, 's1': 0.1, 's2': 0.2}
mdp.get_reward('s1', 'a0', 's0') = 5
mdp.get_transition_prob('s1', 'a0', 's0') = 0.7
###Markdown
Optional: Visualizing MDPsYou can also visualize any MDP with the drawing function donated by [neer201](https://github.com/neer201).You have to install graphviz both for your system and for Python. 1. * For Ubuntu, just run: `sudo apt-get install graphviz` * For OSX: `brew install graphviz`2. `pip install graphviz`3. restart the notebook__Note:__ Installing graphviz on some OSes (esp. Windows) may be tricky. However, you can ignore this part altogether and use the standard visualization.
###Code
from mdp import has_graphviz
from IPython.display import display
print("Graphviz available:", has_graphviz)
if has_graphviz:
from mdp import plot_graph, plot_graph_with_state_values, plot_graph_optimal_strategy_and_state_values
display(plot_graph(mdp))
###Output
_____no_output_____
###Markdown
Value IterationNow let's build something to solve this MDP. The simplest algorithm so far is __V__alue __I__terationHere's the pseudo-code for VI:---`1.` Initialize $V^{(0)}(s)=0$, for all $s$`2.` For $i=0, 1, 2, \dots$ `3.` $ \quad V_{(i+1)}(s) = \max_a \sum_{s'} P(s' | s,a) \cdot [ r(s,a,s') + \gamma V_{i}(s')]$, for all $s$--- First, let's write a function to compute the state-action value function $Q^{\pi}$, defined as follows$$Q_i(s, a) = \sum_{s'} P(s' | s,a) \cdot [ r(s,a,s') + \gamma V_{i}(s')]$$
###Code
def get_action_value(mdp, state_values, state, action, gamma):
""" Computes Q(s,a) as in formula above """
Q = 0
for next_state in mdp.get_all_states():
Q += mdp.get_transition_prob(state, action, next_state)\
* (mdp.get_reward(state, action, next_state)
+ gamma * state_values[next_state])
return Q
import numpy as np
test_Vs = {s: i for i, s in enumerate(sorted(mdp.get_all_states()))}
assert np.isclose(get_action_value(mdp, test_Vs, 's2', 'a1', 0.9), 0.69)
assert np.isclose(get_action_value(mdp, test_Vs, 's1', 'a0', 0.9), 3.95)
###Output
_____no_output_____
###Markdown
Using $Q(s,a)$ we can now define the "next" V(s) for value iteration. $$V_{(i+1)}(s) = \max_a \sum_{s'} P(s' | s,a) \cdot [ r(s,a,s') + \gamma V_{i}(s')] = \max_a Q_i(s,a)$$
###Code
def get_new_state_value(mdp, state_values, state, gamma):
""" Computes next V(s) as in formula above. Please do not change state_values in process. """
if mdp.is_terminal(state):
return 0
Qs = []
for action in mdp.get_possible_actions(state):
Q = get_action_value(mdp, state_values, state, action, gamma)
Qs.append(Q)
return np.max(Qs)
test_Vs_copy = dict(test_Vs)
assert np.isclose(get_new_state_value(mdp, test_Vs, 's0', 0.9), 1.8)
assert np.isclose(get_new_state_value(mdp, test_Vs, 's2', 0.9), 1.08)
assert np.isclose(get_new_state_value(mdp, {'s0': -1e10, 's1': 0, 's2': -2e10}, 's0', 0.9), -13500000000.0), \
"Please ensure that you handle negative Q-values of arbitrary magnitude correctly"
assert test_Vs == test_Vs_copy, "Please do not change state_values in get_new_state_value"
###Output
_____no_output_____
###Markdown
Finally, let's combine everything we wrote into a working value iteration algo.
###Code
# parameters
gamma = 0.9 # discount for MDP
num_iter = 100 # maximum iterations, excluding initialization
# stop VI if new values are this close to old values (or closer)
min_difference = 0.001
# initialize V(s)
state_values = {s: 0 for s in mdp.get_all_states()}
if has_graphviz:
display(plot_graph_with_state_values(mdp, state_values))
for i in range(num_iter):
# Compute new state values using the functions you defined above.
# It must be a dict {state : float V_new(state)}
new_state_values = {
state: get_new_state_value(mdp, state_values, state, gamma) for state in mdp.get_all_states()
}
assert isinstance(new_state_values, dict)
# Compute difference
diff = max(abs(new_state_values[s] - state_values[s])
for s in mdp.get_all_states())
print("iter %4i | diff: %6.5f | " % (i, diff), end="")
print(' '.join("V(%s) = %.3f" % (s, v) for s, v in state_values.items()))
state_values = new_state_values
if diff < min_difference:
print("Terminated")
break
if has_graphviz:
display(plot_graph_with_state_values(mdp, state_values))
print("Final state values:", state_values)
assert abs(state_values['s0'] - 3.781) < 0.01
assert abs(state_values['s1'] - 7.294) < 0.01
assert abs(state_values['s2'] - 4.202) < 0.01
###Output
Final state values: {'s0': 3.7810348735476405, 's1': 7.294006423867229, 's2': 4.202140275227048}
###Markdown
Now let's use those $V^{*}(s)$ to find optimal actions in each state $$\pi^*(s) = argmax_a \sum_{s'} P(s' | s,a) \cdot [ r(s,a,s') + \gamma V_{i}(s')] = argmax_a Q_i(s,a)$$ The only difference vs V(s) is that here we take not max but argmax: find the action with the maximum Q(s,a).
###Code
def get_optimal_action(mdp, state_values, state, gamma=0.9):
""" Finds optimal action using formula above. """
if mdp.is_terminal(state):
return None
Qs = []
possible_actions = mdp.get_possible_actions(state)
for action in possible_actions:
Q = get_action_value(mdp, state_values, state, action, gamma)
Qs.append(Q)
return possible_actions[np.argmax(Qs)]
assert get_optimal_action(mdp, state_values, 's0', gamma) == 'a1'
assert get_optimal_action(mdp, state_values, 's1', gamma) == 'a0'
assert get_optimal_action(mdp, state_values, 's2', gamma) == 'a1'
assert get_optimal_action(mdp, {'s0': -1e10, 's1': 0, 's2': -2e10}, 's0', 0.9) == 'a0', \
"Please ensure that you handle negative Q-values of arbitrary magnitude correctly"
assert get_optimal_action(mdp, {'s0': -2e10, 's1': 0, 's2': -1e10}, 's0', 0.9) == 'a1', \
"Please ensure that you handle negative Q-values of arbitrary magnitude correctly"
if has_graphviz:
display(plot_graph_optimal_strategy_and_state_values(mdp, state_values, get_action_value))
# Measure agent's average reward
s = mdp.reset()
rewards = []
for _ in range(10000):
s, r, done, _ = mdp.step(get_optimal_action(mdp, state_values, s, gamma))
rewards.append(r)
print("average reward: ", np.mean(rewards))
assert(0.40 < np.mean(rewards) < 0.55)
###Output
average reward: 0.4374
###Markdown
Frozen lake
###Code
from mdp import FrozenLakeEnv
mdp = FrozenLakeEnv(slip_chance=0)
mdp.render()
def value_iteration(mdp, state_values=None, gamma=0.9, num_iter=1000, min_difference=1e-5):
""" performs num_iter value iteration steps starting from state_values. Same as before but in a function """
state_values = state_values or {s: 0 for s in mdp.get_all_states()}
for i in range(num_iter):
# Compute new state values using the functions you defined above. It must be a dict {state : new_V(state)}
new_state_values = {
state: get_new_state_value(mdp, state_values, state, gamma) for state in mdp.get_all_states()
}
assert isinstance(new_state_values, dict)
# Compute difference
diff = max(abs(new_state_values[s] - state_values[s])
for s in mdp.get_all_states())
print("iter %4i | diff: %6.5f | V(start): %.3f " %
(i, diff, new_state_values[mdp._initial_state]))
state_values = new_state_values
if diff < min_difference:
break
return state_values
state_values = value_iteration(mdp)
s = mdp.reset()
mdp.render()
for t in range(100):
a = get_optimal_action(mdp, state_values, s, gamma)
print(a, end='\n\n')
s, r, done, _ = mdp.step(a)
mdp.render()
if done:
break
###Output
*FFF
FHFH
FFFH
HFFG
down
SFFF
*HFH
FFFH
HFFG
down
SFFF
FHFH
*FFH
HFFG
right
SFFF
FHFH
F*FH
HFFG
down
SFFF
FHFH
FFFH
H*FG
right
SFFF
FHFH
FFFH
HF*G
right
SFFF
FHFH
FFFH
HFF*
###Markdown
Let's visualize!It's usually interesting to see what your algorithm actually learned under the hood. To do so, we'll plot state value functions and optimal actions at each VI step.
###Code
import matplotlib.pyplot as plt
%matplotlib inline
def draw_policy(mdp, state_values):
plt.figure(figsize=(3, 3))
h, w = mdp.desc.shape
states = sorted(mdp.get_all_states())
V = np.array([state_values[s] for s in states])
Pi = {s: get_optimal_action(mdp, state_values, s, gamma) for s in states}
plt.imshow(V.reshape(w, h), cmap='gray', interpolation='none', clim=(0, 1))
ax = plt.gca()
ax.set_xticks(np.arange(h)-.5)
ax.set_yticks(np.arange(w)-.5)
ax.set_xticklabels([])
ax.set_yticklabels([])
Y, X = np.mgrid[0:4, 0:4]
a2uv = {'left': (-1, 0), 'down': (0, -1), 'right': (1, 0), 'up': (0, 1)}
for y in range(h):
for x in range(w):
plt.text(x, y, str(mdp.desc[y, x].item()),
color='g', size=12, verticalalignment='center',
horizontalalignment='center', fontweight='bold')
a = Pi[y, x]
if a is None:
continue
u, v = a2uv[a]
plt.arrow(x, y, u*.3, -v*.3, color='m',
head_width=0.1, head_length=0.1)
plt.grid(color='b', lw=2, ls='-')
plt.show()
state_values = {s: 0 for s in mdp.get_all_states()}
for i in range(10):
print("after iteration %i" % i)
state_values = value_iteration(mdp, state_values, num_iter=1)
draw_policy(mdp, state_values)
# please ignore iter 0 at each step
from IPython.display import clear_output
from time import sleep
mdp = FrozenLakeEnv(map_name='8x8', slip_chance=0.1)
state_values = {s: 0 for s in mdp.get_all_states()}
for i in range(30):
clear_output(True)
print("after iteration %i" % i)
state_values = value_iteration(mdp, state_values, num_iter=1)
draw_policy(mdp, state_values)
sleep(0.5)
# please ignore iter 0 at each step
###Output
after iteration 29
iter 0 | diff: 0.00000 | V(start): 0.198
###Markdown
Massive tests
###Code
mdp = FrozenLakeEnv(slip_chance=0)
state_values = value_iteration(mdp)
total_rewards = []
for game_i in range(1000):
s = mdp.reset()
rewards = []
for t in range(100):
s, r, done, _ = mdp.step(
get_optimal_action(mdp, state_values, s, gamma))
rewards.append(r)
if done:
break
total_rewards.append(np.sum(rewards))
print("average reward: ", np.mean(total_rewards))
assert(1.0 <= np.mean(total_rewards) <= 1.0)
print("Well done!")
# Measure agent's average reward
mdp = FrozenLakeEnv(slip_chance=0.1)
state_values = value_iteration(mdp)
total_rewards = []
for game_i in range(1000):
s = mdp.reset()
rewards = []
for t in range(100):
s, r, done, _ = mdp.step(
get_optimal_action(mdp, state_values, s, gamma))
rewards.append(r)
if done:
break
total_rewards.append(np.sum(rewards))
print("average reward: ", np.mean(total_rewards))
assert(0.8 <= np.mean(total_rewards) <= 0.95)
print("Well done!")
# Measure agent's average reward
mdp = FrozenLakeEnv(slip_chance=0.25)
state_values = value_iteration(mdp)
total_rewards = []
for game_i in range(1000):
s = mdp.reset()
rewards = []
for t in range(100):
s, r, done, _ = mdp.step(
get_optimal_action(mdp, state_values, s, gamma))
rewards.append(r)
if done:
break
total_rewards.append(np.sum(rewards))
print("average reward: ", np.mean(total_rewards))
assert(0.6 <= np.mean(total_rewards) <= 0.7)
print("Well done!")
# Measure agent's average reward
mdp = FrozenLakeEnv(slip_chance=0.2, map_name='8x8')
state_values = value_iteration(mdp)
total_rewards = []
for game_i in range(1000):
s = mdp.reset()
rewards = []
for t in range(100):
s, r, done, _ = mdp.step(
get_optimal_action(mdp, state_values, s, gamma))
rewards.append(r)
if done:
break
total_rewards.append(np.sum(rewards))
print("average reward: ", np.mean(total_rewards))
assert(0.6 <= np.mean(total_rewards) <= 0.8)
print("Well done!")
###Output
iter 0 | diff: 0.80000 | V(start): 0.000
iter 1 | diff: 0.57600 | V(start): 0.000
iter 2 | diff: 0.41472 | V(start): 0.000
iter 3 | diff: 0.29860 | V(start): 0.000
iter 4 | diff: 0.24186 | V(start): 0.000
iter 5 | diff: 0.19349 | V(start): 0.000
iter 6 | diff: 0.15325 | V(start): 0.000
iter 7 | diff: 0.12288 | V(start): 0.000
iter 8 | diff: 0.09930 | V(start): 0.000
iter 9 | diff: 0.08037 | V(start): 0.000
iter 10 | diff: 0.06426 | V(start): 0.000
iter 11 | diff: 0.05129 | V(start): 0.000
iter 12 | diff: 0.04330 | V(start): 0.000
iter 13 | diff: 0.03802 | V(start): 0.033
iter 14 | diff: 0.03332 | V(start): 0.058
iter 15 | diff: 0.02910 | V(start): 0.087
iter 16 | diff: 0.01855 | V(start): 0.106
iter 17 | diff: 0.01403 | V(start): 0.120
iter 18 | diff: 0.00810 | V(start): 0.128
iter 19 | diff: 0.00555 | V(start): 0.133
iter 20 | diff: 0.00321 | V(start): 0.137
iter 21 | diff: 0.00247 | V(start): 0.138
iter 22 | diff: 0.00147 | V(start): 0.139
iter 23 | diff: 0.00104 | V(start): 0.140
iter 24 | diff: 0.00058 | V(start): 0.140
iter 25 | diff: 0.00036 | V(start): 0.141
iter 26 | diff: 0.00024 | V(start): 0.141
iter 27 | diff: 0.00018 | V(start): 0.141
iter 28 | diff: 0.00012 | V(start): 0.141
iter 29 | diff: 0.00007 | V(start): 0.141
iter 30 | diff: 0.00004 | V(start): 0.141
iter 31 | diff: 0.00003 | V(start): 0.141
iter 32 | diff: 0.00001 | V(start): 0.141
iter 33 | diff: 0.00001 | V(start): 0.141
average reward: 0.74
Well done!
###Markdown
Submit to courseraIf your submission doesn't finish in 30 seconds, set `verbose=True` and try again.
###Code
from submit import submit_assigment
submit_assigment(
get_action_value,
get_new_state_value,
get_optimal_action,
value_iteration,
'[email protected]',
'gzLgy6As5x1xGmOM',
verbose=False,
)
###Output
Submitted to Coursera platform. See results on assignment page!
###Markdown
Markov decision processThis week's methods are all built to solve __M__arkov __D__ecision __P__rocesses. In the broadest sense, an MDP is defined by how it changes states and how rewards are computed.State transition is defined by $P(s' |s,a)$ - how likely are you to end at state $s'$ if you take action $a$ from state $s$. Now there's more than one way to define rewards, but we'll use the $r(s,a,s')$ function for convenience. For starters, let's define a simple MDP from this picture:_img by MistWiz (Own work) [Public domain], via Wikimedia Commons_
###Code
transition_probs = {
's0':{
'a0': {'s0': 0.5, 's2': 0.5},
'a1': {'s2': 1}
},
's1':{
'a0': {'s0': 0.7, 's1': 0.1, 's2': 0.2},
'a1': {'s1': 0.95, 's2': 0.05}
},
's2':{
'a0': {'s0': 0.4, 's1': 0.6},
'a1': {'s0': 0.3, 's1': 0.3, 's2':0.4}
}
}
rewards = {
's1': {'a0': {'s0': +5}},
's2': {'a1': {'s0': -1}}
}
from mdp import MDP
mdp = MDP(transition_probs, rewards, initial_state='s0')
###Output
_____no_output_____
###Markdown
We can now use MDP just as any other gym environment:
###Code
print('initial state =', mdp.reset())
next_state, reward, done, info = mdp.step('a1')
print('next_state = %s, reward = %s, done = %s' % (next_state, reward, done))
###Output
_____no_output_____
###Markdown
but it also has other methods that you'll need for Value Iteration
###Code
print("mdp.get_all_states =", mdp.get_all_states())
print("mdp.get_possible_actions('s1') = ", mdp.get_possible_actions('s1'))
print("mdp.get_next_states('s1', 'a0') = ", mdp.get_next_states('s1', 'a0'))
print("mdp.get_reward('s1', 'a0', 's0') = ", mdp.get_reward('s1', 'a0', 's0'))
print("mdp.get_transition_prob('s1', 'a0', 's0') = ", mdp.get_transition_prob('s1', 'a0', 's0'))
###Output
_____no_output_____
###Markdown
Value IterationNow let's build something to solve this MDP. The simplest algorithm so far is __V__alue __I__terationHere's the pseudo-code for VI:---`1.` Initialize $V^{(0)}(s)=0$, for all $s$`2.` For $i=0, 1, 2, \dots$ `3.` $ \quad V_{(i+1)}(s) = \max_a \sum_{s'} P(s' | s,a) \cdot [ r(s,a,s') + \gamma V_{i}(s')]$, for all $s$--- First, let's write a function to compute the state-action value function $Q^{\pi}$, defined as follows$$Q_i(s, a) = \sum_{s'} P(s' | s,a) \cdot [ r(s,a,s') + \gamma V_{i}(s')]$$
###Code
def get_action_value(mdp, state_values, state, action, gamma):
    """ Computes Q(s,a) as in formula above """
    Q = 0
    for next_state in mdp.get_all_states():
        Q += mdp.get_transition_prob(state, action, next_state) \
            * (mdp.get_reward(state, action, next_state)
               + gamma * state_values[next_state])
    return Q
import numpy as np
test_Vs = {s : i for i, s in enumerate(mdp.get_all_states())}
assert np.allclose(get_action_value(mdp, test_Vs, 's2', 'a1', 0.9), 0.69)
assert np.allclose(get_action_value(mdp, test_Vs, 's1', 'a0', 0.9), 3.95)
###Output
_____no_output_____
###Markdown
Using $Q(s,a)$ we can now define the "next" V(s) for value iteration. $$V_{(i+1)}(s) = \max_a \sum_{s'} P(s' | s,a) \cdot [ r(s,a,s') + \gamma V_{i}(s')] = \max_a Q_i(s,a)$$
###Code
def get_new_state_value(mdp, state_values, state, gamma):
    """ Computes next V(s) as per formula above. Please do not change state_values in process. """
    if mdp.is_terminal(state):
        return 0
    return max(get_action_value(mdp, state_values, state, action, gamma)
               for action in mdp.get_possible_actions(state))
test_Vs_copy = dict(test_Vs)
assert np.allclose(get_new_state_value(mdp, test_Vs, 's0', 0.9), 1.8)
assert np.allclose(get_new_state_value(mdp, test_Vs, 's2', 0.9), 0.69)
assert test_Vs == test_Vs_copy, "please do not change state_values in get_new_state_value"
###Output
_____no_output_____
###Markdown
Finally, let's combine everything we wrote into a working value iteration algo.
###Code
# parameters
gamma = 0.9 # discount for MDP
num_iter = 100 # maximum iterations, excluding initialization
min_difference = 0.001 # stop VI if new values are this close to old values (or closer)
# initialize V(s)
state_values = {s : 0 for s in mdp.get_all_states()}
for i in range(num_iter):
# Compute new state values using the functions you defined above. It must be a dict {state : new_V(state)}
    new_state_values = {state: get_new_state_value(mdp, state_values, state, gamma)
                        for state in mdp.get_all_states()}
assert isinstance(new_state_values, dict)
# Compute difference
diff = max(abs(new_state_values[s] - state_values[s]) for s in mdp.get_all_states())
print("iter %4i | diff: %6.5f | "%(i, diff), end="")
print(' '.join("V(%s) = %.3f"%(s, v) for s,v in state_values.items()), end='\n\n')
state_values = new_state_values
if diff < min_difference:
print("Terminated"); break
print("Final state values:", state_values)
assert abs(state_values['s0'] - 8.032) < 0.01
assert abs(state_values['s1'] - 11.169) < 0.01
assert abs(state_values['s2'] - 8.921) < 0.01
###Output
_____no_output_____
###Markdown
Now let's use those $V^{*}(s)$ to find optimal actions in each state $$\pi^*(s) = argmax_a \sum_{s'} P(s' | s,a) \cdot [ r(s,a,s') + \gamma V_{i}(s')] = argmax_a Q_i(s,a)$$ The only difference vs V(s) is that here we take not max but argmax: find the action with the maximum Q(s,a).
###Code
def get_optimal_action(mdp, state_values, state, gamma=0.9):
    """ Finds optimal action using formula above. """
    if mdp.is_terminal(state):
        return None
    actions = mdp.get_possible_actions(state)
    q_values = [get_action_value(mdp, state_values, state, a, gamma) for a in actions]
    return actions[np.argmax(q_values)]
assert get_optimal_action(mdp, state_values, 's0', gamma) == 'a1'
assert get_optimal_action(mdp, state_values, 's1', gamma) == 'a0'
assert get_optimal_action(mdp, state_values, 's2', gamma) == 'a0'
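# (Sketch) print the full greedy policy implied by the learned state values
for s in mdp.get_all_states():
    print(s, '->', get_optimal_action(mdp, state_values, s, gamma))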
# Measure agent's average reward
s = mdp.reset()
rewards = []
for _ in range(10000):
s, r, done, _ = mdp.step(get_optimal_action(mdp, state_values, s, gamma))
rewards.append(r)
print("average reward: ", np.mean(rewards))
assert(0.85 < np.mean(rewards) < 1.0)
###Output
_____no_output_____
###Markdown
Frozen lake
###Code
from mdp import FrozenLakeEnv
mdp = FrozenLakeEnv(slip_chance=0)
mdp.render()
def value_iteration(mdp, state_values=None, gamma = 0.9, num_iter = 1000, min_difference = 1e-5):
""" performs num_iter value iteration steps starting from state_values. Same as before but in a function """
state_values = state_values or {s : 0 for s in mdp.get_all_states()}
for i in range(num_iter):
# Compute new state values using the functions you defined above. It must be a dict {state : new_V(state)}
        new_state_values = {state: get_new_state_value(mdp, state_values, state, gamma)
                            for state in mdp.get_all_states()}
assert isinstance(new_state_values, dict)
# Compute difference
diff = max(abs(new_state_values[s] - state_values[s]) for s in mdp.get_all_states())
print("iter %4i | diff: %6.5f | V(start): %.3f "%(i, diff, new_state_values[mdp._initial_state]))
state_values = new_state_values
if diff < min_difference:
print("Terminated"); break
return state_values
state_values = value_iteration(mdp)
s = mdp.reset()
mdp.render()
for t in range(100):
a = get_optimal_action(mdp, state_values, s, gamma)
print(a, end='\n\n')
s, r, done, _ = mdp.step(a)
mdp.render()
if done: break
###Output
_____no_output_____
###Markdown
Let's visualize!It's usually interesting to see what your algorithm actually learned under the hood. To do so, we'll plot state value functions and optimal actions at each VI step.
###Code
import matplotlib.pyplot as plt
%matplotlib inline
def draw_policy(mdp, state_values):
plt.figure(figsize=(3,3))
h,w = mdp.desc.shape
states = sorted(mdp.get_all_states())
V = np.array([state_values[s] for s in states])
Pi = {s: get_optimal_action(mdp, state_values, s, gamma) for s in states}
plt.imshow(V.reshape(w,h), cmap='gray', interpolation='none', clim=(0,1))
ax = plt.gca()
ax.set_xticks(np.arange(h)-.5)
ax.set_yticks(np.arange(w)-.5)
ax.set_xticklabels([])
ax.set_yticklabels([])
Y, X = np.mgrid[0:4, 0:4]
    a2uv = {'left': (-1, 0), 'down': (0, -1), 'right': (1, 0), 'up': (0, 1)}
for y in range(h):
for x in range(w):
plt.text(x, y, str(mdp.desc[y,x].item()),
color='g', size=12, verticalalignment='center',
horizontalalignment='center', fontweight='bold')
a = Pi[y, x]
if a is None: continue
u, v = a2uv[a]
plt.arrow(x, y,u*.3, -v*.3, color='m', head_width=0.1, head_length=0.1)
plt.grid(color='b', lw=2, ls='-')
plt.show()
state_values = {s : 0 for s in mdp.get_all_states()}
for i in range(10):
print("after iteration %i"%i)
state_values = value_iteration(mdp, state_values, num_iter=1)
draw_policy(mdp, state_values)
# please ignore iter 0 at each step
from IPython.display import clear_output
from time import sleep
mdp = FrozenLakeEnv(map_name='8x8',slip_chance=0.1)
state_values = {s : 0 for s in mdp.get_all_states()}
for i in range(30):
clear_output(True)
print("after iteration %i"%i)
state_values = value_iteration(mdp, state_values, num_iter=1)
draw_policy(mdp, state_values)
sleep(0.5)
# please ignore iter 0 at each step
###Output
_____no_output_____
###Markdown
Massive tests
###Code
mdp = FrozenLakeEnv(slip_chance=0)
state_values = value_iteration(mdp)
total_rewards = []
for game_i in range(1000):
s = mdp.reset()
rewards = []
for t in range(100):
s, r, done, _ = mdp.step(get_optimal_action(mdp, state_values, s, gamma))
rewards.append(r)
if done: break
total_rewards.append(np.sum(rewards))
print("average reward: ", np.mean(total_rewards))
assert(1.0 <= np.mean(total_rewards) <= 1.0)
print("Well done!")
# Measure agent's average reward
mdp = FrozenLakeEnv(slip_chance=0.1)
state_values = value_iteration(mdp)
total_rewards = []
for game_i in range(1000):
s = mdp.reset()
rewards = []
for t in range(100):
s, r, done, _ = mdp.step(get_optimal_action(mdp, state_values, s, gamma))
rewards.append(r)
if done: break
total_rewards.append(np.sum(rewards))
print("average reward: ", np.mean(total_rewards))
assert(0.8 <= np.mean(total_rewards) <= 0.95)
print("Well done!")
# Measure agent's average reward
mdp = FrozenLakeEnv(slip_chance=0.25)
state_values = value_iteration(mdp)
total_rewards = []
for game_i in range(1000):
s = mdp.reset()
rewards = []
for t in range(100):
s, r, done, _ = mdp.step(get_optimal_action(mdp, state_values, s, gamma))
rewards.append(r)
if done: break
total_rewards.append(np.sum(rewards))
print("average reward: ", np.mean(total_rewards))
assert(0.6 <= np.mean(total_rewards) <= 0.7)
print("Well done!")
# Measure agent's average reward
mdp = FrozenLakeEnv(slip_chance=0.2, map_name='8x8')
state_values = value_iteration(mdp)
total_rewards = []
for game_i in range(1000):
s = mdp.reset()
rewards = []
for t in range(100):
s, r, done, _ = mdp.step(get_optimal_action(mdp, state_values, s, gamma))
rewards.append(r)
if done: break
total_rewards.append(np.sum(rewards))
print("average reward: ", np.mean(total_rewards))
assert(0.6 <= np.mean(total_rewards) <= 0.8)
print("Well done!")
###Output
_____no_output_____
###Markdown
Submit to coursera
###Code
from submit import submit_assigment
submit_assigment(
get_action_value,
get_new_state_value,
get_optimal_action,
value_iteration,
<EMAIL>,
<TOKEN>)
###Output
_____no_output_____
###Markdown
Markov decision processThis week's methods are all built to solve __M__arkov __D__ecision __P__rocesses. In the broadest sense, an MDP is defined by how it changes states and how rewards are computed.State transition is defined by $P(s' |s,a)$ - how likely are you to end at state $s'$ if you take action $a$ from state $s$. Now there's more than one way to define rewards, but we'll use $r(s,a,s')$ function for convenience._This notebook is inspired by the awesome_ [CS294](https://github.com/berkeleydeeprlcourse/homework/blob/36a0b58261acde756abd55306fbe63df226bf62b/hw2/HW2.ipynb) _by Berkeley_ For starters, let's define a simple MDP from this picture:
###Code
# If you use Colab, uncomment this please
# !wget -q https://raw.githubusercontent.com/yandexdataschool/Practical_RL/master/week02_value_based/mdp.py
transition_probs = {
's0': {
'a0': {'s0': 0.5, 's2': 0.5},
'a1': {'s2': 1}
},
's1': {
'a0': {'s0': 0.7, 's1': 0.1, 's2': 0.2},
'a1': {'s1': 0.95, 's2': 0.05}
},
's2': {
'a0': {'s0': 0.4, 's2': 0.6},
'a1': {'s0': 0.3, 's1': 0.3, 's2': 0.4}
}
}
rewards = {
's1': {'a0': {'s0': +5}},
's2': {'a1': {'s0': -1}}
}
from mdp import MDP
mdp = MDP(transition_probs, rewards, initial_state='s0')
###Output
_____no_output_____
###Markdown
We can now use MDP just as any other gym environment:
###Code
print('initial state =', mdp.reset())
next_state, reward, done, info = mdp.step('a1')
print('next_state = %s, reward = %s, done = %s' % (next_state, reward, done))
###Output
initial state = s0
next_state = s2, reward = 0.0, done = False
###Markdown
but it also has other methods that you'll need for Value Iteration
###Code
print("mdp.get_all_states =", mdp.get_all_states())
print("mdp.get_possible_actions('s1') = ", mdp.get_possible_actions('s1'))
print("mdp.get_next_states('s1', 'a0') = ", mdp.get_next_states('s1', 'a0'))
print("mdp.get_reward('s1', 'a0', 's0') = ", mdp.get_reward('s1', 'a0', 's0'))
print("mdp.get_transition_prob('s1', 'a0', 's0') = ",
mdp.get_transition_prob('s1', 'a0', 's0'))
###Output
mdp.get_all_states = ('s2', 's1', 's0')
mdp.get_possible_actions('s1') = ('a0', 'a1')
mdp.get_next_states('s1', 'a0') = {'s2': 0.2, 's1': 0.1, 's0': 0.7}
mdp.get_reward('s1', 'a0', 's0') = 5
mdp.get_transition_prob('s1', 'a0', 's0') = 0.7
###Markdown
Optional: Visualizing MDPsYou can also visualize any MDP with the drawing function donated by [neer201](https://github.com/neer201).You have to install graphviz for your system and for Python. For Ubuntu just run:1. `sudo apt-get install graphviz`2. `pip install graphviz`3. restart the notebook__Note:__ Installing graphviz on some OS (esp. Windows) may be tricky. However, you can ignore this part altogether and use the standard visualization.
###Code
from mdp import has_graphviz
from IPython.display import display
print("Graphviz available:", has_graphviz)
if has_graphviz:
from mdp import plot_graph, plot_graph_with_state_values, \
plot_graph_optimal_strategy_and_state_values
display(plot_graph(mdp))
###Output
_____no_output_____
###Markdown
Value IterationNow let's build something to solve this MDP. The simplest algorithm so far is __V__alue __I__terationHere's the pseudo-code for VI:---`1.` Initialize $V^{(0)}(s)=0$, for all $s$`2.` For $i=0, 1, 2, \dots$ `3.` $ \quad V_{(i+1)}(s) = \max_a \sum_{s'} P(s' | s,a) \cdot [ r(s,a,s') + \gamma V_{i}(s')]$, for all $s$--- First, let's write a function to compute the state-action value function $Q^{\pi}$, defined as follows$$Q_i(s, a) = \sum_{s'} P(s' | s,a) \cdot [ r(s,a,s') + \gamma V_{i}(s')]$$
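For example, plugging the toy MDP above into this formula with the test values $V(s_0)=0, V(s_1)=1, V(s_2)=2$ and $\gamma=0.9$: $Q(s_2, a_1) = 0.3 \cdot (-1 + 0.9 \cdot 0) + 0.3 \cdot (0 + 0.9 \cdot 1) + 0.4 \cdot (0 + 0.9 \cdot 2) = 0.69$, which is exactly what the first assert below checks.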
###Code
%%writefile mdp_get_action_value.py
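# This cell writes the function to a separate module because the graphviz
# plotting helper used below (plot_graph_optimal_strategy_and_state_values)
# imports get_action_value from this file - see the ImportError hint in the
# cell that draws the optimal strategy.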
def get_action_value(mdp, state_values, state, action, gamma):
""" Computes Q(s,a) as in formula above """
# YOUR CODE HERE
    # state_values: dict mapping each state to its current value estimate V(s)
Q = 0
for state_prime in mdp.get_next_states(state, action):
reward = mdp.get_reward(state, action, state_prime)
Q += mdp.get_transition_prob(state, action, state_prime) * (reward+gamma*state_values[state_prime])
return Q
from mdp_get_action_value import get_action_value
import numpy as np
test_Vs = {s: i for i, s in enumerate(sorted(mdp.get_all_states()))}
print(test_Vs)
assert np.isclose(get_action_value(mdp, test_Vs, 's2', 'a1', 0.9), 0.69)
assert np.isclose(get_action_value(mdp, test_Vs, 's1', 'a0', 0.9), 3.95)
###Output
{'s2': 2, 's1': 1, 's0': 0}
###Markdown
Using $Q(s,a)$ we can now define the "next" V(s) for value iteration. $$V_{(i+1)}(s) = \max_a \sum_{s'} P(s' | s,a) \cdot [ r(s,a,s') + \gamma V_{i}(s')] = \max_a Q_i(s,a)$$
###Code
def get_new_state_value(mdp, state_values, state, gamma):
""" Computes next V(s) as in formula above. Please do not change state_values in process. """
if mdp.is_terminal(state):
return 0
Qi = []
possible_actions = mdp.get_possible_actions(state)
for action_prime in possible_actions:
Q = get_action_value(mdp, state_values, state, action_prime, gamma)
Qi.append(Q)
return np.max(Qi)
test_Vs_copy = dict(test_Vs)
assert np.isclose(get_new_state_value(mdp, test_Vs, 's0', 0.9), 1.8)
assert np.isclose(get_new_state_value(mdp, test_Vs, 's2', 0.9), 1.08)
assert test_Vs == test_Vs_copy, "please do not change state_values in get_new_state_value"
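# Hand check of the s0 assert above (test_Vs assigns V(s0)=0, V(s1)=1, V(s2)=2):
# Q(s0, a0) = 0.5*(0 + 0.9*0) + 0.5*(0 + 0.9*2) = 0.9
# Q(s0, a1) = 1.0*(0 + 0.9*2) = 1.8
# so the new V(s0) = max(0.9, 1.8) = 1.8.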
###Output
_____no_output_____
###Markdown
Finally, let's combine everything we wrote into a working value iteration algo.
###Code
# parameters
gamma = 0.9 # discount for MDP
num_iter = 100 # maximum iterations, excluding initialization
# stop VI if new values are this close to old values (or closer)
min_difference = 0.001
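# Note: the Bellman backup is a gamma-contraction in the max-norm, so the
# max-difference between successive V's is multiplied by at most gamma each
# sweep - the min_difference stopping rule is therefore guaranteed to fire.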
# initialize V(s)
state_values = {s: 0 for s in mdp.get_all_states()}
if has_graphviz:
display(plot_graph_with_state_values(mdp, state_values))
for i in range(num_iter):
# Compute new state values using the functions you defined above.
# It must be a dict {state : float V_new(state)}
new_state_values = {state:get_new_state_value(mdp, state_values, state, gamma) for state in mdp.get_all_states()}
# print(new_state_values)
assert isinstance(new_state_values, dict)
# Compute difference
diff = max(abs(new_state_values[s] - state_values[s])
for s in mdp.get_all_states())
print("iter %4i | diff: %6.5f | " % (i, diff), end="")
print(' '.join("V(%s) = %.3f" % (s, v) for s, v in state_values.items()))
state_values = new_state_values
if diff < min_difference:
print("Terminated")
break
if has_graphviz:
display(plot_graph_with_state_values(mdp, state_values))
print("Final state values:", state_values)
assert abs(state_values['s0'] - 3.781) < 0.01
assert abs(state_values['s1'] - 7.294) < 0.01
assert abs(state_values['s2'] - 4.202) < 0.01
###Output
Final state values: {'s2': 4.202140275227047, 's1': 7.294006423867229, 's0': 3.7810348735476396}
###Markdown
Now let's use those $V^{*}(s)$ to find optimal actions in each state $$\pi^*(s) = argmax_a \sum_{s'} P(s' | s,a) \cdot [ r(s,a,s') + \gamma V_{i}(s')] = argmax_a Q_i(s,a)$$ The only difference from V(s) is that here we take argmax instead of max: find the action with the maximum Q(s,a).
###Code
def get_optimal_action(mdp, state_values, state, gamma=0.9):
""" Finds optimal action using formula above. """
if mdp.is_terminal(state):
return None
# <YOUR CODE HERE>
Qi = []
possible_actions = mdp.get_possible_actions(state)
# print(possible_actions)
for action_prime in possible_actions:
Q = get_action_value(mdp, state_values, state, action_prime, gamma)
Qi.append(Q)
# print(Qi)
return possible_actions[np.argmax(Qi)]
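# Note: np.argmax returns the first index of the maximum, so ties are broken
# in favour of the action listed first by get_possible_actions.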
assert get_optimal_action(mdp, state_values, 's0', gamma) == 'a1'
assert get_optimal_action(mdp, state_values, 's1', gamma) == 'a0'
assert get_optimal_action(mdp, state_values, 's2', gamma) == 'a1'
if has_graphviz:
try:
display(plot_graph_optimal_strategy_and_state_values(mdp, state_values))
except ImportError:
raise ImportError("Run the cell that starts with \"%%writefile mdp_get_action_value.py\"")
# Measure agent's average reward
s = mdp.reset()
rewards = []
for _ in range(10000):
s, r, done, _ = mdp.step(get_optimal_action(mdp, state_values, s, gamma))
rewards.append(r)
print("average reward: ", np.mean(rewards))
assert(0.40 < np.mean(rewards) < 0.55)
###Output
average reward: 0.4466
###Markdown
Frozen lake
###Code
from mdp import FrozenLakeEnv
mdp = FrozenLakeEnv(slip_chance=0)
mdp.render()
def value_iteration(mdp, state_values=None, gamma=0.9, num_iter=1000, min_difference=1e-5):
""" performs num_iter value iteration steps starting from state_values. Same as before but in a function """
state_values = state_values or {s: 0 for s in mdp.get_all_states()}
for i in range(num_iter):
# Compute new state values using the functions you defined above. It must be a dict {state : new_V(state)}
new_state_values = {state:get_new_state_value(mdp, state_values, state, gamma) for state in mdp.get_all_states()}
assert isinstance(new_state_values, dict)
# Compute difference
diff = max(abs(new_state_values[s] - state_values[s])
for s in mdp.get_all_states())
print("iter %4i | diff: %6.5f | V(start): %.3f " %
(i, diff, new_state_values[mdp._initial_state]))
state_values = new_state_values
if diff < min_difference:
break
return state_values
state_values = value_iteration(mdp)
s = mdp.reset()
mdp.render()
for t in range(100):
a = get_optimal_action(mdp, state_values, s, gamma)
print(a, end='\n\n')
s, r, done, _ = mdp.step(a)
mdp.render()
if done:
break
###Output
*FFF
FHFH
FFFH
HFFG
right
S*FF
FHFH
FFFH
HFFG
right
SF*F
FHFH
FFFH
HFFG
down
SFFF
FH*H
FFFH
HFFG
down
SFFF
FHFH
FF*H
HFFG
down
SFFF
FHFH
FFFH
HF*G
right
SFFF
FHFH
FFFH
HFF*
###Markdown
Let's visualize!It's usually interesting to see what your algorithm actually learned under the hood. To do so, we'll plot state value functions and optimal actions at each VI step.
###Code
import matplotlib.pyplot as plt
%matplotlib inline
def draw_policy(mdp, state_values):
plt.figure(figsize=(3, 3))
h, w = mdp.desc.shape
states = sorted(mdp.get_all_states())
V = np.array([state_values[s] for s in states])
Pi = {s: get_optimal_action(mdp, state_values, s, gamma) for s in states}
plt.imshow(V.reshape(w, h), cmap='gray', interpolation='none', clim=(0, 1))
ax = plt.gca()
ax.set_xticks(np.arange(h)-.5)
ax.set_yticks(np.arange(w)-.5)
ax.set_xticklabels([])
ax.set_yticklabels([])
Y, X = np.mgrid[0:4, 0:4]
a2uv = {'left': (-1, 0), 'down': (0, -1), 'right': (1, 0), 'up': (0, 1)}
for y in range(h):
for x in range(w):
plt.text(x, y, str(mdp.desc[y, x].item()),
color='g', size=12, verticalalignment='center',
horizontalalignment='center', fontweight='bold')
a = Pi[y, x]
if a is None:
continue
u, v = a2uv[a]
plt.arrow(x, y, u*.3, -v*.3, color='m',
head_width=0.1, head_length=0.1)
plt.grid(color='b', lw=2, ls='-')
plt.show()
state_values = {s: 0 for s in mdp.get_all_states()}
for i in range(10):
print("after iteration %i" % i)
state_values = value_iteration(mdp, state_values, num_iter=1)
draw_policy(mdp, state_values)
# please ignore iter 0 at each step
from IPython.display import clear_output
from time import sleep
mdp = FrozenLakeEnv(map_name='8x8', slip_chance=0.1)
state_values = {s: 0 for s in mdp.get_all_states()}
for i in range(30):
clear_output(True)
print("after iteration %i" % i)
state_values = value_iteration(mdp, state_values, num_iter=1)
draw_policy(mdp, state_values)
sleep(0.5)
# please ignore iter 0 at each step
###Output
after iteration 29
iter 0 | diff: 0.00000 | V(start): 0.198
###Markdown
Massive tests
###Code
mdp = FrozenLakeEnv(slip_chance=0)
state_values = value_iteration(mdp)
total_rewards = []
for game_i in range(1000):
s = mdp.reset()
rewards = []
for t in range(100):
s, r, done, _ = mdp.step(
get_optimal_action(mdp, state_values, s, gamma))
rewards.append(r)
if done:
break
total_rewards.append(np.sum(rewards))
print("average reward: ", np.mean(total_rewards))
assert(1.0 <= np.mean(total_rewards) <= 1.0)
print("Well done!")
# Measure agent's average reward
mdp = FrozenLakeEnv(slip_chance=0.1)
state_values = value_iteration(mdp)
total_rewards = []
for game_i in range(1000):
s = mdp.reset()
rewards = []
for t in range(100):
s, r, done, _ = mdp.step(
get_optimal_action(mdp, state_values, s, gamma))
rewards.append(r)
if done:
break
total_rewards.append(np.sum(rewards))
print("average reward: ", np.mean(total_rewards))
assert(0.8 <= np.mean(total_rewards) <= 0.95)
print("Well done!")
# Measure agent's average reward
mdp = FrozenLakeEnv(slip_chance=0.25)
state_values = value_iteration(mdp)
total_rewards = []
for game_i in range(1000):
s = mdp.reset()
rewards = []
for t in range(100):
s, r, done, _ = mdp.step(
get_optimal_action(mdp, state_values, s, gamma))
rewards.append(r)
if done:
break
total_rewards.append(np.sum(rewards))
print("average reward: ", np.mean(total_rewards))
assert(0.6 <= np.mean(total_rewards) <= 0.7)
print("Well done!")
# Measure agent's average reward
mdp = FrozenLakeEnv(slip_chance=0.2, map_name='8x8')
state_values = value_iteration(mdp)
total_rewards = []
for game_i in range(1000):
s = mdp.reset()
rewards = []
for t in range(100):
s, r, done, _ = mdp.step(
get_optimal_action(mdp, state_values, s, gamma))
rewards.append(r)
if done:
break
total_rewards.append(np.sum(rewards))
print("average reward: ", np.mean(total_rewards))
assert(0.6 <= np.mean(total_rewards) <= 0.8)
print("Well done!")
###Output
iter 0 | diff: 0.80000 | V(start): 0.000
iter 1 | diff: 0.57600 | V(start): 0.000
iter 2 | diff: 0.41472 | V(start): 0.000
iter 3 | diff: 0.29860 | V(start): 0.000
iter 4 | diff: 0.24186 | V(start): 0.000
iter 5 | diff: 0.19349 | V(start): 0.000
iter 6 | diff: 0.15325 | V(start): 0.000
iter 7 | diff: 0.12288 | V(start): 0.000
iter 8 | diff: 0.09930 | V(start): 0.000
iter 9 | diff: 0.08037 | V(start): 0.000
iter 10 | diff: 0.06426 | V(start): 0.000
iter 11 | diff: 0.05129 | V(start): 0.000
iter 12 | diff: 0.04330 | V(start): 0.000
iter 13 | diff: 0.03802 | V(start): 0.033
iter 14 | diff: 0.03332 | V(start): 0.058
iter 15 | diff: 0.02910 | V(start): 0.087
iter 16 | diff: 0.01855 | V(start): 0.106
iter 17 | diff: 0.01403 | V(start): 0.120
iter 18 | diff: 0.00810 | V(start): 0.128
iter 19 | diff: 0.00555 | V(start): 0.133
iter 20 | diff: 0.00321 | V(start): 0.137
iter 21 | diff: 0.00247 | V(start): 0.138
iter 22 | diff: 0.00147 | V(start): 0.139
iter 23 | diff: 0.00104 | V(start): 0.140
iter 24 | diff: 0.00058 | V(start): 0.140
iter 25 | diff: 0.00036 | V(start): 0.141
iter 26 | diff: 0.00024 | V(start): 0.141
iter 27 | diff: 0.00018 | V(start): 0.141
iter 28 | diff: 0.00012 | V(start): 0.141
iter 29 | diff: 0.00007 | V(start): 0.141
iter 30 | diff: 0.00004 | V(start): 0.141
iter 31 | diff: 0.00003 | V(start): 0.141
iter 32 | diff: 0.00001 | V(start): 0.141
iter 33 | diff: 0.00001 | V(start): 0.141
average reward: 0.748
Well done!
###Markdown
Submit to courseraIf your submission doesn't finish in 30 seconds, set `verbose=True` and try again.
###Code
from submit import submit_assigment
submit_assigment(
get_action_value,
get_new_state_value,
get_optimal_action,
value_iteration,
"",
"",
verbose=False,
)
###Output
Submitted to Coursera platform. See results on assignment page!
###Markdown
Markov decision processThis week's methods are all built to solve __M__arkov __D__ecision __P__rocesses. In the broadest sense, an MDP is defined by how it changes states and how rewards are computed.State transition is defined by $P(s' |s,a)$ - how likely are you to end at state $s'$ if you take action $a$ from state $s$. Now there's more than one way to define rewards, but we'll use $r(s,a,s')$ function for convenience. For starters, let's define a simple MDP from this picture:_img by MistWiz (Own work) [Public domain], via Wikimedia Commons_
###Code
transition_probs = {
's0': {
'a0': {'s0': 0.5, 's2': 0.5},
'a1': {'s2': 1}
},
's1': {
'a0': {'s0': 0.7, 's1': 0.1, 's2': 0.2},
'a1': {'s1': 0.95, 's2': 0.05}
},
's2': {
'a0': {'s0': 0.4, 's1': 0.6},
'a1': {'s0': 0.3, 's1': 0.3, 's2': 0.4}
}
}
rewards = {
's1': {'a0': {'s0': +5}},
's2': {'a1': {'s0': -1}}
}
from mdp import MDP
mdp = MDP(transition_probs, rewards, initial_state='s0')
###Output
_____no_output_____
###Markdown
We can now use MDP just as any other gym environment:
###Code
print('initial state =', mdp.reset())
next_state, reward, done, info = mdp.step('a1')
print('next_state = %s, reward = %s, done = %s' % (next_state, reward, done))
###Output
initial state = s0
next_state = s2, reward = 0.0, done = False
###Markdown
but it also has other methods that you'll need for Value Iteration
###Code
print("mdp.get_all_states =", mdp.get_all_states())
print("mdp.get_possible_actions('s1') = ", mdp.get_possible_actions('s1'))
print("mdp.get_next_states('s1', 'a0') = ", mdp.get_next_states('s1', 'a0'))
print("mdp.get_reward('s1', 'a0', 's0') = ", mdp.get_reward('s1', 'a0', 's0'))
print("mdp.get_transition_prob('s1', 'a0', 's0') = ", mdp.get_transition_prob('s1', 'a0', 's0'))
###Output
mdp.get_all_states = ('s0', 's1', 's2')
mdp.get_possible_actions('s1') = ('a0', 'a1')
mdp.get_next_states('s1', 'a0') = {'s0': 0.7, 's1': 0.1, 's2': 0.2}
mdp.get_reward('s1', 'a0', 's0') = 5
mdp.get_transition_prob('s1', 'a0', 's0') = 0.7
###Markdown
Value IterationNow let's build something to solve this MDP. The simplest algorithm so far is __V__alue __I__terationHere's the pseudo-code for VI:---`1.` Initialize $V^{(0)}(s)=0$, for all $s$`2.` For $i=0, 1, 2, \dots$ `3.` $ \quad V_{(i+1)}(s) = \max_a \sum_{s'} P(s' | s,a) \cdot [ r(s,a,s') + \gamma V_{i}(s')]$, for all $s$--- First, let's write a function to compute the state-action value function $Q^{\pi}$, defined as follows$$Q_i(s, a) = \sum_{s'} P(s' | s,a) \cdot [ r(s,a,s') + \gamma V_{i}(s')]$$
###Code
def get_action_value(mdp, state_values, state, action, gamma):
""" Computes Q(s,a) as in formula above """
Q = 0
for s, p in mdp.get_next_states(state, action).items():
Q += p * (mdp.get_reward(state, action, s) + gamma * state_values[s])
return Q
import numpy as np
test_Vs = {s: i for i, s in enumerate(sorted(mdp.get_all_states()))}
assert np.allclose(get_action_value(mdp, test_Vs, 's2', 'a1', 0.9), 0.69)
assert np.allclose(get_action_value(mdp, test_Vs, 's1', 'a0', 0.9), 3.95)
###Output
_____no_output_____
###Markdown
Using $Q(s,a)$ we can now define the "next" V(s) for value iteration. $$V_{(i+1)}(s) = \max_a \sum_{s'} P(s' | s,a) \cdot [ r(s,a,s') + \gamma V_{i}(s')] = \max_a Q_i(s,a)$$
###Code
def get_new_state_value(mdp, state_values, state, gamma):
""" Computes next V(s) as per formula above. Please do not change state_values in process. """
if mdp.is_terminal(state): return 0
return max([get_action_value(mdp, state_values, state, a, gamma) for a in mdp.get_possible_actions(state)])
test_Vs_copy = dict(test_Vs)
assert np.allclose(get_new_state_value(mdp, test_Vs, 's0', 0.9), 1.8)
assert np.allclose(get_new_state_value(mdp, test_Vs, 's2', 0.9), 0.69)
assert test_Vs == test_Vs_copy, "please do not change state_values in get_new_state_value"
###Output
_____no_output_____
###Markdown
Finally, let's combine everything we wrote into a working value iteration algo.
###Code
# parameters
gamma = 0.9 # discount for MDP
num_iter = 100 # maximum iterations, excluding initialization
min_difference = 0.001 # stop VI if new values are this close to old values (or closer)
# initialize V(s)
state_values = {s: 0 for s in mdp.get_all_states()}
for i in range(num_iter):
# Compute new state values using the functions you defined above.
# It must be a dict {state : new_V(state)}
new_state_values = {state: get_new_state_value(mdp, state_values, state, gamma) for state in mdp.get_all_states()}
assert isinstance(new_state_values, dict)
# Compute difference
diff = max(abs(new_state_values[s] - state_values[s]) for s in mdp.get_all_states())
print("iter %4i | diff: %6.5f | " % (i, diff), end="")
print(' '.join("V(%s) = %.3f" % (s, v) for s, v in state_values.items()), end='\n\n')
state_values = new_state_values
if diff < min_difference:
print("Terminated")
break
print("Final state values:", state_values)
assert abs(state_values['s0'] - 8.032) < 0.01
assert abs(state_values['s1'] - 11.169) < 0.01
assert abs(state_values['s2'] - 8.921) < 0.01
###Output
Final state values: {'s0': 8.023123818663871, 's1': 11.163174814980803, 's2': 8.915559364985523}
###Markdown
Now let's use those $V^{*}(s)$ to find optimal actions in each state $$\pi^*(s) = argmax_a \sum_{s'} P(s' | s,a) \cdot [ r(s,a,s') + \gamma V_{i}(s')] = argmax_a Q_i(s,a)$$ The only difference from V(s) is that here we take argmax instead of max: find the action with the maximum Q(s,a).
###Code
def get_optimal_action(mdp, state_values, state, gamma=0.9):
""" Finds optimal action using formula above. """
if mdp.is_terminal(state):
return None
Q = {action: get_action_value(mdp, state_values, state, action, gamma) for action in mdp.get_possible_actions(state)}
return sorted(Q.keys(), key=lambda x: Q[x], reverse=True)[0]
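    # Equivalent, simpler idiom: max(Q, key=Q.get) - O(n) instead of O(n log n).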
assert get_optimal_action(mdp, state_values, 's0', gamma) == 'a1'
assert get_optimal_action(mdp, state_values, 's1', gamma) == 'a0'
assert get_optimal_action(mdp, state_values, 's2', gamma) == 'a0'
# Measure agent's average reward
s = mdp.reset()
rewards = []
for _ in range(10000):
s, r, done, _ = mdp.step(get_optimal_action(mdp, state_values, s, gamma))
rewards.append(r)
print("average reward: ", np.mean(rewards))
assert 0.85 < np.mean(rewards) < 1.0
###Output
average reward: 0.9275
###Markdown
Frozen lake
###Code
from mdp import FrozenLakeEnv
mdp = FrozenLakeEnv(slip_chance=0)
mdp.render()
def value_iteration(mdp, state_values=None, gamma=0.9, num_iter=1000, min_difference=1e-5):
""" performs num_iter value iteration steps starting from state_values. Same as before but in a function """
state_values = state_values or {s: 0 for s in mdp.get_all_states()}
for i in range(num_iter):
# Compute new state values using the functions you defined above.
# It must be a dict {state : new_V(state)}
new_state_values = {state: get_new_state_value(mdp, state_values, state, gamma) for state in mdp.get_all_states()}
assert isinstance(new_state_values, dict)
# Compute difference
diff = max(abs(new_state_values[s] - state_values[s]) for s in mdp.get_all_states())
print("iter %4i | diff: %6.5f | V(start): %.3f " % (i, diff, new_state_values[mdp._initial_state]))
state_values = new_state_values
if diff < min_difference:
print("Terminated")
break
return state_values
state_values = value_iteration(mdp)
s = mdp.reset()
mdp.render()
for t in range(100):
a = get_optimal_action(mdp, state_values, s, gamma)
print(a, end='\n\n')
s, r, done, _ = mdp.step(a)
mdp.render()
if done: break
###Output
*FFF
FHFH
FFFH
HFFG
down
SFFF
*HFH
FFFH
HFFG
down
SFFF
FHFH
*FFH
HFFG
right
SFFF
FHFH
F*FH
HFFG
down
SFFF
FHFH
FFFH
H*FG
right
SFFF
FHFH
FFFH
HF*G
right
SFFF
FHFH
FFFH
HFF*
###Markdown
Let's visualize!It's usually interesting to see what your algorithm actually learned under the hood. To do so, we'll plot state value functions and optimal actions at each VI step.
###Code
import matplotlib.pyplot as plt
%matplotlib inline
def draw_policy(mdp, state_values):
plt.figure(figsize=(3, 3))
h, w = mdp.desc.shape
states = sorted(mdp.get_all_states())
V = np.array([state_values[s] for s in states])
Pi = {s: get_optimal_action(mdp, state_values, s, gamma) for s in states}
plt.imshow(V.reshape(w, h), cmap='gray', interpolation='none', clim=(0, 1))
ax = plt.gca()
ax.set_xticks(np.arange(h) - .5)
ax.set_yticks(np.arange(w) - .5)
ax.set_xticklabels([])
ax.set_yticklabels([])
Y, X = np.mgrid[0:4, 0:4]
    a2uv = {'left': (-1, 0), 'down': (0, -1), 'right': (1, 0), 'up': (0, 1)}
for y in range(h):
for x in range(w):
plt.text(x, y, str(mdp.desc[y, x].item()),
color='g', size=12, verticalalignment='center',
horizontalalignment='center', fontweight='bold')
a = Pi[y, x]
if a is None: continue
u, v = a2uv[a]
plt.arrow(x, y, u * .3, -v * .3, color='m', head_width=0.1, head_length=0.1)
plt.grid(color='b', lw=2, ls='-')
plt.show()
state_values = {s: 0 for s in mdp.get_all_states()}
for i in range(10):
print("after iteration %i" % i)
state_values = value_iteration(mdp, state_values, num_iter=1)
draw_policy(mdp, state_values)
# please ignore iter 0 at each step
from time import sleep
from IPython.display import clear_output
mdp = FrozenLakeEnv(map_name='8x8', slip_chance=0.1)
state_values = {s: 0 for s in mdp.get_all_states()}
for i in range(30):
clear_output(True)
print("after iteration %i" % i)
state_values = value_iteration(mdp, state_values, num_iter=1)
draw_policy(mdp, state_values)
sleep(0.5)
# please ignore iter 0 at each step
###Output
after iteration 29
iter 0 | diff: 0.00000 | V(start): 0.198
Terminated
###Markdown
Massive tests
###Code
mdp = FrozenLakeEnv(slip_chance=0)
state_values = value_iteration(mdp)
total_rewards = []
for game_i in range(1000):
s = mdp.reset()
rewards = []
for t in range(100):
s, r, done, _ = mdp.step(get_optimal_action(mdp, state_values, s, gamma))
rewards.append(r)
if done: break
total_rewards.append(np.sum(rewards))
print("average reward: ", np.mean(total_rewards))
assert 1.0 <= np.mean(total_rewards) <= 1.0
print("Well done!")
# Measure agent's average reward
mdp = FrozenLakeEnv(slip_chance=0.1)
state_values = value_iteration(mdp)
total_rewards = []
for game_i in range(1000):
s = mdp.reset()
rewards = []
for t in range(100):
s, r, done, _ = mdp.step(get_optimal_action(mdp, state_values, s, gamma))
rewards.append(r)
if done: break
total_rewards.append(np.sum(rewards))
print("average reward: ", np.mean(total_rewards))
assert 0.8 <= np.mean(total_rewards) <= 0.95
print("Well done!")
# Measure agent's average reward
mdp = FrozenLakeEnv(slip_chance=0.25)
state_values = value_iteration(mdp)
total_rewards = []
for game_i in range(1000):
s = mdp.reset()
rewards = []
for t in range(100):
s, r, done, _ = mdp.step(get_optimal_action(mdp, state_values, s, gamma))
rewards.append(r)
if done: break
total_rewards.append(np.sum(rewards))
print("average reward: ", np.mean(total_rewards))
assert 0.6 <= np.mean(total_rewards) <= 0.7
print("Well done!")
# Measure agent's average reward
mdp = FrozenLakeEnv(slip_chance=0.2, map_name='8x8')
state_values = value_iteration(mdp)
total_rewards = []
for game_i in range(1000):
s = mdp.reset()
rewards = []
for t in range(100):
s, r, done, _ = mdp.step(get_optimal_action(mdp, state_values, s, gamma))
rewards.append(r)
if done: break
total_rewards.append(np.sum(rewards))
print("average reward: ", np.mean(total_rewards))
assert 0.6 <= np.mean(total_rewards) <= 0.8
print("Well done!")
###Output
iter 0 | diff: 0.80000 | V(start): 0.000
iter 1 | diff: 0.57600 | V(start): 0.000
iter 2 | diff: 0.41472 | V(start): 0.000
iter 3 | diff: 0.29860 | V(start): 0.000
iter 4 | diff: 0.24186 | V(start): 0.000
iter 5 | diff: 0.19349 | V(start): 0.000
iter 6 | diff: 0.15325 | V(start): 0.000
iter 7 | diff: 0.12288 | V(start): 0.000
iter 8 | diff: 0.09930 | V(start): 0.000
iter 9 | diff: 0.08037 | V(start): 0.000
iter 10 | diff: 0.06426 | V(start): 0.000
iter 11 | diff: 0.05129 | V(start): 0.000
iter 12 | diff: 0.04330 | V(start): 0.000
iter 13 | diff: 0.03802 | V(start): 0.033
iter 14 | diff: 0.03332 | V(start): 0.058
iter 15 | diff: 0.02910 | V(start): 0.087
iter 16 | diff: 0.01855 | V(start): 0.106
iter 17 | diff: 0.01403 | V(start): 0.120
iter 18 | diff: 0.00810 | V(start): 0.128
iter 19 | diff: 0.00555 | V(start): 0.133
iter 20 | diff: 0.00321 | V(start): 0.137
iter 21 | diff: 0.00247 | V(start): 0.138
iter 22 | diff: 0.00147 | V(start): 0.139
iter 23 | diff: 0.00104 | V(start): 0.140
iter 24 | diff: 0.00058 | V(start): 0.140
iter 25 | diff: 0.00036 | V(start): 0.141
iter 26 | diff: 0.00024 | V(start): 0.141
iter 27 | diff: 0.00018 | V(start): 0.141
iter 28 | diff: 0.00012 | V(start): 0.141
iter 29 | diff: 0.00007 | V(start): 0.141
iter 30 | diff: 0.00004 | V(start): 0.141
iter 31 | diff: 0.00003 | V(start): 0.141
iter 32 | diff: 0.00001 | V(start): 0.141
iter 33 | diff: 0.00001 | V(start): 0.141
Terminated
average reward: 0.737
Well done!
###Markdown
Submit to courseraIf your submission doesn't finish in 30 seconds, set `verbose=True` and try again.
###Code
from submit import submit_assigment
submit_assigment(
get_action_value,
get_new_state_value,
get_optimal_action,
value_iteration,
"[email protected]",
"QUavadG12vbE12ht",
verbose=False,
)
###Output
Submitted to Coursera platform. See results on assignment page!
###Markdown
Markov decision processThis week's methods are all built to solve __M__arkov __D__ecision __P__rocesses. In the broadest sense, an MDP is defined by how it changes states and how rewards are computed.State transition is defined by $P(s' |s,a)$ - how likely are you to end at state $s'$ if you take action $a$ from state $s$. Now there's more than one way to define rewards, but we'll use $r(s,a,s')$ function for convenience. For starters, let's define a simple MDP from this picture:_img by MistWiz (Own work) [Public domain], via Wikimedia Commons_
###Code
transition_probs = {
's0':{
'a0': {'s0': 0.5, 's2': 0.5},
'a1': {'s2': 1}
},
's1':{
'a0': {'s0': 0.7, 's1': 0.1, 's2': 0.2},
'a1': {'s1': 0.95, 's2': 0.05}
},
's2':{
'a0': {'s0': 0.4, 's1': 0.6},
'a1': {'s0': 0.3, 's1': 0.3, 's2':0.4}
}
}
rewards = {
's1': {'a0': {'s0': +5}},
's2': {'a1': {'s0': -1}}
}
from mdp import MDP
mdp = MDP(transition_probs, rewards, initial_state='s0')
###Output
_____no_output_____
###Markdown
We can now use MDP just as any other gym environment:
###Code
print('initial state =', mdp.reset())
next_state, reward, done, info = mdp.step('a1')
print('next_state = %s, reward = %s, done = %s' % (next_state, reward, done))
###Output
initial state = s0
next_state = s2, reward = 0.0, done = False
###Markdown
but it also has other methods that you'll need for Value Iteration
###Code
print("mdp.get_all_states =", mdp.get_all_states())
print("mdp.get_possible_actions('s1') = ", mdp.get_possible_actions('s1'))
print("mdp.get_next_states('s1', 'a0') = ", mdp.get_next_states('s1', 'a0'))
print("mdp.get_reward('s1', 'a0', 's0') = ", mdp.get_reward('s1', 'a0', 's0'))
print("mdp.get_transition_prob('s1', 'a0', 's0') = ", mdp.get_transition_prob('s1', 'a0', 's0'))
###Output
mdp.get_all_states = ('s2', 's0', 's1')
mdp.get_possible_actions('s1') = ('a0', 'a1')
mdp.get_next_states('s1', 'a0') = {'s2': 0.2, 's0': 0.7, 's1': 0.1}
mdp.get_reward('s1', 'a0', 's0') = 5
mdp.get_transition_prob('s1', 'a0', 's0') = 0.7
###Markdown
Value IterationNow let's build something to solve this MDP. The simplest algorithm so far is __V__alue __I__terationHere's the pseudo-code for VI:---`1.` Initialize $V^{(0)}(s)=0$, for all $s$`2.` For $i=0, 1, 2, \dots$ `3.` $ \quad V_{(i+1)}(s) = \max_a \sum_{s'} P(s' | s,a) \cdot [ r(s,a,s') + \gamma V_{i}(s')]$, for all $s$--- First, let's write a function to compute the state-action value function $Q^{\pi}$, defined as follows$$Q_i(s, a) = \sum_{s'} P(s' | s,a) \cdot [ r(s,a,s') + \gamma V_{i}(s')]$$
###Code
def get_action_value(mdp, state_values, state, action, gamma):
""" Computes Q(s,a) as in formula above """
    Q = 0.
    state_probs = mdp.get_next_states(state, action)
for next_s, p in state_probs.items():
r = float(mdp.get_reward(state, action, next_s))
v = float(state_values[next_s])
Q += p*(r+gamma*v)
return Q
import numpy as np
test_Vs = {s : i for i, s in enumerate(sorted(mdp.get_all_states()))}
assert np.allclose(get_action_value(mdp, test_Vs, 's2', 'a1', 0.9), 0.69)
assert np.allclose(get_action_value(mdp, test_Vs, 's1', 'a0', 0.9), 3.95)
###Output
_____no_output_____
###Markdown
Using $Q(s,a)$ we can now define the "next" V(s) for value iteration. $$V_{(i+1)}(s) = \max_a \sum_{s'} P(s' | s,a) \cdot [ r(s,a,s') + \gamma V_{i}(s')] = \max_a Q_i(s,a)$$
###Code
def get_new_state_value(mdp, state_values, state, gamma):
""" Computes next V(s) as per formula above. Please do not change state_values in process. """
if mdp.is_terminal(state): return 0
actions = mdp.get_possible_actions(state)
values = [get_action_value(mdp, state_values, state, action, gamma) \
for action in actions]
new_state_value = np.max(values)
return new_state_value
test_Vs_copy = dict(test_Vs)
assert np.allclose(get_new_state_value(mdp, test_Vs, 's0', 0.9), 1.8)
assert np.allclose(get_new_state_value(mdp, test_Vs, 's2', 0.9), 0.69)
assert test_Vs == test_Vs_copy, "please do not change state_values in get_new_state_value"
###Output
_____no_output_____
###Markdown
Finally, let's combine everything we wrote into a working value iteration algo.
###Code
def get_action_value(mdp, state_values, state, action, gamma):
""" Computes Q(s,a) as in formula above """
Q = 0.
state_probs = mdp.get_next_states(state, action)
for next_s, p in state_probs.items():
r = float(mdp.get_reward(state, action, next_s))
v = float(state_values[next_s])
Q += p*(r+gamma*v)
return Q
# parameters
gamma = 0.9 # discount for MDP
num_iter = 100 # maximum iterations, excluding initialization
min_difference = 0.001 # stop VI if new values are this close to old values (or closer)
# initialize V(s)
state_values = {s : 0 for s in mdp.get_all_states()}
print(state_values)
for i in range(num_iter):
# Compute new state values using the functions you defined above. It must be a dict {state : new_V(state)}
new_state_values = {s : get_new_state_value(mdp, state_values, s, gamma) for s in mdp.get_all_states()}
assert isinstance(new_state_values, dict)
# Compute difference
diff = max(abs(new_state_values[s] - state_values[s]) for s in mdp.get_all_states())
print("iter %4i | diff: %6.5f | "%(i, diff), end="")
print(' '.join("V(%s) = %.3f"%(s, v) for s,v in state_values.items()), end='\n\n')
state_values = new_state_values
if diff < min_difference:
print("Terminated"); break
print("Final state values:", state_values)
assert abs(state_values['s0'] - 8.032) < 0.01
assert abs(state_values['s1'] - 11.169) < 0.01
assert abs(state_values['s2'] - 8.921) < 0.01
###Output
Final state values: {'s2': 8.915559364985523, 's0': 8.023123818663871, 's1': 11.163174814980803}
###Markdown
Now let's use those $V^{*}(s)$ to find optimal actions in each state $$\pi^*(s) = argmax_a \sum_{s'} P(s' | s,a) \cdot [ r(s,a,s') + \gamma V_{i}(s')] = argmax_a Q_i(s,a)$$ The only difference from V(s) is that here we take argmax instead of max: find the action with the maximum Q(s,a).
###Code
def get_optimal_action(mdp, state_values, state, gamma=0.9):
""" Finds optimal action using formula above. """
if mdp.is_terminal(state): return None
action_values = {action : get_action_value(mdp, state_values, state, action, gamma) \
for action in mdp.get_possible_actions(state)}
    # print(action_values)
optimal_action = max(action_values, key=action_values.get)
return optimal_action
get_optimal_action(mdp, state_values, 's0', gamma)
d = {'a':0, 'b':1, 'c':3}
d.get('a')
get_optimal_action(mdp, state_values, 's0', gamma)
get_optimal_action(mdp, state_values, 's1', gamma)
assert get_optimal_action(mdp, state_values, 's0', gamma) == 'a1'
assert get_optimal_action(mdp, state_values, 's1', gamma) == 'a0'
assert get_optimal_action(mdp, state_values, 's2', gamma) == 'a0'
# Measure agent's average reward
s = mdp.reset()
rewards = []
for _ in range(10000):
s, r, done, _ = mdp.step(get_optimal_action(mdp, state_values, s, gamma))
rewards.append(r)
print("average reward: ", np.mean(rewards))
assert(0.85 < np.mean(rewards) < 1.0)
###Output
{'a0': 7.622407432642227, 'a1': 8.02400342848697}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 11.164054424803906, 'a1': 9.945714638232934}
{'a0': 7.622407432642227, 'a1': 8.02400342848697}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 7.622407432642227, 'a1': 8.02400342848697}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 7.622407432642227, 'a1': 8.02400342848697}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 11.164054424803906, 'a1': 9.945714638232934}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 11.164054424803906, 'a1': 9.945714638232934}
{'a0': 7.622407432642227, 'a1': 8.02400342848697}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 7.622407432642227, 'a1': 8.02400342848697}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 7.622407432642227, 'a1': 8.02400342848697}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 11.164054424803906, 'a1': 9.945714638232934}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 11.164054424803906, 'a1': 9.945714638232934}
{'a0': 7.622407432642227, 'a1': 8.02400342848697}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 11.164054424803906, 'a1': 9.945714638232934}
{'a0': 11.164054424803906, 'a1': 9.945714638232934}
{'a0': 7.622407432642227, 'a1': 8.02400342848697}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 7.622407432642227, 'a1': 8.02400342848697}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 11.164054424803906, 'a1': 9.945714638232934}
{'a0': 7.622407432642227, 'a1': 8.02400342848697}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 11.164054424803906, 'a1': 9.945714638232934}
{'a0': 7.622407432642227, 'a1': 8.02400342848697}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 11.164054424803906, 'a1': 9.945714638232934}
{'a0': 7.622407432642227, 'a1': 8.02400342848697}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 11.164054424803906, 'a1': 9.945714638232934}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 11.164054424803906, 'a1': 9.945714638232934}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 11.164054424803906, 'a1': 9.945714638232934}
{'a0': 7.622407432642227, 'a1': 8.02400342848697}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 7.622407432642227, 'a1': 8.02400342848697}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 7.622407432642227, 'a1': 8.02400342848697}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 11.164054424803906, 'a1': 9.945714638232934}
{'a0': 7.622407432642227, 'a1': 8.02400342848697}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 11.164054424803906, 'a1': 9.945714638232934}
{'a0': 7.622407432642227, 'a1': 8.02400342848697}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 11.164054424803906, 'a1': 9.945714638232934}
{'a0': 7.622407432642227, 'a1': 8.02400342848697}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 11.164054424803906, 'a1': 9.945714638232934}
{'a0': 7.622407432642227, 'a1': 8.02400342848697}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 11.164054424803906, 'a1': 9.945714638232934}
{'a0': 7.622407432642227, 'a1': 8.02400342848697}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 7.622407432642227, 'a1': 8.02400342848697}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 7.622407432642227, 'a1': 8.02400342848697}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 7.622407432642227, 'a1': 8.02400342848697}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 7.622407432642227, 'a1': 8.02400342848697}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 11.164054424803906, 'a1': 9.945714638232934}
{'a0': 7.622407432642227, 'a1': 8.02400342848697}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 7.622407432642227, 'a1': 8.02400342848697}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 7.622407432642227, 'a1': 8.02400342848697}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 7.622407432642227, 'a1': 8.02400342848697}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 7.622407432642227, 'a1': 8.02400342848697}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 11.164054424803906, 'a1': 9.945714638232934}
{'a0': 7.622407432642227, 'a1': 8.02400342848697}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 11.164054424803906, 'a1': 9.945714638232934}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 11.164054424803906, 'a1': 9.945714638232934}
{'a0': 7.622407432642227, 'a1': 8.02400342848697}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 11.164054424803906, 'a1': 9.945714638232934}
{'a0': 7.622407432642227, 'a1': 8.02400342848697}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 11.164054424803906, 'a1': 9.945714638232934}
{'a0': 7.622407432642227, 'a1': 8.02400342848697}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 7.622407432642227, 'a1': 8.02400342848697}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 11.164054424803906, 'a1': 9.945714638232934}
{'a0': 7.622407432642227, 'a1': 8.02400342848697}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 7.622407432642227, 'a1': 8.02400342848697}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 7.622407432642227, 'a1': 8.02400342848697}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 7.622407432642227, 'a1': 8.02400342848697}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 11.164054424803906, 'a1': 9.945714638232934}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 11.164054424803906, 'a1': 9.945714638232934}
{'a0': 7.622407432642227, 'a1': 8.02400342848697}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 11.164054424803906, 'a1': 9.945714638232934}
{'a0': 7.622407432642227, 'a1': 8.02400342848697}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 7.622407432642227, 'a1': 8.02400342848697}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 11.164054424803906, 'a1': 9.945714638232934}
{'a0': 7.622407432642227, 'a1': 8.02400342848697}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 11.164054424803906, 'a1': 9.945714638232934}
{'a0': 7.622407432642227, 'a1': 8.02400342848697}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 11.164054424803906, 'a1': 9.945714638232934}
{'a0': 7.622407432642227, 'a1': 8.02400342848697}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 11.164054424803906, 'a1': 9.945714638232934}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 11.164054424803906, 'a1': 9.945714638232934}
{'a0': 7.622407432642227, 'a1': 8.02400342848697}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 7.622407432642227, 'a1': 8.02400342848697}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 11.164054424803906, 'a1': 9.945714638232934}
{'a0': 7.622407432642227, 'a1': 8.02400342848697}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 11.164054424803906, 'a1': 9.945714638232934}
{'a0': 11.164054424803906, 'a1': 9.945714638232934}
{'a0': 7.622407432642227, 'a1': 8.02400342848697}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 7.622407432642227, 'a1': 8.02400342848697}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 11.164054424803906, 'a1': 9.945714638232934}
{'a0': 7.622407432642227, 'a1': 8.02400342848697}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 11.164054424803906, 'a1': 9.945714638232934}
{'a0': 11.164054424803906, 'a1': 9.945714638232934}
{'a0': 11.164054424803906, 'a1': 9.945714638232934}
{'a0': 7.622407432642227, 'a1': 8.02400342848697}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 7.622407432642227, 'a1': 8.02400342848697}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 7.622407432642227, 'a1': 8.02400342848697}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 11.164054424803906, 'a1': 9.945714638232934}
{'a0': 7.622407432642227, 'a1': 8.02400342848697}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 7.622407432642227, 'a1': 8.02400342848697}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 11.164054424803906, 'a1': 9.945714638232934}
{'a0': 7.622407432642227, 'a1': 8.02400342848697}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 7.622407432642227, 'a1': 8.02400342848697}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 11.164054424803906, 'a1': 9.945714638232934}
{'a0': 11.164054424803906, 'a1': 9.945714638232934}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 7.622407432642227, 'a1': 8.02400342848697}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 11.164054424803906, 'a1': 9.945714638232934}
{'a0': 7.622407432642227, 'a1': 8.02400342848697}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 11.164054424803906, 'a1': 9.945714638232934}
{'a0': 7.622407432642227, 'a1': 8.02400342848697}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 7.622407432642227, 'a1': 8.02400342848697}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 7.622407432642227, 'a1': 8.02400342848697}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 11.164054424803906, 'a1': 9.945714638232934}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 11.164054424803906, 'a1': 9.945714638232934}
{'a0': 7.622407432642227, 'a1': 8.02400342848697}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 11.164054424803906, 'a1': 9.945714638232934}
{'a0': 7.622407432642227, 'a1': 8.02400342848697}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 11.164054424803906, 'a1': 9.945714638232934}
{'a0': 7.622407432642227, 'a1': 8.02400342848697}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 7.622407432642227, 'a1': 8.02400342848697}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 7.622407432642227, 'a1': 8.02400342848697}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 11.164054424803906, 'a1': 9.945714638232934}
{'a0': 7.622407432642227, 'a1': 8.02400342848697}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 11.164054424803906, 'a1': 9.945714638232934}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 7.622407432642227, 'a1': 8.02400342848697}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 7.622407432642227, 'a1': 8.02400342848697}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 11.164054424803906, 'a1': 9.945714638232934}
{'a0': 7.622407432642227, 'a1': 8.02400342848697}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 7.622407432642227, 'a1': 8.02400342848697}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 11.164054424803906, 'a1': 9.945714638232934}
{'a0': 11.164054424803906, 'a1': 9.945714638232934}
{'a0': 7.622407432642227, 'a1': 8.02400342848697}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 7.622407432642227, 'a1': 8.02400342848697}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 7.622407432642227, 'a1': 8.02400342848697}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 11.164054424803906, 'a1': 9.945714638232934}
{'a0': 7.622407432642227, 'a1': 8.02400342848697}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 7.622407432642227, 'a1': 8.02400342848697}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 7.622407432642227, 'a1': 8.02400342848697}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 7.622407432642227, 'a1': 8.02400342848697}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 7.622407432642227, 'a1': 8.02400342848697}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 11.164054424803906, 'a1': 9.945714638232934}
{'a0': 7.622407432642227, 'a1': 8.02400342848697}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 11.164054424803906, 'a1': 9.945714638232934}
{'a0': 7.622407432642227, 'a1': 8.02400342848697}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 11.164054424803906, 'a1': 9.945714638232934}
{'a0': 11.164054424803906, 'a1': 9.945714638232934}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 7.622407432642227, 'a1': 8.02400342848697}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 11.164054424803906, 'a1': 9.945714638232934}
{'a0': 11.164054424803906, 'a1': 9.945714638232934}
{'a0': 11.164054424803906, 'a1': 9.945714638232934}
{'a0': 11.164054424803906, 'a1': 9.945714638232934}
{'a0': 7.622407432642227, 'a1': 8.02400342848697}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 11.164054424803906, 'a1': 9.945714638232934}
{'a0': 7.622407432642227, 'a1': 8.02400342848697}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 11.164054424803906, 'a1': 9.945714638232934}
{'a0': 7.622407432642227, 'a1': 8.02400342848697}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 11.164054424803906, 'a1': 9.945714638232934}
{'a0': 7.622407432642227, 'a1': 8.02400342848697}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 11.164054424803906, 'a1': 9.945714638232934}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 11.164054424803906, 'a1': 9.945714638232934}
{'a0': 7.622407432642227, 'a1': 8.02400342848697}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 7.622407432642227, 'a1': 8.02400342848697}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 11.164054424803906, 'a1': 9.945714638232934}
{'a0': 7.622407432642227, 'a1': 8.02400342848697}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 7.622407432642227, 'a1': 8.02400342848697}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 11.164054424803906, 'a1': 9.945714638232934}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 11.164054424803906, 'a1': 9.945714638232934}
{'a0': 7.622407432642227, 'a1': 8.02400342848697}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 7.622407432642227, 'a1': 8.02400342848697}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 7.622407432642227, 'a1': 8.02400342848697}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 11.164054424803906, 'a1': 9.945714638232934}
{'a0': 7.622407432642227, 'a1': 8.02400342848697}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 7.622407432642227, 'a1': 8.02400342848697}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 7.622407432642227, 'a1': 8.02400342848697}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 11.164054424803906, 'a1': 9.945714638232934}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 7.622407432642227, 'a1': 8.02400342848697}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 11.164054424803906, 'a1': 9.945714638232934}
{'a0': 7.622407432642227, 'a1': 8.02400342848697}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 7.622407432642227, 'a1': 8.02400342848697}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 11.164054424803906, 'a1': 9.945714638232934}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 11.164054424803906, 'a1': 9.945714638232934}
{'a0': 7.622407432642227, 'a1': 8.02400342848697}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 7.622407432642227, 'a1': 8.02400342848697}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 7.622407432642227, 'a1': 8.02400342848697}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 11.164054424803906, 'a1': 9.945714638232934}
{'a0': 7.622407432642227, 'a1': 8.02400342848697}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 11.164054424803906, 'a1': 9.945714638232934}
{'a0': 11.164054424803906, 'a1': 9.945714638232934}
{'a0': 7.622407432642227, 'a1': 8.02400342848697}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 7.622407432642227, 'a1': 8.02400342848697}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 7.622407432642227, 'a1': 8.02400342848697}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 7.622407432642227, 'a1': 8.02400342848697}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 7.622407432642227, 'a1': 8.02400342848697}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 11.164054424803906, 'a1': 9.945714638232934}
{'a0': 11.164054424803906, 'a1': 9.945714638232934}
{'a0': 7.622407432642227, 'a1': 8.02400342848697}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 11.164054424803906, 'a1': 9.945714638232934}
{'a0': 7.622407432642227, 'a1': 8.02400342848697}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 11.164054424803906, 'a1': 9.945714638232934}
{'a0': 7.622407432642227, 'a1': 8.02400342848697}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 7.622407432642227, 'a1': 8.02400342848697}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 11.164054424803906, 'a1': 9.945714638232934}
{'a0': 7.622407432642227, 'a1': 8.02400342848697}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 7.622407432642227, 'a1': 8.02400342848697}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 7.622407432642227, 'a1': 8.02400342848697}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 7.622407432642227, 'a1': 8.02400342848697}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 11.164054424803906, 'a1': 9.945714638232934}
{'a0': 7.622407432642227, 'a1': 8.02400342848697}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 7.622407432642227, 'a1': 8.02400342848697}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 7.622407432642227, 'a1': 8.02400342848697}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 11.164054424803906, 'a1': 9.945714638232934}
{'a0': 7.622407432642227, 'a1': 8.02400342848697}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 7.622407432642227, 'a1': 8.02400342848697}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 11.164054424803906, 'a1': 9.945714638232934}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 11.164054424803906, 'a1': 9.945714638232934}
{'a0': 7.622407432642227, 'a1': 8.02400342848697}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 11.164054424803906, 'a1': 9.945714638232934}
{'a0': 7.622407432642227, 'a1': 8.02400342848697}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 11.164054424803906, 'a1': 9.945714638232934}
{'a0': 7.622407432642227, 'a1': 8.02400342848697}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 7.622407432642227, 'a1': 8.02400342848697}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 11.164054424803906, 'a1': 9.945714638232934}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 11.164054424803906, 'a1': 9.945714638232934}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 7.622407432642227, 'a1': 8.02400342848697}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 7.622407432642227, 'a1': 8.02400342848697}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 11.164054424803906, 'a1': 9.945714638232934}
{'a0': 7.622407432642227, 'a1': 8.02400342848697}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 11.164054424803906, 'a1': 9.945714638232934}
{'a0': 7.622407432642227, 'a1': 8.02400342848697}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 7.622407432642227, 'a1': 8.02400342848697}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 7.622407432642227, 'a1': 8.02400342848697}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 7.622407432642227, 'a1': 8.02400342848697}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 11.164054424803906, 'a1': 9.945714638232934}
{'a0': 7.622407432642227, 'a1': 8.02400342848697}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 11.164054424803906, 'a1': 9.945714638232934}
{'a0': 7.622407432642227, 'a1': 8.02400342848697}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 11.164054424803906, 'a1': 9.945714638232934}
{'a0': 7.622407432642227, 'a1': 8.02400342848697}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 7.622407432642227, 'a1': 8.02400342848697}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 11.164054424803906, 'a1': 9.945714638232934}
{'a0': 11.164054424803906, 'a1': 9.945714638232934}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 11.164054424803906, 'a1': 9.945714638232934}
{'a0': 7.622407432642227, 'a1': 8.02400342848697}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 7.622407432642227, 'a1': 8.02400342848697}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 11.164054424803906, 'a1': 9.945714638232934}
{'a0': 7.622407432642227, 'a1': 8.02400342848697}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 7.622407432642227, 'a1': 8.02400342848697}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 11.164054424803906, 'a1': 9.945714638232934}
{'a0': 7.622407432642227, 'a1': 8.02400342848697}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 7.622407432642227, 'a1': 8.02400342848697}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 11.164054424803906, 'a1': 9.945714638232934}
{'a0': 7.622407432642227, 'a1': 8.02400342848697}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 11.164054424803906, 'a1': 9.945714638232934}
{'a0': 7.622407432642227, 'a1': 8.02400342848697}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 7.622407432642227, 'a1': 8.02400342848697}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 11.164054424803906, 'a1': 9.945714638232934}
{'a0': 7.622407432642227, 'a1': 8.02400342848697}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 7.622407432642227, 'a1': 8.02400342848697}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 7.622407432642227, 'a1': 8.02400342848697}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 7.622407432642227, 'a1': 8.02400342848697}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 7.622407432642227, 'a1': 8.02400342848697}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 7.622407432642227, 'a1': 8.02400342848697}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 7.622407432642227, 'a1': 8.02400342848697}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 11.164054424803906, 'a1': 9.945714638232934}
{'a0': 7.622407432642227, 'a1': 8.02400342848697}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 11.164054424803906, 'a1': 9.945714638232934}
{'a0': 7.622407432642227, 'a1': 8.02400342848697}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 11.164054424803906, 'a1': 9.945714638232934}
{'a0': 11.164054424803906, 'a1': 9.945714638232934}
{'a0': 7.622407432642227, 'a1': 8.02400342848697}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 11.164054424803906, 'a1': 9.945714638232934}
{'a0': 7.622407432642227, 'a1': 8.02400342848697}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 7.622407432642227, 'a1': 8.02400342848697}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 7.622407432642227, 'a1': 8.02400342848697}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 11.164054424803906, 'a1': 9.945714638232934}
{'a0': 7.622407432642227, 'a1': 8.02400342848697}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 7.622407432642227, 'a1': 8.02400342848697}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 11.164054424803906, 'a1': 9.945714638232934}
{'a0': 7.622407432642227, 'a1': 8.02400342848697}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 11.164054424803906, 'a1': 9.945714638232934}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 11.164054424803906, 'a1': 9.945714638232934}
{'a0': 11.164054424803906, 'a1': 9.945714638232934}
{'a0': 7.622407432642227, 'a1': 8.02400342848697}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 11.164054424803906, 'a1': 9.945714638232934}
{'a0': 7.622407432642227, 'a1': 8.02400342848697}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 7.622407432642227, 'a1': 8.02400342848697}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 7.622407432642227, 'a1': 8.02400342848697}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 11.164054424803906, 'a1': 9.945714638232934}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 7.622407432642227, 'a1': 8.02400342848697}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 11.164054424803906, 'a1': 9.945714638232934}
{'a0': 7.622407432642227, 'a1': 8.02400342848697}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 11.164054424803906, 'a1': 9.945714638232934}
{'a0': 11.164054424803906, 'a1': 9.945714638232934}
{'a0': 7.622407432642227, 'a1': 8.02400342848697}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 7.622407432642227, 'a1': 8.02400342848697}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 11.164054424803906, 'a1': 9.945714638232934}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 11.164054424803906, 'a1': 9.945714638232934}
{'a0': 7.622407432642227, 'a1': 8.02400342848697}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 7.622407432642227, 'a1': 8.02400342848697}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 11.164054424803906, 'a1': 9.945714638232934}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 7.622407432642227, 'a1': 8.02400342848697}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 7.622407432642227, 'a1': 8.02400342848697}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 11.164054424803906, 'a1': 9.945714638232934}
{'a0': 7.622407432642227, 'a1': 8.02400342848697}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 11.164054424803906, 'a1': 9.945714638232934}
{'a0': 11.164054424803906, 'a1': 9.945714638232934}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 11.164054424803906, 'a1': 9.945714638232934}
{'a0': 7.622407432642227, 'a1': 8.02400342848697}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 7.622407432642227, 'a1': 8.02400342848697}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 11.164054424803906, 'a1': 9.945714638232934}
{'a0': 7.622407432642227, 'a1': 8.02400342848697}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 11.164054424803906, 'a1': 9.945714638232934}
{'a0': 11.164054424803906, 'a1': 9.945714638232934}
{'a0': 7.622407432642227, 'a1': 8.02400342848697}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 11.164054424803906, 'a1': 9.945714638232934}
{'a0': 11.164054424803906, 'a1': 9.945714638232934}
{'a0': 7.622407432642227, 'a1': 8.02400342848697}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 7.622407432642227, 'a1': 8.02400342848697}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 11.164054424803906, 'a1': 9.945714638232934}
{'a0': 7.622407432642227, 'a1': 8.02400342848697}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 11.164054424803906, 'a1': 9.945714638232934}
{'a0': 7.622407432642227, 'a1': 8.02400342848697}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 7.622407432642227, 'a1': 8.02400342848697}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 11.164054424803906, 'a1': 9.945714638232934}
{'a0': 7.622407432642227, 'a1': 8.02400342848697}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 7.622407432642227, 'a1': 8.02400342848697}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 7.622407432642227, 'a1': 8.02400342848697}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 11.164054424803906, 'a1': 9.945714638232934}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 7.622407432642227, 'a1': 8.02400342848697}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 7.622407432642227, 'a1': 8.02400342848697}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 7.622407432642227, 'a1': 8.02400342848697}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 11.164054424803906, 'a1': 9.945714638232934}
{'a0': 7.622407432642227, 'a1': 8.02400342848697}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 7.622407432642227, 'a1': 8.02400342848697}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 11.164054424803906, 'a1': 9.945714638232934}
{'a0': 7.622407432642227, 'a1': 8.02400342848697}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 7.622407432642227, 'a1': 8.02400342848697}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 11.164054424803906, 'a1': 9.945714638232934}
{'a0': 7.622407432642227, 'a1': 8.02400342848697}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 7.622407432642227, 'a1': 8.02400342848697}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 7.622407432642227, 'a1': 8.02400342848697}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 7.622407432642227, 'a1': 8.02400342848697}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 7.622407432642227, 'a1': 8.02400342848697}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 7.622407432642227, 'a1': 8.02400342848697}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 7.622407432642227, 'a1': 8.02400342848697}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 7.622407432642227, 'a1': 8.02400342848697}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 11.164054424803906, 'a1': 9.945714638232934}
{'a0': 7.622407432642227, 'a1': 8.02400342848697}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 7.622407432642227, 'a1': 8.02400342848697}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 11.164054424803906, 'a1': 9.945714638232934}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 11.164054424803906, 'a1': 9.945714638232934}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 7.622407432642227, 'a1': 8.02400342848697}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 11.164054424803906, 'a1': 9.945714638232934}
{'a0': 7.622407432642227, 'a1': 8.02400342848697}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 7.622407432642227, 'a1': 8.02400342848697}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 11.164054424803906, 'a1': 9.945714638232934}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 7.622407432642227, 'a1': 8.02400342848697}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 11.164054424803906, 'a1': 9.945714638232934}
{'a0': 7.622407432642227, 'a1': 8.02400342848697}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 7.622407432642227, 'a1': 8.02400342848697}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 7.622407432642227, 'a1': 8.02400342848697}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 7.622407432642227, 'a1': 8.02400342848697}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 11.164054424803906, 'a1': 9.945714638232934}
{'a0': 7.622407432642227, 'a1': 8.02400342848697}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 7.622407432642227, 'a1': 8.02400342848697}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 11.164054424803906, 'a1': 9.945714638232934}
{'a0': 7.622407432642227, 'a1': 8.02400342848697}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 11.164054424803906, 'a1': 9.945714638232934}
{'a0': 7.622407432642227, 'a1': 8.02400342848697}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 7.622407432642227, 'a1': 8.02400342848697}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 11.164054424803906, 'a1': 9.945714638232934}
{'a0': 7.622407432642227, 'a1': 8.02400342848697}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 11.164054424803906, 'a1': 9.945714638232934}
{'a0': 7.622407432642227, 'a1': 8.02400342848697}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 7.622407432642227, 'a1': 8.02400342848697}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 7.622407432642227, 'a1': 8.02400342848697}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 7.622407432642227, 'a1': 8.02400342848697}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 11.164054424803906, 'a1': 9.945714638232934}
{'a0': 7.622407432642227, 'a1': 8.02400342848697}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 11.164054424803906, 'a1': 9.945714638232934}
{'a0': 7.622407432642227, 'a1': 8.02400342848697}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 7.622407432642227, 'a1': 8.02400342848697}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 11.164054424803906, 'a1': 9.945714638232934}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 11.164054424803906, 'a1': 9.945714638232934}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 11.164054424803906, 'a1': 9.945714638232934}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 7.622407432642227, 'a1': 8.02400342848697}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 11.164054424803906, 'a1': 9.945714638232934}
{'a0': 7.622407432642227, 'a1': 8.02400342848697}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 7.622407432642227, 'a1': 8.02400342848697}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 11.164054424803906, 'a1': 9.945714638232934}
{'a0': 7.622407432642227, 'a1': 8.02400342848697}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 11.164054424803906, 'a1': 9.945714638232934}
{'a0': 7.622407432642227, 'a1': 8.02400342848697}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 11.164054424803906, 'a1': 9.945714638232934}
{'a0': 7.622407432642227, 'a1': 8.02400342848697}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 7.622407432642227, 'a1': 8.02400342848697}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 11.164054424803906, 'a1': 9.945714638232934}
{'a0': 7.622407432642227, 'a1': 8.02400342848697}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 11.164054424803906, 'a1': 9.945714638232934}
{'a0': 7.622407432642227, 'a1': 8.02400342848697}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 11.164054424803906, 'a1': 9.945714638232934}
{'a0': 7.622407432642227, 'a1': 8.02400342848697}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 11.164054424803906, 'a1': 9.945714638232934}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 11.164054424803906, 'a1': 9.945714638232934}
{'a0': 11.164054424803906, 'a1': 9.945714638232934}
{'a0': 7.622407432642227, 'a1': 8.02400342848697}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 11.164054424803906, 'a1': 9.945714638232934}
{'a0': 7.622407432642227, 'a1': 8.02400342848697}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 11.164054424803906, 'a1': 9.945714638232934}
{'a0': 7.622407432642227, 'a1': 8.02400342848697}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 11.164054424803906, 'a1': 9.945714638232934}
{'a0': 7.622407432642227, 'a1': 8.02400342848697}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 11.164054424803906, 'a1': 9.945714638232934}
{'a0': 7.622407432642227, 'a1': 8.02400342848697}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 11.164054424803906, 'a1': 9.945714638232934}
{'a0': 7.622407432642227, 'a1': 8.02400342848697}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 11.164054424803906, 'a1': 9.945714638232934}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 11.164054424803906, 'a1': 9.945714638232934}
{'a0': 11.164054424803906, 'a1': 9.945714638232934}
{'a0': 7.622407432642227, 'a1': 8.02400342848697}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 7.622407432642227, 'a1': 8.02400342848697}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 7.622407432642227, 'a1': 8.02400342848697}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 11.164054424803906, 'a1': 9.945714638232934}
{'a0': 7.622407432642227, 'a1': 8.02400342848697}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 7.622407432642227, 'a1': 8.02400342848697}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 11.164054424803906, 'a1': 9.945714638232934}
{'a0': 7.622407432642227, 'a1': 8.02400342848697}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 7.622407432642227, 'a1': 8.02400342848697}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 11.164054424803906, 'a1': 9.945714638232934}
{'a0': 11.164054424803906, 'a1': 9.945714638232934}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 7.622407432642227, 'a1': 8.02400342848697}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 11.164054424803906, 'a1': 9.945714638232934}
{'a0': 7.622407432642227, 'a1': 8.02400342848697}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 11.164054424803906, 'a1': 9.945714638232934}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 11.164054424803906, 'a1': 9.945714638232934}
{'a0': 7.622407432642227, 'a1': 8.02400342848697}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 7.622407432642227, 'a1': 8.02400342848697}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 11.164054424803906, 'a1': 9.945714638232934}
{'a0': 7.622407432642227, 'a1': 8.02400342848697}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 11.164054424803906, 'a1': 9.945714638232934}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 11.164054424803906, 'a1': 9.945714638232934}
{'a0': 7.622407432642227, 'a1': 8.02400342848697}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 7.622407432642227, 'a1': 8.02400342848697}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 11.164054424803906, 'a1': 9.945714638232934}
{'a0': 7.622407432642227, 'a1': 8.02400342848697}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 11.164054424803906, 'a1': 9.945714638232934}
{'a0': 7.622407432642227, 'a1': 8.02400342848697}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 11.164054424803906, 'a1': 9.945714638232934}
{'a0': 7.622407432642227, 'a1': 8.02400342848697}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 7.622407432642227, 'a1': 8.02400342848697}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 7.622407432642227, 'a1': 8.02400342848697}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 7.622407432642227, 'a1': 8.02400342848697}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 7.622407432642227, 'a1': 8.02400342848697}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 11.164054424803906, 'a1': 9.945714638232934}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 7.622407432642227, 'a1': 8.02400342848697}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 7.622407432642227, 'a1': 8.02400342848697}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 11.164054424803906, 'a1': 9.945714638232934}
{'a0': 7.622407432642227, 'a1': 8.02400342848697}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 7.622407432642227, 'a1': 8.02400342848697}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 11.164054424803906, 'a1': 9.945714638232934}
{'a0': 7.622407432642227, 'a1': 8.02400342848697}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 7.622407432642227, 'a1': 8.02400342848697}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 7.622407432642227, 'a1': 8.02400342848697}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 11.164054424803906, 'a1': 9.945714638232934}
{'a0': 7.622407432642227, 'a1': 8.02400342848697}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 7.622407432642227, 'a1': 8.02400342848697}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 11.164054424803906, 'a1': 9.945714638232934}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 7.622407432642227, 'a1': 8.02400342848697}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 7.622407432642227, 'a1': 8.02400342848697}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 7.622407432642227, 'a1': 8.02400342848697}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 11.164054424803906, 'a1': 9.945714638232934}
{'a0': 7.622407432642227, 'a1': 8.02400342848697}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 7.622407432642227, 'a1': 8.02400342848697}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 7.622407432642227, 'a1': 8.02400342848697}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 11.164054424803906, 'a1': 9.945714638232934}
{'a0': 7.622407432642227, 'a1': 8.02400342848697}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 11.164054424803906, 'a1': 9.945714638232934}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 11.164054424803906, 'a1': 9.945714638232934}
{'a0': 7.622407432642227, 'a1': 8.02400342848697}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 11.164054424803906, 'a1': 9.945714638232934}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 11.164054424803906, 'a1': 9.945714638232934}
{'a0': 7.622407432642227, 'a1': 8.02400342848697}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 11.164054424803906, 'a1': 9.945714638232934}
{'a0': 7.622407432642227, 'a1': 8.02400342848697}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 11.164054424803906, 'a1': 9.945714638232934}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 11.164054424803906, 'a1': 9.945714638232934}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 7.622407432642227, 'a1': 8.02400342848697}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 11.164054424803906, 'a1': 9.945714638232934}
{'a0': 7.622407432642227, 'a1': 8.02400342848697}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 11.164054424803906, 'a1': 9.945714638232934}
{'a0': 7.622407432642227, 'a1': 8.02400342848697}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 11.164054424803906, 'a1': 9.945714638232934}
{'a0': 7.622407432642227, 'a1': 8.02400342848697}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 11.164054424803906, 'a1': 9.945714638232934}
{'a0': 7.622407432642227, 'a1': 8.02400342848697}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 11.164054424803906, 'a1': 9.945714638232934}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 11.164054424803906, 'a1': 9.945714638232934}
{'a0': 7.622407432642227, 'a1': 8.02400342848697}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 11.164054424803906, 'a1': 9.945714638232934}
{'a0': 7.622407432642227, 'a1': 8.02400342848697}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 11.164054424803906, 'a1': 9.945714638232934}
{'a0': 7.622407432642227, 'a1': 8.02400342848697}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 11.164054424803906, 'a1': 9.945714638232934}
{'a0': 7.622407432642227, 'a1': 8.02400342848697}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 7.622407432642227, 'a1': 8.02400342848697}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 11.164054424803906, 'a1': 9.945714638232934}
{'a0': 7.622407432642227, 'a1': 8.02400342848697}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 7.622407432642227, 'a1': 8.02400342848697}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 11.164054424803906, 'a1': 9.945714638232934}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 11.164054424803906, 'a1': 9.945714638232934}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 7.622407432642227, 'a1': 8.02400342848697}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 7.622407432642227, 'a1': 8.02400342848697}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 11.164054424803906, 'a1': 9.945714638232934}
{'a0': 7.622407432642227, 'a1': 8.02400342848697}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 11.164054424803906, 'a1': 9.945714638232934}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 7.622407432642227, 'a1': 8.02400342848697}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 7.622407432642227, 'a1': 8.02400342848697}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 11.164054424803906, 'a1': 9.945714638232934}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 11.164054424803906, 'a1': 9.945714638232934}
{'a0': 11.164054424803906, 'a1': 9.945714638232934}
{'a0': 7.622407432642227, 'a1': 8.02400342848697}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 7.622407432642227, 'a1': 8.02400342848697}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 7.622407432642227, 'a1': 8.02400342848697}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 11.164054424803906, 'a1': 9.945714638232934}
{'a0': 7.622407432642227, 'a1': 8.02400342848697}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 7.622407432642227, 'a1': 8.02400342848697}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 11.164054424803906, 'a1': 9.945714638232934}
{'a0': 7.622407432642227, 'a1': 8.02400342848697}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 11.164054424803906, 'a1': 9.945714638232934}
{'a0': 7.622407432642227, 'a1': 8.02400342848697}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 11.164054424803906, 'a1': 9.945714638232934}
{'a0': 7.622407432642227, 'a1': 8.02400342848697}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 7.622407432642227, 'a1': 8.02400342848697}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 11.164054424803906, 'a1': 9.945714638232934}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 7.622407432642227, 'a1': 8.02400342848697}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 7.622407432642227, 'a1': 8.02400342848697}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 11.164054424803906, 'a1': 9.945714638232934}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 11.164054424803906, 'a1': 9.945714638232934}
{'a0': 7.622407432642227, 'a1': 8.02400342848697}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 11.164054424803906, 'a1': 9.945714638232934}
{'a0': 11.164054424803906, 'a1': 9.945714638232934}
{'a0': 7.622407432642227, 'a1': 8.02400342848697}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 11.164054424803906, 'a1': 9.945714638232934}
{'a0': 7.622407432642227, 'a1': 8.02400342848697}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 11.164054424803906, 'a1': 9.945714638232934}
{'a0': 7.622407432642227, 'a1': 8.02400342848697}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 11.164054424803906, 'a1': 9.945714638232934}
{'a0': 7.622407432642227, 'a1': 8.02400342848697}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 11.164054424803906, 'a1': 9.945714638232934}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 11.164054424803906, 'a1': 9.945714638232934}
{'a0': 7.622407432642227, 'a1': 8.02400342848697}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 11.164054424803906, 'a1': 9.945714638232934}
{'a0': 7.622407432642227, 'a1': 8.02400342848697}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 11.164054424803906, 'a1': 9.945714638232934}
{'a0': 7.622407432642227, 'a1': 8.02400342848697}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 11.164054424803906, 'a1': 9.945714638232934}
{'a0': 7.622407432642227, 'a1': 8.02400342848697}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 11.164054424803906, 'a1': 9.945714638232934}
{'a0': 7.622407432642227, 'a1': 8.02400342848697}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 11.164054424803906, 'a1': 9.945714638232934}
{'a0': 7.622407432642227, 'a1': 8.02400342848697}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 11.164054424803906, 'a1': 9.945714638232934}
{'a0': 7.622407432642227, 'a1': 8.02400342848697}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 7.622407432642227, 'a1': 8.02400342848697}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 7.622407432642227, 'a1': 8.02400342848697}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 7.622407432642227, 'a1': 8.02400342848697}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 11.164054424803906, 'a1': 9.945714638232934}
{'a0': 7.622407432642227, 'a1': 8.02400342848697}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 11.164054424803906, 'a1': 9.945714638232934}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 7.622407432642227, 'a1': 8.02400342848697}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 7.622407432642227, 'a1': 8.02400342848697}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 7.622407432642227, 'a1': 8.02400342848697}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 11.164054424803906, 'a1': 9.945714638232934}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 7.622407432642227, 'a1': 8.02400342848697}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 7.622407432642227, 'a1': 8.02400342848697}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 11.164054424803906, 'a1': 9.945714638232934}
{'a0': 7.622407432642227, 'a1': 8.02400342848697}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 11.164054424803906, 'a1': 9.945714638232934}
{'a0': 7.622407432642227, 'a1': 8.02400342848697}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 7.622407432642227, 'a1': 8.02400342848697}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 7.622407432642227, 'a1': 8.02400342848697}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 11.164054424803906, 'a1': 9.945714638232934}
{'a0': 7.622407432642227, 'a1': 8.02400342848697}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 7.622407432642227, 'a1': 8.02400342848697}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 11.164054424803906, 'a1': 9.945714638232934}
{'a0': 7.622407432642227, 'a1': 8.02400342848697}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 11.164054424803906, 'a1': 9.945714638232934}
{'a0': 7.622407432642227, 'a1': 8.02400342848697}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 11.164054424803906, 'a1': 9.945714638232934}
{'a0': 7.622407432642227, 'a1': 8.02400342848697}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 7.622407432642227, 'a1': 8.02400342848697}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
{'a0': 11.164054424803906, 'a1': 9.945714638232934}
{'a0': 7.622407432642227, 'a1': 8.02400342848697}
{'a0': 8.916438974808628, 'a1': 8.089902002478851}
###Markdown
Frozen lake
###Code
from mdp import FrozenLakeEnv
mdp = FrozenLakeEnv(slip_chance=0)
mdp.render()
def value_iteration(mdp, state_values=None, gamma = 0.9, num_iter = 1000, min_difference = 1e-5):
""" performs num_iter value iteration steps starting from state_values. Same as before but in a function """
state_values = state_values or {s : 0 for s in mdp.get_all_states()}
for i in range(num_iter):
# Compute new state values using the functions you defined above. It must be a dict {state : new_V(state)}
new_state_values = {s : get_new_state_value(mdp, state_values, s, gamma) for s in mdp.get_all_states()}
assert isinstance(new_state_values, dict)
# Compute difference
diff = max(abs(new_state_values[s] - state_values[s]) for s in mdp.get_all_states())
print("iter %4i | diff: %6.5f | V(start): %.3f "%(i, diff, new_state_values[mdp._initial_state]))
state_values = new_state_values
if diff < min_difference:
print("Terminated"); break
return state_values
state_values = value_iteration(mdp)
s = mdp.reset()
mdp.render()
for t in range(100):
a = get_optimal_action(mdp, state_values, s, gamma)
print(a, end='\n\n')
s, r, done, _ = mdp.step(a)
mdp.render()
if done: break
###Output
*FFF
FHFH
FFFH
HFFG
right
S*FF
FHFH
FFFH
HFFG
right
SF*F
FHFH
FFFH
HFFG
down
SFFF
FH*H
FFFH
HFFG
down
SFFF
FHFH
FF*H
HFFG
down
SFFF
FHFH
FFFH
HF*G
right
SFFF
FHFH
FFFH
HFF*
###Markdown
Let's visualize!

It's usually interesting to see what your algorithm actually learned under the hood. To do so, we'll plot state value functions and optimal actions at each VI step.
###Code
import matplotlib.pyplot as plt
%matplotlib inline
def draw_policy(mdp, state_values):
plt.figure(figsize=(3,3))
h,w = mdp.desc.shape
states = sorted(mdp.get_all_states())
V = np.array([state_values[s] for s in states])
Pi = {s: get_optimal_action(mdp, state_values, s, gamma) for s in states}
plt.imshow(V.reshape(w,h), cmap='gray', interpolation='none', clim=(0,1))
ax = plt.gca()
ax.set_xticks(np.arange(h)-.5)
ax.set_yticks(np.arange(w)-.5)
ax.set_xticklabels([])
ax.set_yticklabels([])
Y, X = np.mgrid[0:4, 0:4]
    a2uv = {'left': (-1, 0), 'down': (0, -1), 'right': (1, 0), 'up': (0, 1)}
for y in range(h):
for x in range(w):
plt.text(x, y, str(mdp.desc[y,x].item()),
color='g', size=12, verticalalignment='center',
horizontalalignment='center', fontweight='bold')
a = Pi[y, x]
if a is None: continue
u, v = a2uv[a]
plt.arrow(x, y,u*.3, -v*.3, color='m', head_width=0.1, head_length=0.1)
plt.grid(color='b', lw=2, ls='-')
plt.show()
state_values = {s : 0 for s in mdp.get_all_states()}
for i in range(10):
print("after iteration %i"%i)
state_values = value_iteration(mdp, state_values, num_iter=1)
draw_policy(mdp, state_values)
# please ignore iter 0 at each step
from IPython.display import clear_output
from time import sleep
mdp = FrozenLakeEnv(map_name='8x8',slip_chance=0.1)
state_values = {s : 0 for s in mdp.get_all_states()}
for i in range(30):
clear_output(True)
print("after iteration %i"%i)
state_values = value_iteration(mdp, state_values, num_iter=1)
draw_policy(mdp, state_values)
sleep(0.5)
# please ignore iter 0 at each step
###Output
after iteration 29
iter 0 | diff: 0.00000 | V(start): 0.198
Terminated
###Markdown
Massive tests
###Code
mdp = FrozenLakeEnv(slip_chance=0)
state_values = value_iteration(mdp)
total_rewards = []
for game_i in range(1000):
s = mdp.reset()
rewards = []
for t in range(100):
s, r, done, _ = mdp.step(get_optimal_action(mdp, state_values, s, gamma))
rewards.append(r)
if done: break
total_rewards.append(np.sum(rewards))
print("average reward: ", np.mean(total_rewards))
assert(1.0 <= np.mean(total_rewards) <= 1.0)
print("Well done!")
# Measure agent's average reward
mdp = FrozenLakeEnv(slip_chance=0.1)
state_values = value_iteration(mdp)
total_rewards = []
for game_i in range(1000):
s = mdp.reset()
rewards = []
for t in range(100):
s, r, done, _ = mdp.step(get_optimal_action(mdp, state_values, s, gamma))
rewards.append(r)
if done: break
total_rewards.append(np.sum(rewards))
print("average reward: ", np.mean(total_rewards))
assert(0.8 <= np.mean(total_rewards) <= 0.95)
print("Well done!")
# Measure agent's average reward
mdp = FrozenLakeEnv(slip_chance=0.25)
state_values = value_iteration(mdp)
total_rewards = []
for game_i in range(1000):
s = mdp.reset()
rewards = []
for t in range(100):
s, r, done, _ = mdp.step(get_optimal_action(mdp, state_values, s, gamma))
rewards.append(r)
if done: break
total_rewards.append(np.sum(rewards))
print("average reward: ", np.mean(total_rewards))
assert(0.6 <= np.mean(total_rewards) <= 0.7)
print("Well done!")
# Measure agent's average reward
mdp = FrozenLakeEnv(slip_chance=0.2, map_name='8x8')
state_values = value_iteration(mdp)
total_rewards = []
for game_i in range(1000):
s = mdp.reset()
rewards = []
for t in range(100):
s, r, done, _ = mdp.step(get_optimal_action(mdp, state_values, s, gamma))
rewards.append(r)
if done: break
total_rewards.append(np.sum(rewards))
print("average reward: ", np.mean(total_rewards))
assert(0.6 <= np.mean(total_rewards) <= 0.8)
print("Well done!")
###Output
_____no_output_____
###Markdown
Submit to coursera
###Code
transition_probs = {
's0':{
'a0': {'s0': 0.5, 's2': 0.5},
'a1': {'s2': 1}
},
's1':{
'a0': {'s0': 0.7, 's1': 0.1, 's2': 0.2},
'a1': {'s1': 0.95, 's2': 0.05}
},
's2':{
'a0': {'s0': 0.4, 's1': 0.6},
'a1': {'s0': 0.3, 's1': 0.3, 's2':0.4}
}
}
rewards = {
's1': {'a0': {'s0': +5}},
's2': {'a1': {'s0': -1}}
}
from mdp import MDP
mdp = MDP(transition_probs, rewards, initial_state='s0')
def get_action_value(mdp, state_values, state, action, gamma):
    """ Computes Q(s,a) as in formula above """
    Q = 0.
    state_probs = mdp.get_next_states(state, action)
    for next_s, p in state_probs.items():
        r = float(mdp.get_reward(state, action, next_s))
        v = float(state_values[next_s])
        Q += p * (r + gamma * v)
    return Q
def get_new_state_value(mdp, state_values, state, gamma):
    """ Computes next V(s) as per formula above. Please do not change state_values in process. """
    # get_action_value is repeated here so the submitted function is self-contained
    def get_action_value(mdp, state_values, state, action, gamma):
        """ Computes Q(s,a) as in formula above """
        Q = 0.
        state_probs = mdp.get_next_states(state, action)
        for next_s, p in state_probs.items():
            r = float(mdp.get_reward(state, action, next_s))
            v = float(state_values[next_s])
            Q += p * (r + gamma * v)
        return Q
    if mdp.is_terminal(state): return 0
    actions = mdp.get_possible_actions(state)
    values = [get_action_value(mdp, state_values, state, action, gamma)
              for action in actions]
    return np.max(values)
def get_optimal_action(mdp, state_values, state, gamma=0.9):
    """ Finds optimal action using formula above. """
    # nested helpers repeated here so the submitted function is self-contained
    def get_action_value(mdp, state_values, state, action, gamma):
        """ Computes Q(s,a) as in formula above """
        Q = 0.
        state_probs = mdp.get_next_states(state, action)
        for next_s, p in state_probs.items():
            r = float(mdp.get_reward(state, action, next_s))
            v = float(state_values[next_s])
            Q += p * (r + gamma * v)
        return Q
    def get_new_state_value(mdp, state_values, state, gamma):
        """ Computes next V(s) as per formula above. Please do not change state_values in process. """
        if mdp.is_terminal(state): return 0
        actions = mdp.get_possible_actions(state)
        values = [get_action_value(mdp, state_values, state, action, gamma)
                  for action in actions]
        return np.max(values)
    if mdp.is_terminal(state): return None
    action_values = {action: get_action_value(mdp, state_values, state, action, gamma)
                     for action in mdp.get_possible_actions(state)}
    optimal_action = max(action_values, key=action_values.get)
    return optimal_action
def value_iteration(mdp, state_values=None, gamma = 0.9, num_iter = 1000, min_difference = 1e-5):
""" performs num_iter value iteration steps starting from state_values. Same as before but in a function """
def get_action_value(mdp, state_values, state, action, gamma):
""" Computes Q(s,a) as in formula above """
Q = 0.
state_probs = mdp.get_next_states(state, action)
for next_s, p in state_probs.items():
r = float(mdp.get_reward(state, action, next_s))
v = float(state_values[next_s])
Q += p*(r+gamma*v)
return Q
def get_new_state_value(mdp, state_values, state, gamma):
""" Computes next V(s) as per formula above. Please do not change state_values in process. """
if mdp.is_terminal(state): return 0
actions = mdp.get_possible_actions(state)
values = [get_action_value(mdp, state_values, state, action, gamma) \
for action in actions]
new_state_value = np.max(values)
return new_state_value
state_values = state_values or {s : 0 for s in mdp.get_all_states()}
for i in range(num_iter):
# Compute new state values using the functions you defined above. It must be a dict {state : new_V(state)}
new_state_values = {s : get_new_state_value(mdp, state_values, s, gamma) for s in mdp.get_all_states()}
assert isinstance(new_state_values, dict)
# Compute difference
diff = max(abs(new_state_values[s] - state_values[s]) for s in mdp.get_all_states())
print("iter %4i | diff: %6.5f | V(start): %.3f "%(i, diff, new_state_values[mdp._initial_state]))
state_values = new_state_values
if diff < min_difference:
print("Terminated"); break
return state_values
import numpy as np
test_Vs = {s : i for i, s in enumerate(mdp.get_all_states())}
assert np.allclose(get_action_value(mdp, test_Vs, 's2', 'a1', 0.9), 0.69)
assert np.allclose(get_action_value(mdp, test_Vs, 's1', 'a0', 0.9), 3.95)
test_Vs_copy = dict(test_Vs)
assert np.allclose(get_new_state_value(mdp, test_Vs, 's0', 0.9), 1.8)
assert np.allclose(get_new_state_value(mdp, test_Vs, 's2', 0.9), 0.69)
assert test_Vs == test_Vs_copy, "please do not change state_values in get_new_state_value"
from submit import submit_assigment
submit_assigment(
get_action_value,
get_new_state_value,
get_optimal_action,
value_iteration,
'[email protected]',
'TyhMT4GKfmegJgom'
)
###Output
_____no_output_____
###Markdown
Markov decision process

This week's methods are all built to solve __M__arkov __D__ecision __P__rocesses. In the broadest sense, an MDP is defined by how it changes states and how rewards are computed.

State transition is defined by $P(s'|s,a)$ - how likely are you to end up at state $s'$ if you take action $a$ from state $s$. Now there's more than one way to define rewards, but we'll use $r(s,a,s')$ function for convenience.

For starters, let's define a simple MDP from this picture:

_img by MistWiz (Own work) [Public domain], via Wikimedia Commons_
###Code
transition_probs = {
's0':{
'a0': {'s0': 0.5, 's2': 0.5},
'a1': {'s2': 1}
},
's1':{
'a0': {'s0': 0.7, 's1': 0.1, 's2': 0.2},
'a1': {'s1': 0.95, 's2': 0.05}
},
's2':{
'a0': {'s0': 0.4, 's1': 0.6},
'a1': {'s0': 0.3, 's1': 0.3, 's2':0.4}
}
}
rewards = {
's1': {'a0': {'s0': +5}},
's2': {'a1': {'s0': -1}}
}
from mdp import MDP
mdp = MDP(transition_probs, rewards, initial_state='s0')
###Output
_____no_output_____
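###Markdown
Before handing these dicts to `MDP`, it is worth checking that they are well-formed. A minimal sanity check (not part of the assignment): for every state-action pair, the outgoing transition probabilities should sum to 1.
###Code
# each action's outcome distribution must sum to 1
for s, actions in transition_probs.items():
    for a, outcomes in actions.items():
        total = sum(outcomes.values())
        assert abs(total - 1.0) < 1e-9, "probabilities for (%s, %s) sum to %s" % (s, a, total)
print("every (state, action) pair defines a valid distribution")
###Output
_____no_output_____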
###Markdown
We can now use MDP just as any other gym environment:
###Code
print('initial state =', mdp.reset())
next_state, reward, done, info = mdp.step('a1')
print('next_state = %s, reward = %s, done = %s' % (next_state, reward, done))
###Output
initial state = s0
next_state = s2, reward = 0.0, done = False
###Markdown
but it also has other methods that you'll need for Value Iteration
###Code
print("mdp.get_all_states =", mdp.get_all_states())
print("mdp.get_possible_actions('s1') = ", mdp.get_possible_actions('s1'))
print("mdp.get_next_states('s1', 'a0') = ", mdp.get_next_states('s1', 'a0'))
print("mdp.get_reward('s1', 'a0', 's0') = ", mdp.get_reward('s1', 'a0', 's0'))
print("mdp.get_transition_prob('s1', 'a0', 's0') = ", mdp.get_transition_prob('s1', 'a0', 's0'))
###Output
mdp.get_all_states = ('s0', 's1', 's2')
mdp.get_possible_actions('s1') = ('a0', 'a1')
mdp.get_next_states('s1', 'a0') = {'s0': 0.7, 's1': 0.1, 's2': 0.2}
mdp.get_reward('s1', 'a0', 's0') = 5
mdp.get_transition_prob('s1', 'a0', 's0') = 0.7
###Markdown
Value Iteration

Now let's build something to solve this MDP. The simplest algorithm so far is __V__alue __I__teration. Here's the pseudo-code for VI:

---
`1.` Initialize $V^{(0)}(s)=0$, for all $s$
`2.` For $i=0, 1, 2, \dots$
`3.` $\quad V_{(i+1)}(s) = \max_a \sum_{s'} P(s' | s,a) \cdot [ r(s,a,s') + \gamma V_{i}(s')]$, for all $s$
---

First, let's write a function to compute the state-action value function $Q^{\pi}$, defined as follows:
$$Q_i(s, a) = \sum_{s'} P(s' | s,a) \cdot [ r(s,a,s') + \gamma V_{i}(s')]$$
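As a quick sanity check of this formula (the numbers match the asserts used below), take $V(s_0)=0$, $V(s_1)=1$, $V(s_2)=2$ and $\gamma = 0.9$. Then for state $s_2$ and action $a_1$:
$$Q(s_2, a_1) = 0.3 \cdot (-1 + 0.9 \cdot 0) + 0.3 \cdot (0 + 0.9 \cdot 1) + 0.4 \cdot (0 + 0.9 \cdot 2) = -0.3 + 0.27 + 0.72 = 0.69$$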
###Code
def get_action_value(mdp, state_values, state, action, gamma):
    """ Computes Q(s,a) as in formula above """
    Q = sum(p * (mdp.get_reward(state, action, next_state) + gamma * state_values[next_state])
            for next_state, p in mdp.get_next_states(state, action).items())
    return Q
import numpy as np
test_Vs = {s : i for i, s in enumerate(mdp.get_all_states())}
assert np.allclose(get_action_value(mdp, test_Vs, 's2', 'a1', 0.9), 0.69)
assert np.allclose(get_action_value(mdp, test_Vs, 's1', 'a0', 0.9), 3.95)
###Output
_____no_output_____
###Markdown
Using $Q(s,a)$ we can now define the "next" V(s) for value iteration:
$$V_{(i+1)}(s) = \max_a \sum_{s'} P(s' | s,a) \cdot [ r(s,a,s') + \gamma V_{i}(s')] = \max_a Q_i(s,a)$$
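With the same $V$ and $\gamma = 0.9$ as above: from $s_0$, action $a_1$ leads to $s_2$ with probability 1 and zero reward, so $Q(s_0, a_1) = 1 \cdot (0 + 0.9 \cdot 2) = 1.8$, while $Q(s_0, a_0) = 0.5 \cdot (0 + 0.9 \cdot 0) + 0.5 \cdot (0 + 0.9 \cdot 2) = 0.9$. Hence the new $V(s_0) = \max(0.9, 1.8) = 1.8$, exactly what the assert below expects.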
###Code
def get_new_state_value(mdp, state_values, state, gamma):
    """ Computes next V(s) as per formula above. Please do not change state_values in process. """
    if mdp.is_terminal(state): return 0
    return max(get_action_value(mdp, state_values, state, action, gamma)
               for action in mdp.get_possible_actions(state))
test_Vs_copy = dict(test_Vs)
assert np.allclose(get_new_state_value(mdp, test_Vs, 's0', 0.9), 1.8)
assert np.allclose(get_new_state_value(mdp, test_Vs, 's2', 0.9), 0.69)
assert test_Vs == test_Vs_copy, "please do not change state_values in get_new_state_value"
###Output
_____no_output_____
###Markdown
Finally, let's combine everything we wrote into a working value iteration algo.
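Why is a small `diff` a sound stopping criterion? The Bellman optimality backup is a $\gamma$-contraction in the max-norm, so $\|V_{i+1} - V_i\|_\infty \le \gamma \|V_i - V_{i-1}\|_\infty$, and once `diff` drops below $\epsilon$ the current iterate is guaranteed to be within $\frac{\gamma}{1-\gamma}\epsilon$ of $V^*$.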
###Code
# parameters
gamma = 0.9 # discount for MDP
num_iter = 100 # maximum iterations, excluding initialization
min_difference = 0.001 # stop VI if new values are this close to old values (or closer)
# initialize V(s)
state_values = {s : 0 for s in mdp.get_all_states()}
for i in range(num_iter):
# Compute new state values using the functions you defined above. It must be a dict {state : new_V(state)}
new_state_values = {s:get_new_state_value(mdp,state_values,s,gamma) for s in mdp.get_all_states()}
assert isinstance(new_state_values, dict)
# Compute difference
diff = max(abs(new_state_values[s] - state_values[s]) for s in mdp.get_all_states())
print("iter %4i | diff: %6.5f | "%(i, diff), end="")
print(' '.join("V(%s) = %.3f"%(s, v) for s,v in state_values.items()), end='\n\n')
state_values = new_state_values
if diff < min_difference:
print("Terminated"); break
print("Final state values:", state_values)
assert abs(state_values['s0'] - 8.032) < 0.01
assert abs(state_values['s1'] - 11.169) < 0.01
assert abs(state_values['s2'] - 8.921) < 0.01
###Output
Final state values: {'s0': 8.023123818663871, 's1': 11.163174814980803, 's2': 8.915559364985523}
###Markdown
Now let's use those $V^{*}(s)$ to find optimal actions in each state:
$$\pi^*(s) = \operatorname{argmax}_a \sum_{s'} P(s' | s,a) \cdot [ r(s,a,s') + \gamma V_{i}(s')] = \operatorname{argmax}_a Q_i(s,a)$$
The only difference vs V(s) is that here we take argmax instead of max: find the action with the maximum Q(s,a).
###Code
def get_optimal_action(mdp, state_values, state, gamma=0.9):
""" Finds optimal action using formula above. """
if mdp.is_terminal(state): return None
max_Q = float("-inf")
max_action = None
for action in mdp.get_possible_actions(state):
q = get_action_value(mdp,state_values,state,action,gamma)
if q > max_Q:
max_Q = q
max_action = action
return max_action
assert get_optimal_action(mdp, state_values, 's0', gamma) == 'a1'
assert get_optimal_action(mdp, state_values, 's1', gamma) == 'a0'
assert get_optimal_action(mdp, state_values, 's2', gamma) == 'a0'
# Measure agent's average reward
s = mdp.reset()
rewards = []
for _ in range(10000):
s, r, done, _ = mdp.step(get_optimal_action(mdp, state_values, s, gamma))
rewards.append(r)
print("average reward: ", np.mean(rewards))
assert(0.85 < np.mean(rewards) < 1.0)
###Output
average reward: 0.902
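###Markdown
Side note: the explicit loop above is easy to read, but the same argmax can be written in one expression with `max` and a `key` function. A minimal equivalent sketch (it assumes `get_action_value` as defined above; `get_optimal_action_short` is just an illustrative name):
###Code
def get_optimal_action_short(mdp, state_values, state, gamma=0.9):
    """ Same argmax as get_optimal_action, expressed with max(..., key=...) """
    if mdp.is_terminal(state):
        return None
    return max(mdp.get_possible_actions(state),
               key=lambda a: get_action_value(mdp, state_values, state, a, gamma))
assert get_optimal_action_short(mdp, state_values, 's0', gamma) == 'a1'
###Output
_____no_output_____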
###Markdown
Frozen lake
###Code
from mdp import FrozenLakeEnv
mdp = FrozenLakeEnv(slip_chance=0)
mdp.render()
def value_iteration(mdp, state_values=None, gamma = 0.9, num_iter = 1000, min_difference = 1e-5):
""" performs num_iter value iteration steps starting from state_values. Same as before but in a function """
state_values = state_values or {s : 0 for s in mdp.get_all_states()}
for i in range(num_iter):
# Compute new state values using the functions you defined above. It must be a dict {state : new_V(state)}
        new_state_values = {s: get_new_state_value(mdp, state_values, s, gamma) for s in mdp.get_all_states()}
assert isinstance(new_state_values, dict)
# Compute difference
diff = max(abs(new_state_values[s] - state_values[s]) for s in mdp.get_all_states())
print("iter %4i | diff: %6.5f | V(start): %.3f "%(i, diff, new_state_values[mdp._initial_state]))
state_values = new_state_values
if diff < min_difference:
print("Terminated"); break
return state_values
state_values = value_iteration(mdp)
s = mdp.reset()
mdp.render()
for t in range(100):
a = get_optimal_action(mdp, state_values, s, gamma)
print(a, end='\n\n')
s, r, done, _ = mdp.step(a)
mdp.render()
if done: break
###Output
*FFF
FHFH
FFFH
HFFG
down
SFFF
*HFH
FFFH
HFFG
down
SFFF
FHFH
*FFH
HFFG
right
SFFF
FHFH
F*FH
HFFG
down
SFFF
FHFH
FFFH
H*FG
right
SFFF
FHFH
FFFH
HF*G
right
SFFF
FHFH
FFFH
HFF*
###Markdown
Let's visualize!

It's usually interesting to see what your algorithm actually learned under the hood. To do so, we'll plot state value functions and optimal actions at each VI step.
###Code
import matplotlib.pyplot as plt
%matplotlib inline
def draw_policy(mdp, state_values):
plt.figure(figsize=(3,3))
h,w = mdp.desc.shape
states = sorted(mdp.get_all_states())
V = np.array([state_values[s] for s in states])
Pi = {s: get_optimal_action(mdp, state_values, s, gamma) for s in states}
plt.imshow(V.reshape(w,h), cmap='gray', interpolation='none', clim=(0,1))
ax = plt.gca()
ax.set_xticks(np.arange(h)-.5)
ax.set_yticks(np.arange(w)-.5)
ax.set_xticklabels([])
ax.set_yticklabels([])
Y, X = np.mgrid[0:4, 0:4]
    a2uv = {'left': (-1, 0), 'down': (0, -1), 'right': (1, 0), 'up': (0, 1)}
for y in range(h):
for x in range(w):
plt.text(x, y, str(mdp.desc[y,x].item()),
color='g', size=12, verticalalignment='center',
horizontalalignment='center', fontweight='bold')
a = Pi[y, x]
if a is None: continue
u, v = a2uv[a]
plt.arrow(x, y,u*.3, -v*.3, color='m', head_width=0.1, head_length=0.1)
plt.grid(color='b', lw=2, ls='-')
plt.show()
state_values = {s : 0 for s in mdp.get_all_states()}
for i in range(10):
print("after iteration %i"%i)
state_values = value_iteration(mdp, state_values, num_iter=1)
draw_policy(mdp, state_values)
# please ignore iter 0 at each step
from IPython.display import clear_output
from time import sleep
mdp = FrozenLakeEnv(map_name='8x8',slip_chance=0.1)
state_values = {s : 0 for s in mdp.get_all_states()}
for i in range(30):
clear_output(True)
print("after iteration %i"%i)
state_values = value_iteration(mdp, state_values, num_iter=1)
draw_policy(mdp, state_values)
sleep(0.5)
# please ignore iter 0 at each step
###Output
after iteration 29
iter 0 | diff: 0.00000 | V(start): 0.198
Terminated
###Markdown
Massive tests
###Code
mdp = FrozenLakeEnv(slip_chance=0)
state_values = value_iteration(mdp)
total_rewards = []
for game_i in range(1000):
s = mdp.reset()
rewards = []
for t in range(100):
s, r, done, _ = mdp.step(get_optimal_action(mdp, state_values, s, gamma))
rewards.append(r)
if done: break
total_rewards.append(np.sum(rewards))
print("average reward: ", np.mean(total_rewards))
assert(1.0 <= np.mean(total_rewards) <= 1.0)
print("Well done!")
# Measure agent's average reward
mdp = FrozenLakeEnv(slip_chance=0.1)
state_values = value_iteration(mdp)
total_rewards = []
for game_i in range(1000):
s = mdp.reset()
rewards = []
for t in range(100):
s, r, done, _ = mdp.step(get_optimal_action(mdp, state_values, s, gamma))
rewards.append(r)
if done: break
total_rewards.append(np.sum(rewards))
print("average reward: ", np.mean(total_rewards))
assert(0.8 <= np.mean(total_rewards) <= 0.95)
print("Well done!")
# Measure agent's average reward
mdp = FrozenLakeEnv(slip_chance=0.25)
state_values = value_iteration(mdp)
total_rewards = []
for game_i in range(1000):
s = mdp.reset()
rewards = []
for t in range(100):
s, r, done, _ = mdp.step(get_optimal_action(mdp, state_values, s, gamma))
rewards.append(r)
if done: break
total_rewards.append(np.sum(rewards))
print("average reward: ", np.mean(total_rewards))
assert(0.6 <= np.mean(total_rewards) <= 0.7)
print("Well done!")
# Measure agent's average reward
mdp = FrozenLakeEnv(slip_chance=0.2, map_name='8x8')
state_values = value_iteration(mdp)
total_rewards = []
for game_i in range(1000):
s = mdp.reset()
rewards = []
for t in range(100):
s, r, done, _ = mdp.step(get_optimal_action(mdp, state_values, s, gamma))
rewards.append(r)
if done: break
total_rewards.append(np.sum(rewards))
print("average reward: ", np.mean(total_rewards))
assert(0.6 <= np.mean(total_rewards) <= 0.8)
print("Well done!")
###Output
iter 0 | diff: 0.80000 | V(start): 0.000
iter 1 | diff: 0.57600 | V(start): 0.000
iter 2 | diff: 0.41472 | V(start): 0.000
iter 3 | diff: 0.29860 | V(start): 0.000
iter 4 | diff: 0.24186 | V(start): 0.000
iter 5 | diff: 0.19349 | V(start): 0.000
iter 6 | diff: 0.15325 | V(start): 0.000
iter 7 | diff: 0.12288 | V(start): 0.000
iter 8 | diff: 0.09930 | V(start): 0.000
iter 9 | diff: 0.08037 | V(start): 0.000
iter 10 | diff: 0.06426 | V(start): 0.000
iter 11 | diff: 0.05129 | V(start): 0.000
iter 12 | diff: 0.04330 | V(start): 0.000
iter 13 | diff: 0.03802 | V(start): 0.033
iter 14 | diff: 0.03332 | V(start): 0.058
iter 15 | diff: 0.02910 | V(start): 0.087
iter 16 | diff: 0.01855 | V(start): 0.106
iter 17 | diff: 0.01403 | V(start): 0.120
iter 18 | diff: 0.00810 | V(start): 0.128
iter 19 | diff: 0.00555 | V(start): 0.133
iter 20 | diff: 0.00321 | V(start): 0.137
iter 21 | diff: 0.00247 | V(start): 0.138
iter 22 | diff: 0.00147 | V(start): 0.139
iter 23 | diff: 0.00104 | V(start): 0.140
iter 24 | diff: 0.00058 | V(start): 0.140
iter 25 | diff: 0.00036 | V(start): 0.141
iter 26 | diff: 0.00024 | V(start): 0.141
iter 27 | diff: 0.00018 | V(start): 0.141
iter 28 | diff: 0.00012 | V(start): 0.141
iter 29 | diff: 0.00007 | V(start): 0.141
iter 30 | diff: 0.00004 | V(start): 0.141
iter 31 | diff: 0.00003 | V(start): 0.141
iter 32 | diff: 0.00001 | V(start): 0.141
iter 33 | diff: 0.00001 | V(start): 0.141
Terminated
average reward: 0.748
Well done!
###Markdown
Submit to coursera
###Code
from submit import submit_assigment
submit_assigment(
get_action_value,
get_new_state_value,
get_optimal_action,
value_iteration,
"[email protected]","QH1sshvv6H3WRnWg")
###Output
_____no_output_____
###Markdown
Markov decision process

This week's methods are all built to solve __M__arkov __D__ecision __P__rocesses. In the broadest sense, an MDP is defined by how it changes states and how rewards are computed.

State transition is defined by $P(s'|s,a)$ - how likely are you to end up at state $s'$ if you take action $a$ from state $s$. Now there's more than one way to define rewards, but we'll use $r(s,a,s')$ function for convenience.

For starters, let's define a simple MDP from this picture:

_img by MistWiz (Own work) [Public domain], via Wikimedia Commons_
###Code
transition_probs = {
's0': {
'a0': {'s0': 0.5, 's2': 0.5},
'a1': {'s2': 1}
},
's1': {
'a0': {'s0': 0.7, 's1': 0.1, 's2': 0.2},
'a1': {'s1': 0.95, 's2': 0.05}
},
's2': {
'a0': {'s0': 0.4, 's1': 0.6},
'a1': {'s0': 0.3, 's1': 0.3, 's2': 0.4}
}
}
rewards = {
's1': {'a0': {'s0': +5}},
's2': {'a1': {'s0': -1}}
}
from mdp import MDP
mdp = MDP(transition_probs, rewards, initial_state='s0')
###Output
_____no_output_____
###Markdown
We can now use MDP just as any other gym environment:
###Code
print('initial state =', mdp.reset())
next_state, reward, done, info = mdp.step('a1')
print('next_state = %s, reward = %s, done = %s' % (next_state, reward, done))
###Output
initial state = s0
next_state = s2, reward = 0.0, done = False
###Markdown
but it also has other methods that you'll need for Value Iteration
###Code
print("mdp.get_all_states =", mdp.get_all_states())
print("mdp.get_possible_actions('s1') = ", mdp.get_possible_actions('s1'))
print("mdp.get_next_states('s1', 'a0') = ", mdp.get_next_states('s1', 'a0'))
print("mdp.get_reward('s1', 'a0', 's0') = ", mdp.get_reward('s1', 'a0', 's0'))
print("mdp.get_transition_prob('s1', 'a0', 's0') = ", mdp.get_transition_prob('s1', 'a0', 's0'))
###Output
mdp.get_all_states = ('s0', 's1', 's2')
mdp.get_possible_actions('s1') = ('a0', 'a1')
mdp.get_next_states('s1', 'a0') = {'s0': 0.7, 's1': 0.1, 's2': 0.2}
mdp.get_reward('s1', 'a0', 's0') = 5
mdp.get_transition_prob('s1', 'a0', 's0') = 0.7
###Markdown
Value Iteration

Now let's build something to solve this MDP. The simplest algorithm so far is __V__alue __I__teration. Here's the pseudo-code for VI:

---
`1.` Initialize $V^{(0)}(s)=0$, for all $s$
`2.` For $i=0, 1, 2, \dots$
`3.` $\quad V_{(i+1)}(s) = \max_a \sum_{s'} P(s' | s,a) \cdot [ r(s,a,s') + \gamma V_{i}(s')]$, for all $s$
---

First, let's write a function to compute the state-action value function $Q^{\pi}$, defined as follows:
$$Q_i(s, a) = \sum_{s'} P(s' | s,a) \cdot [ r(s,a,s') + \gamma V_{i}(s')]$$
###Code
def get_action_value(mdp, state_values, state, action, gamma):
""" Computes Q(s,a) as in formula above """
next_possible_states = mdp.get_next_states(state, action)
Q = 0
for next_state, prob in next_possible_states.items():
reward = mdp.get_reward(state, action, next_state)
Q += prob * (reward + gamma * state_values[next_state])
return Q
import numpy as np
test_Vs = {s: i for i, s in enumerate(mdp.get_all_states())}
assert np.allclose(get_action_value(mdp, test_Vs, 's2', 'a1', 0.9), 0.69)
assert np.allclose(get_action_value(mdp, test_Vs, 's1', 'a0', 0.9), 3.95)
###Output
_____no_output_____
###Markdown
Using $Q(s,a)$ we can now define the "next" V(s) for value iteration:
$$V_{(i+1)}(s) = \max_a \sum_{s'} P(s' | s,a) \cdot [ r(s,a,s') + \gamma V_{i}(s')] = \max_a Q_i(s,a)$$
###Code
def get_new_state_value(mdp, state_values, state, gamma):
""" Computes next V(s) as per formula above. Please do not change state_values in process. """
if mdp.is_terminal(state): return 0
possible_actions = mdp.get_possible_actions(state)
Qs = [get_action_value(mdp, state_values, state, action, gamma) for action in possible_actions]
return max(Qs)
test_Vs_copy = dict(test_Vs)
assert np.allclose(get_new_state_value(mdp, test_Vs, 's0', 0.9), 1.8)
assert np.allclose(get_new_state_value(mdp, test_Vs, 's2', 0.9), 0.69)
assert test_Vs == test_Vs_copy, "please do not change state_values in get_new_state_value"
###Output
_____no_output_____
###Markdown
Finally, let's combine everything we wrote into a working value iteration algo.
###Code
# parameters
gamma = 0.9 # discount for MDP
num_iter = 100 # maximum iterations, excluding initialization
min_difference = 0.001 # stop VI if new values are this close to old values (or closer)
# initialize V(s)
state_values = {s: 0 for s in mdp.get_all_states()}
for i in range(num_iter):
# Compute new state values using the functions you defined above.
# It must be a dict {state : new_V(state)}
new_state_values = {s: get_new_state_value(mdp, state_values, s, gamma) for s in mdp.get_all_states()}
assert isinstance(new_state_values, dict)
# Compute difference
diff = max(abs(new_state_values[s] - state_values[s]) for s in mdp.get_all_states())
print("iter %4i | diff: %6.5f | " % (i, diff), end="")
print(' '.join("V(%s) = %.3f" % (s, v) for s, v in state_values.items()), end='\n\n')
state_values = new_state_values
if diff < min_difference:
print("Terminated")
break
print("Final state values:", state_values)
assert abs(state_values['s0'] - 8.032) < 0.01
assert abs(state_values['s1'] - 11.169) < 0.01
assert abs(state_values['s2'] - 8.921) < 0.01
###Output
Final state values: {'s0': 8.023123818663871, 's1': 11.163174814980803, 's2': 8.915559364985523}
###Markdown
Now let's use those $V^{*}(s)$ to find optimal actions in each state:
$$\pi^*(s) = \operatorname{argmax}_a \sum_{s'} P(s' | s,a) \cdot [ r(s,a,s') + \gamma V_{i}(s')] = \operatorname{argmax}_a Q_i(s,a)$$
The only difference vs V(s) is that here we take argmax instead of max: find the action with the maximum Q(s,a).
###Code
def get_optimal_action(mdp, state_values, state, gamma=0.9):
""" Finds optimal action using formula above. """
if mdp.is_terminal(state):
return None
best_action = None
highest_Q = 0
for action in mdp.get_possible_actions(state):
Q = get_action_value(mdp, state_values, state, action, gamma)
if best_action is None or Q > highest_Q:
highest_Q = Q
best_action = action
return best_action
assert get_optimal_action(mdp, state_values, 's0', gamma) == 'a1'
assert get_optimal_action(mdp, state_values, 's1', gamma) == 'a0'
assert get_optimal_action(mdp, state_values, 's2', gamma) == 'a0'
# Measure agent's average reward
s = mdp.reset()
rewards = []
for _ in range(10000):
s, r, done, _ = mdp.step(get_optimal_action(mdp, state_values, s, gamma))
rewards.append(r)
print("average reward: ", np.mean(rewards))
assert 0.85 < np.mean(rewards) < 1.0
###Output
average reward: 0.928
###Markdown
Frozen lake
###Code
from mdp import FrozenLakeEnv
mdp = FrozenLakeEnv(slip_chance=0)
mdp.render()
def value_iteration(mdp, state_values=None, gamma=0.9, num_iter=1000, min_difference=1e-5):
""" performs num_iter value iteration steps starting from state_values. Same as before but in a function """
state_values = state_values or {s: 0 for s in mdp.get_all_states()}
for i in range(num_iter):
# Compute new state values using the functions you defined above.
# It must be a dict {state : new_V(state)}
new_state_values = {s: get_new_state_value(mdp, state_values, s, gamma) for s in mdp.get_all_states()}
assert isinstance(new_state_values, dict)
# Compute difference
diff = max(abs(new_state_values[s] - state_values[s]) for s in mdp.get_all_states())
print("iter %4i | diff: %6.5f | V(start): %.3f " % (i, diff, new_state_values[mdp._initial_state]))
state_values = new_state_values
if diff < min_difference:
print("Terminated")
break
return state_values
state_values = value_iteration(mdp)
s = mdp.reset()
mdp.render()
for t in range(100):
a = get_optimal_action(mdp, state_values, s, gamma)
print(a, end='\n\n')
s, r, done, _ = mdp.step(a)
mdp.render()
if done: break
###Output
*FFF
FHFH
FFFH
HFFG
down
SFFF
*HFH
FFFH
HFFG
down
SFFF
FHFH
*FFH
HFFG
right
SFFF
FHFH
F*FH
HFFG
down
SFFF
FHFH
FFFH
H*FG
right
SFFF
FHFH
FFFH
HF*G
right
SFFF
FHFH
FFFH
HFF*
###Markdown
Let's visualize!

It's usually interesting to see what your algorithm actually learned under the hood. To do so, we'll plot state value functions and optimal actions at each VI step.
###Code
import matplotlib.pyplot as plt
%matplotlib inline
def draw_policy(mdp, state_values):
plt.figure(figsize=(3, 3))
h, w = mdp.desc.shape
states = sorted(mdp.get_all_states())
V = np.array([state_values[s] for s in states])
Pi = {s: get_optimal_action(mdp, state_values, s, gamma) for s in states}
plt.imshow(V.reshape(w, h), cmap='gray', interpolation='none', clim=(0, 1))
ax = plt.gca()
ax.set_xticks(np.arange(h) - .5)
ax.set_yticks(np.arange(w) - .5)
ax.set_xticklabels([])
ax.set_yticklabels([])
Y, X = np.mgrid[0:4, 0:4]
    a2uv = {'left': (-1, 0), 'down': (0, -1), 'right': (1, 0), 'up': (0, 1)}
for y in range(h):
for x in range(w):
plt.text(x, y, str(mdp.desc[y, x].item()),
color='g', size=12, verticalalignment='center',
horizontalalignment='center', fontweight='bold')
a = Pi[y, x]
if a is None: continue
u, v = a2uv[a]
plt.arrow(x, y, u * .3, -v * .3, color='m', head_width=0.1, head_length=0.1)
plt.grid(color='b', lw=2, ls='-')
plt.show()
state_values = {s: 0 for s in mdp.get_all_states()}
for i in range(10):
print("after iteration %i" % i)
state_values = value_iteration(mdp, state_values, num_iter=1)
draw_policy(mdp, state_values)
# please ignore iter 0 at each step
from time import sleep
from IPython.display import clear_output
mdp = FrozenLakeEnv(map_name='8x8', slip_chance=0.1)
state_values = {s: 0 for s in mdp.get_all_states()}
for i in range(30):
clear_output(True)
print("after iteration %i" % i)
state_values = value_iteration(mdp, state_values, num_iter=1)
draw_policy(mdp, state_values)
sleep(0.5)
# please ignore iter 0 at each step
###Output
after iteration 29
iter 0 | diff: 0.00000 | V(start): 0.198
Terminated
###Markdown
Massive tests
###Code
mdp = FrozenLakeEnv(slip_chance=0)
state_values = value_iteration(mdp)
total_rewards = []
for game_i in range(1000):
s = mdp.reset()
rewards = []
for t in range(100):
s, r, done, _ = mdp.step(get_optimal_action(mdp, state_values, s, gamma))
rewards.append(r)
if done: break
total_rewards.append(np.sum(rewards))
print("average reward: ", np.mean(total_rewards))
assert 1.0 <= np.mean(total_rewards) <= 1.0
print("Well done!")
# Measure agent's average reward
mdp = FrozenLakeEnv(slip_chance=0.1)
state_values = value_iteration(mdp)
total_rewards = []
for game_i in range(1000):
s = mdp.reset()
rewards = []
for t in range(100):
s, r, done, _ = mdp.step(get_optimal_action(mdp, state_values, s, gamma))
rewards.append(r)
if done: break
total_rewards.append(np.sum(rewards))
print("average reward: ", np.mean(total_rewards))
assert 0.8 <= np.mean(total_rewards) <= 0.95
print("Well done!")
# Measure agent's average reward
mdp = FrozenLakeEnv(slip_chance=0.25)
state_values = value_iteration(mdp)
total_rewards = []
for game_i in range(1000):
s = mdp.reset()
rewards = []
for t in range(100):
s, r, done, _ = mdp.step(get_optimal_action(mdp, state_values, s, gamma))
rewards.append(r)
if done: break
total_rewards.append(np.sum(rewards))
print("average reward: ", np.mean(total_rewards))
assert 0.6 <= np.mean(total_rewards) <= 0.7
print("Well done!")
# Measure agent's average reward
mdp = FrozenLakeEnv(slip_chance=0.2, map_name='8x8')
state_values = value_iteration(mdp)
total_rewards = []
for game_i in range(1000):
s = mdp.reset()
rewards = []
for t in range(100):
s, r, done, _ = mdp.step(get_optimal_action(mdp, state_values, s, gamma))
rewards.append(r)
if done: break
total_rewards.append(np.sum(rewards))
print("average reward: ", np.mean(total_rewards))
assert 0.6 <= np.mean(total_rewards) <= 0.8
print("Well done!")
###Output
iter 0 | diff: 0.80000 | V(start): 0.000
iter 1 | diff: 0.57600 | V(start): 0.000
iter 2 | diff: 0.41472 | V(start): 0.000
iter 3 | diff: 0.29860 | V(start): 0.000
iter 4 | diff: 0.24186 | V(start): 0.000
iter 5 | diff: 0.19349 | V(start): 0.000
iter 6 | diff: 0.15325 | V(start): 0.000
iter 7 | diff: 0.12288 | V(start): 0.000
iter 8 | diff: 0.09930 | V(start): 0.000
iter 9 | diff: 0.08037 | V(start): 0.000
iter 10 | diff: 0.06426 | V(start): 0.000
iter 11 | diff: 0.05129 | V(start): 0.000
iter 12 | diff: 0.04330 | V(start): 0.000
iter 13 | diff: 0.03802 | V(start): 0.033
iter 14 | diff: 0.03332 | V(start): 0.058
iter 15 | diff: 0.02910 | V(start): 0.087
iter 16 | diff: 0.01855 | V(start): 0.106
iter 17 | diff: 0.01403 | V(start): 0.120
iter 18 | diff: 0.00810 | V(start): 0.128
iter 19 | diff: 0.00555 | V(start): 0.133
iter 20 | diff: 0.00321 | V(start): 0.137
iter 21 | diff: 0.00247 | V(start): 0.138
iter 22 | diff: 0.00147 | V(start): 0.139
iter 23 | diff: 0.00104 | V(start): 0.140
iter 24 | diff: 0.00058 | V(start): 0.140
iter 25 | diff: 0.00036 | V(start): 0.141
iter 26 | diff: 0.00024 | V(start): 0.141
iter 27 | diff: 0.00018 | V(start): 0.141
iter 28 | diff: 0.00012 | V(start): 0.141
iter 29 | diff: 0.00007 | V(start): 0.141
iter 30 | diff: 0.00004 | V(start): 0.141
iter 31 | diff: 0.00003 | V(start): 0.141
iter 32 | diff: 0.00001 | V(start): 0.141
iter 33 | diff: 0.00001 | V(start): 0.141
Terminated
average reward: 0.748
Well done!
###Markdown
Submit to coursera
###Code
from submit import submit_assigment
submit_assigment(
get_action_value,
get_new_state_value,
get_optimal_action,
value_iteration,
'[email protected]',
'WfbisWTMXT62ZOhU')
###Output
_____no_output_____
###Markdown
Markov decision processThis week's methods are all built to solve __M__arkov __D__ecision __P__rocesses. In the broadest sense, an MDP is defined by how it changes states and how rewards are computed.State transition is defined by $P(s' |s,a)$ - how likely are you to end at state $s'$ if you take action $a$ from state $s$. Now there's more than one way to define rewards, but we'll use the $r(s,a,s')$ function for convenience. For starters, let's define a simple MDP from this picture:_img by MistWiz (Own work) [Public domain], via Wikimedia Commons_
###Code
transition_probs = {
's0':{
'a0': {'s0': 0.5, 's2': 0.5},
'a1': {'s2': 1}
},
's1':{
'a0': {'s0': 0.7, 's1': 0.1, 's2': 0.2},
'a1': {'s1': 0.95, 's2': 0.05}
},
's2':{
'a0': {'s0': 0.4, 's1': 0.6},
'a1': {'s0': 0.3, 's1': 0.3, 's2':0.4}
}
}
rewards = {
's1': {'a0': {'s0': +5}},
's2': {'a1': {'s0': -1}}
}
from mdp import MDP
mdp = MDP(transition_probs, rewards, initial_state='s0')
###Output
_____no_output_____
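###Markdown
A quick sanity check before moving on (an editorial aside, not part of the original assignment): for a well-formed MDP, the next-state distribution of every state-action pair must sum to one. The sketch below uses only the `transition_probs` dict defined above.
###Code
# Verify that every P(s'|s,a) is a proper probability distribution.
for state, actions in transition_probs.items():
    for action, next_states in actions.items():
        total = sum(next_states.values())
        assert abs(total - 1.0) < 1e-9, (state, action, total)
print("All transition distributions sum to 1.")
###Output
_____no_output_____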
###Markdown
We can now use MDP just as any other gym environment:
###Code
print('initial state =', mdp.reset())
next_state, reward, done, info = mdp.step('a1')
print('next_state = %s, reward = %s, done = %s' % (next_state, reward, done))
###Output
initial state = s0
next_state = s2, reward = 0.0, done = False
###Markdown
but it also has other methods that you'll need for Value Iteration
###Code
print("mdp.get_all_states =", mdp.get_all_states())
print("mdp.get_possible_actions('s1') = ", mdp.get_possible_actions('s1'))
print("mdp.get_next_states('s1', 'a0') = ", mdp.get_next_states('s1', 'a0'))
print("mdp.get_reward('s1', 'a0', 's0') = ", mdp.get_reward('s1', 'a0', 's0'))
print("mdp.get_transition_prob('s1', 'a0', 's0') = ", mdp.get_transition_prob('s1', 'a0', 's0'))
###Output
mdp.get_all_states = ('s2', 's1', 's0')
mdp.get_possible_actions('s1') = ('a0', 'a1')
mdp.get_next_states('s1', 'a0') = {'s2': 0.2, 's1': 0.1, 's0': 0.7}
mdp.get_reward('s1', 'a0', 's0') = 5
mdp.get_transition_prob('s1', 'a0', 's0') = 0.7
###Markdown
Value IterationNow let's build something to solve this MDP. The simplest algorithm so far is __V__alue __I__terationHere's the pseudo-code for VI:---`1.` Initialize $V^{(0)}(s)=0$, for all $s$`2.` For $i=0, 1, 2, \dots$ `3.` $ \quad V_{(i+1)}(s) = \max_a \sum_{s'} P(s' | s,a) \cdot [ r(s,a,s') + \gamma V_{i}(s')]$, for all $s$--- First, let's write a function to compute the state-action value function $Q^{\pi}$, defined as follows$$Q_i(s, a) = \sum_{s'} P(s' | s,a) \cdot [ r(s,a,s') + \gamma V_{i}(s')]$$
###Code
def get_action_value(mdp, state_values, state, action, gamma):
""" Computes Q(s,a) as in formula above """
get_next_states = mdp.get_next_states(state, action)
Q = sum([p*(mdp.get_reward(state, action, s) + gamma*state_values[s]) for s,p in get_next_states.items()])
return Q
import numpy as np
test_Vs = {s : i for i, s in enumerate(sorted(mdp.get_all_states()))}
#test_Vs = {'s0':0, 's1':1, 's2':2}
assert np.allclose(get_action_value(mdp, test_Vs, 's2', 'a1', 0.9), 0.69)
assert np.allclose(get_action_value(mdp, test_Vs, 's1', 'a0', 0.9), 3.95)
###Output
_____no_output_____
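###Markdown
As a small illustrative aside, it can help to tabulate $Q(s,a)$ for every state-action pair under the test values; this sketch only reuses the functions and variables defined above.
###Code
# Tabulate Q(s, a) for every state-action pair under the test state values.
for s in sorted(mdp.get_all_states()):
    for a in mdp.get_possible_actions(s):
        print("Q(%s, %s) = %.3f" % (s, a, get_action_value(mdp, test_Vs, s, a, 0.9)))
###Output
_____no_output_____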
###Markdown
Using $Q(s,a)$ we can now define the "next" V(s) for value iteration. $$V_{(i+1)}(s) = \max_a \sum_{s'} P(s' | s,a) \cdot [ r(s,a,s') + \gamma V_{i}(s')] = \max_a Q_i(s,a)$$
###Code
def get_new_state_value(mdp, state_values, state, gamma):
""" Computes next V(s) as per formula above. Please do not change state_values in process. """
if mdp.is_terminal(state): return 0
new_state_value = sorted([(a, get_action_value(mdp, state_values, state, a, gamma)) for a in mdp.get_possible_actions(state)], key=lambda s: s[1], reverse=True)[0][1]
return new_state_value
test_Vs_copy = dict(test_Vs)
assert np.allclose(get_new_state_value(mdp, test_Vs, 's0', 0.9), 1.8)
assert np.allclose(get_new_state_value(mdp, test_Vs, 's2', 0.9), 0.69)
assert test_Vs == test_Vs_copy, "please do not change state_values in get_new_state_value"
###Output
_____no_output_____
###Markdown
Finally, let's combine everything we wrote into a working value iteration algo.
###Code
# parameters
gamma = 0.9 # discount for MDP
num_iter = 100 # maximum iterations, excluding initialization
min_difference = 0.001 # stop VI if new values are this close to old values (or closer)
# initialize V(s)
state_values = {s : 0 for s in mdp.get_all_states()}
for i in range(num_iter):
# Compute new state values using the functions you defined above. It must be a dict {state : new_V(state)}
new_state_values = {s:get_new_state_value(mdp, state_values, s, gamma) for s in mdp.get_all_states()}
assert isinstance(new_state_values, dict)
# Compute difference
diff = max(abs(new_state_values[s] - state_values[s]) for s in mdp.get_all_states())
print("iter %4i | diff: %6.5f | "%(i, diff), end="")
print(' '.join("V(%s) = %.3f"%(s, v) for s,v in state_values.items()), end='\n\n')
state_values = new_state_values
if diff < min_difference:
print("Terminated"); break
print("Final state values:", state_values)
assert abs(state_values['s0'] - 8.032) < 0.01
assert abs(state_values['s1'] - 11.169) < 0.01
assert abs(state_values['s2'] - 8.921) < 0.01
###Output
Final state values: {'s2': 8.915559364985523, 's1': 11.163174814980799, 's0': 8.023123818663871}
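###Markdown
A brief aside on why `min_difference = 0.001` is tight enough for the `0.01` tolerance in the asserts above: the Bellman backup is a $\gamma$-contraction in the sup norm, so at termination $\|V_{i+1} - V^{*}\|_{\infty} \le \frac{\gamma}{1-\gamma} \|V_{i+1} - V_{i}\|_{\infty}$. The sketch below just evaluates that bound for the parameters used here.
###Code
# Standard value iteration stopping bound: once the last sweep changes values
# by at most min_difference (sup norm), the result is within
# min_difference * gamma / (1 - gamma) of the true V*.
error_bound = min_difference * gamma / (1 - gamma)
print("worst-case distance to V*: %.4f" % error_bound)  # 0.001 * 0.9 / 0.1 = 0.009
###Output
_____no_output_____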
###Markdown
Now let's use those $V^{*}(s)$ to find optimal actions in each state $$\pi^*(s) = argmax_a \sum_{s'} P(s' | s,a) \cdot [ r(s,a,s') + \gamma V_{i}(s')] = argmax_a Q_i(s,a)$$ The only difference vs V(s) is that here we take not max but argmax: find the action with the maximum Q(s,a).
###Code
def get_optimal_action(mdp, state_values, state, gamma=0.9):
""" Finds optimal action using formula above. """
if mdp.is_terminal(state): return None
    opt_action = max(mdp.get_possible_actions(state),
                     key=lambda a: get_action_value(mdp, state_values, state, a, gamma))
return opt_action
assert get_optimal_action(mdp, state_values, 's0', gamma) == 'a1'
assert get_optimal_action(mdp, state_values, 's1', gamma) == 'a0'
assert get_optimal_action(mdp, state_values, 's2', gamma) == 'a0'
# Measure agent's average reward
s = mdp.reset()
rewards = []
for _ in range(10000):
s, r, done, _ = mdp.step(get_optimal_action(mdp, state_values, s, gamma))
rewards.append(r)
print("average reward: ", np.mean(rewards))
assert(0.85 < np.mean(rewards) < 1.0)
###Output
average reward: 0.924
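###Markdown
Since the greedy policy is just this argmax at every state, it can also be materialized as a plain `{state: action}` dict: a small sketch that reuses only the functions defined above.
###Code
# Materialize the greedy policy for every state.
policy = {s: get_optimal_action(mdp, state_values, s, gamma) for s in mdp.get_all_states()}
print(policy)  # per the asserts above: 's0' -> 'a1', 's1' -> 'a0', 's2' -> 'a0'
###Output
_____no_output_____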
###Markdown
Frozen lake
###Code
from mdp import FrozenLakeEnv
mdp = FrozenLakeEnv(slip_chance=0)
mdp.render()
mdp.desc
def value_iteration(mdp, state_values=None, gamma = 0.9, num_iter = 1000, min_difference = 1e-5):
""" performs num_iter value iteration steps starting from state_values. Same as before but in a function """
state_values = state_values or {s : 0 for s in mdp.get_all_states()}
for i in range(num_iter):
# Compute new state values using the functions you defined above. It must be a dict {state : new_V(state)}
new_state_values = {s:get_new_state_value(mdp, state_values, s, gamma) for s in mdp.get_all_states()}
assert isinstance(new_state_values, dict)
# Compute difference
diff = max(abs(new_state_values[s] - state_values[s]) for s in mdp.get_all_states())
print("iter %4i | diff: %6.5f | V(start): %.3f "%(i, diff, new_state_values[mdp._initial_state]))
state_values = new_state_values
if diff < min_difference:
print("Terminated"); break
return state_values
state_values = value_iteration(mdp)
s = mdp.reset()
mdp.render()
for t in range(100):
a = get_optimal_action(mdp, state_values, s, gamma)
print(a, end='\n\n')
s, r, done, _ = mdp.step(a)
mdp.render()
if done: break
###Output
*FFF
FHFH
FFFH
HFFG
down
SFFF
*HFH
FFFH
HFFG
down
SFFF
FHFH
*FFH
HFFG
right
SFFF
FHFH
F*FH
HFFG
down
SFFF
FHFH
FFFH
H*FG
right
SFFF
FHFH
FFFH
HF*G
right
SFFF
FHFH
FFFH
HFF*
###Markdown
Let's visualize!It's usually interesting to see what your algorithm actually learned under the hood. To do so, we'll plot state value functions and optimal actions at each VI step.
###Code
import matplotlib.pyplot as plt
%matplotlib inline
def draw_policy(mdp, state_values):
plt.figure(figsize=(3,3))
h,w = mdp.desc.shape
states = sorted(mdp.get_all_states())
V = np.array([state_values[s] for s in states])
Pi = {s: get_optimal_action(mdp, state_values, s, gamma) for s in states}
plt.imshow(V.reshape(w,h), cmap='gray', interpolation='none', clim=(0,1))
ax = plt.gca()
ax.set_xticks(np.arange(h)-.5)
ax.set_yticks(np.arange(w)-.5)
ax.set_xticklabels([])
ax.set_yticklabels([])
Y, X = np.mgrid[0:4, 0:4]
    a2uv = {'left': (-1, 0), 'down': (0, -1), 'right': (1, 0), 'up': (0, 1)}
for y in range(h):
for x in range(w):
plt.text(x, y, str(mdp.desc[y,x].item()),
color='g', size=12, verticalalignment='center',
horizontalalignment='center', fontweight='bold')
a = Pi[y, x]
if a is None: continue
u, v = a2uv[a]
plt.arrow(x, y,u*.3, -v*.3, color='m', head_width=0.1, head_length=0.1)
plt.grid(color='b', lw=2, ls='-')
plt.show()
state_values = {s : 0 for s in mdp.get_all_states()}
for i in range(10):
print("after iteration %i"%i)
state_values = value_iteration(mdp, state_values, num_iter=1)
draw_policy(mdp, state_values)
# please ignore iter 0 at each step
from IPython.display import clear_output
from time import sleep
mdp = FrozenLakeEnv(map_name='8x8',slip_chance=0.1)
state_values = {s : 0 for s in mdp.get_all_states()}
for i in range(30):
clear_output(True)
print("after iteration %i"%i)
state_values = value_iteration(mdp, state_values, num_iter=1)
draw_policy(mdp, state_values)
sleep(0.5)
# please ignore iter 0 at each step
###Output
after iteration 29
iter 0 | diff: 0.00000 | V(start): 0.198
Terminated
###Markdown
Massive tests
###Code
mdp = FrozenLakeEnv(slip_chance=0)
state_values = value_iteration(mdp)
total_rewards = []
for game_i in range(1000):
s = mdp.reset()
rewards = []
for t in range(100):
s, r, done, _ = mdp.step(get_optimal_action(mdp, state_values, s, gamma))
rewards.append(r)
if done: break
total_rewards.append(np.sum(rewards))
print("average reward: ", np.mean(total_rewards))
assert(1.0 <= np.mean(total_rewards) <= 1.0)
print("Well done!")
# Measure agent's average reward
mdp = FrozenLakeEnv(slip_chance=0.1)
state_values = value_iteration(mdp)
total_rewards = []
for game_i in range(1000):
s = mdp.reset()
rewards = []
for t in range(100):
s, r, done, _ = mdp.step(get_optimal_action(mdp, state_values, s, gamma))
rewards.append(r)
if done: break
total_rewards.append(np.sum(rewards))
print("average reward: ", np.mean(total_rewards))
assert(0.8 <= np.mean(total_rewards) <= 0.95)
print("Well done!")
# Measure agent's average reward
mdp = FrozenLakeEnv(slip_chance=0.25)
state_values = value_iteration(mdp)
total_rewards = []
for game_i in range(1000):
s = mdp.reset()
rewards = []
for t in range(100):
s, r, done, _ = mdp.step(get_optimal_action(mdp, state_values, s, gamma))
rewards.append(r)
if done: break
total_rewards.append(np.sum(rewards))
print("average reward: ", np.mean(total_rewards))
assert(0.6 <= np.mean(total_rewards) <= 0.7)
print("Well done!")
# Measure agent's average reward
mdp = FrozenLakeEnv(slip_chance=0.2, map_name='8x8')
state_values = value_iteration(mdp)
total_rewards = []
for game_i in range(1000):
s = mdp.reset()
rewards = []
for t in range(100):
s, r, done, _ = mdp.step(get_optimal_action(mdp, state_values, s, gamma))
rewards.append(r)
if done: break
total_rewards.append(np.sum(rewards))
print("average reward: ", np.mean(total_rewards))
assert(0.6 <= np.mean(total_rewards) <= 0.8)
print("Well done!")
###Output
iter 0 | diff: 0.80000 | V(start): 0.000
iter 1 | diff: 0.57600 | V(start): 0.000
iter 2 | diff: 0.41472 | V(start): 0.000
iter 3 | diff: 0.29860 | V(start): 0.000
iter 4 | diff: 0.24186 | V(start): 0.000
iter 5 | diff: 0.19349 | V(start): 0.000
iter 6 | diff: 0.15325 | V(start): 0.000
iter 7 | diff: 0.12288 | V(start): 0.000
iter 8 | diff: 0.09930 | V(start): 0.000
iter 9 | diff: 0.08037 | V(start): 0.000
iter 10 | diff: 0.06426 | V(start): 0.000
iter 11 | diff: 0.05129 | V(start): 0.000
iter 12 | diff: 0.04330 | V(start): 0.000
iter 13 | diff: 0.03802 | V(start): 0.033
iter 14 | diff: 0.03332 | V(start): 0.058
iter 15 | diff: 0.02910 | V(start): 0.087
iter 16 | diff: 0.01855 | V(start): 0.106
iter 17 | diff: 0.01403 | V(start): 0.120
iter 18 | diff: 0.00810 | V(start): 0.128
iter 19 | diff: 0.00555 | V(start): 0.133
iter 20 | diff: 0.00321 | V(start): 0.137
iter 21 | diff: 0.00247 | V(start): 0.138
iter 22 | diff: 0.00147 | V(start): 0.139
iter 23 | diff: 0.00104 | V(start): 0.140
iter 24 | diff: 0.00058 | V(start): 0.140
iter 25 | diff: 0.00036 | V(start): 0.141
iter 26 | diff: 0.00024 | V(start): 0.141
iter 27 | diff: 0.00018 | V(start): 0.141
iter 28 | diff: 0.00012 | V(start): 0.141
iter 29 | diff: 0.00007 | V(start): 0.141
iter 30 | diff: 0.00004 | V(start): 0.141
iter 31 | diff: 0.00003 | V(start): 0.141
iter 32 | diff: 0.00001 | V(start): 0.141
iter 33 | diff: 0.00001 | V(start): 0.141
Terminated
average reward: 0.739
Well done!
###Markdown
Submit to coursera
###Code
from submit import submit_assigment
submit_assigment(
get_action_value,
get_new_state_value,
get_optimal_action,
value_iteration,
'[email protected]',
'rOwd2RgHZtnySFKb')
###Output
iter 0 | diff: 0.75000 | V(start): 0.000
iter 1 | diff: 0.50625 | V(start): 0.000
iter 2 | diff: 0.39867 | V(start): 0.000
iter 3 | diff: 0.26910 | V(start): 0.000
iter 4 | diff: 0.18164 | V(start): 0.000
iter 5 | diff: 0.14013 | V(start): 0.140
iter 6 | diff: 0.07028 | V(start): 0.199
iter 7 | diff: 0.06030 | V(start): 0.260
iter 8 | diff: 0.02594 | V(start): 0.285
iter 9 | diff: 0.01918 | V(start): 0.305
iter 10 | diff: 0.00858 | V(start): 0.313
iter 11 | diff: 0.00560 | V(start): 0.319
iter 12 | diff: 0.00260 | V(start): 0.321
iter 13 | diff: 0.00159 | V(start): 0.323
iter 14 | diff: 0.00076 | V(start): 0.324
iter 15 | diff: 0.00045 | V(start): 0.324
iter 16 | diff: 0.00022 | V(start): 0.324
iter 17 | diff: 0.00012 | V(start): 0.325
iter 18 | diff: 0.00006 | V(start): 0.325
iter 19 | diff: 0.00003 | V(start): 0.325
iter 20 | diff: 0.00002 | V(start): 0.325
iter 21 | diff: 0.00001 | V(start): 0.325
Terminated
iter 0 | diff: 0.75000 | V(start): 0.000
iter 1 | diff: 0.50625 | V(start): 0.000
iter 2 | diff: 0.34172 | V(start): 0.000
iter 3 | diff: 0.23066 | V(start): 0.000
iter 4 | diff: 0.18164 | V(start): 0.000
iter 5 | diff: 0.14013 | V(start): 0.000
iter 6 | diff: 0.10641 | V(start): 0.000
iter 7 | diff: 0.08247 | V(start): 0.000
iter 8 | diff: 0.06464 | V(start): 0.000
iter 9 | diff: 0.05474 | V(start): 0.000
iter 10 | diff: 0.04729 | V(start): 0.000
iter 11 | diff: 0.04105 | V(start): 0.000
iter 12 | diff: 0.03516 | V(start): 0.000
iter 13 | diff: 0.02994 | V(start): 0.018
iter 14 | diff: 0.02535 | V(start): 0.035
iter 15 | diff: 0.02133 | V(start): 0.056
iter 16 | diff: 0.01610 | V(start): 0.072
iter 17 | diff: 0.01357 | V(start): 0.086
iter 18 | diff: 0.00912 | V(start): 0.095
iter 19 | diff: 0.00674 | V(start): 0.101
iter 20 | diff: 0.00440 | V(start): 0.106
iter 21 | diff: 0.00383 | V(start): 0.109
iter 22 | diff: 0.00252 | V(start): 0.111
iter 23 | diff: 0.00184 | V(start): 0.112
iter 24 | diff: 0.00116 | V(start): 0.113
iter 25 | diff: 0.00078 | V(start): 0.113
iter 26 | diff: 0.00061 | V(start): 0.113
iter 27 | diff: 0.00049 | V(start): 0.114
iter 28 | diff: 0.00037 | V(start): 0.114
iter 29 | diff: 0.00028 | V(start): 0.114
iter 30 | diff: 0.00022 | V(start): 0.114
iter 31 | diff: 0.00015 | V(start): 0.114
iter 32 | diff: 0.00010 | V(start): 0.114
iter 33 | diff: 0.00006 | V(start): 0.114
iter 34 | diff: 0.00004 | V(start): 0.114
iter 35 | diff: 0.00002 | V(start): 0.114
iter 36 | diff: 0.00001 | V(start): 0.114
iter 37 | diff: 0.00001 | V(start): 0.114
Terminated
###Markdown
Markov decision processThis week's methods are all built to solve __M__arkov __D__ecision __P__rocesses. In the broadest sense, an MDP is defined by how it changes states and how rewards are computed.State transition is defined by $P(s' |s,a)$ - how likely are you to end at state $s'$ if you take action $a$ from state $s$. Now there's more than one way to define rewards, but we'll use the $r(s,a,s')$ function for convenience._This notebook is inspired by the awesome_ [CS294](https://github.com/berkeleydeeprlcourse/homework/blob/36a0b58261acde756abd55306fbe63df226bf62b/hw2/HW2.ipynb) _by Berkeley_ For starters, let's define a simple MDP from this picture:
###Code
import sys, os
if 'google.colab' in sys.modules and not os.path.exists('.setup_complete'):
!wget -q https://raw.githubusercontent.com/yandexdataschool/Practical_RL/master/setup_colab.sh -O- | bash
!wget -q https://raw.githubusercontent.com/yandexdataschool/Practical_RL/coursera/grading.py -O ../grading.py
!wget -q https://raw.githubusercontent.com/yandexdataschool/Practical_RL/coursera/week2_model_based/submit.py
!wget -q https://raw.githubusercontent.com/yandexdataschool/Practical_RL/coursera/week2_model_based/mdp.py
!touch .setup_complete
# This code creates a virtual display to draw game images on.
# It will have no effect if your machine has a monitor.
if type(os.environ.get("DISPLAY")) is not str or len(os.environ.get("DISPLAY")) == 0:
!bash ../xvfb start
os.environ['DISPLAY'] = ':1'
transition_probs = {
's0': {
'a0': {'s0': 0.5, 's2': 0.5},
'a1': {'s2': 1}
},
's1': {
'a0': {'s0': 0.7, 's1': 0.1, 's2': 0.2},
'a1': {'s1': 0.95, 's2': 0.05}
},
's2': {
'a0': {'s0': 0.4, 's2': 0.6},
'a1': {'s0': 0.3, 's1': 0.3, 's2': 0.4}
}
}
rewards = {
's1': {'a0': {'s0': +5}},
's2': {'a1': {'s0': -1}}
}
from mdp import MDP
mdp = MDP(transition_probs, rewards, initial_state='s0')
###Output
_____no_output_____
###Markdown
We can now use MDP just as any other gym environment:
###Code
print('initial state =', mdp.reset())
next_state, reward, done, info = mdp.step('a1')
print('next_state = %s, reward = %s, done = %s' % (next_state, reward, done))
###Output
initial state = s0
next_state = s2, reward = 0.0, done = False
###Markdown
but it also has other methods that you'll need for Value Iteration
###Code
print("mdp.get_all_states =", mdp.get_all_states())
print("mdp.get_possible_actions('s1') = ", mdp.get_possible_actions('s1'))
print("mdp.get_next_states('s1', 'a0') = ", mdp.get_next_states('s1', 'a0'))
print("mdp.get_reward('s1', 'a0', 's0') = ", mdp.get_reward('s1', 'a0', 's0'))
print("mdp.get_transition_prob('s1', 'a0', 's0') = ", mdp.get_transition_prob('s1', 'a0', 's0'))
###Output
mdp.get_all_states = ('s0', 's1', 's2')
mdp.get_possible_actions('s1') = ('a0', 'a1')
mdp.get_next_states('s1', 'a0') = {'s0': 0.7, 's1': 0.1, 's2': 0.2}
mdp.get_reward('s1', 'a0', 's0') = 5
mdp.get_transition_prob('s1', 'a0', 's0') = 0.7
###Markdown
Optional: Visualizing MDPsYou can also visualize any MDP with the drawing function donated by [neer201](https://github.com/neer201).You have to install graphviz for your system and for Python. 1. * For ubuntu just run: `sudo apt-get install graphviz` * For OSX: `brew install graphviz`2. `pip install graphviz`3. restart the notebook__Note:__ Installing graphviz on some OS (esp. Windows) may be tricky. However, you can ignore this part altogether and use the standard visualization.
###Code
from mdp import has_graphviz
from IPython.display import display
print("Graphviz available:", has_graphviz)
if has_graphviz:
from mdp import plot_graph, plot_graph_with_state_values, plot_graph_optimal_strategy_and_state_values
display(plot_graph(mdp))
###Output
_____no_output_____
###Markdown
Value IterationNow let's build something to solve this MDP. The simplest algorithm so far is __V__alue __I__terationHere's the pseudo-code for VI:---`1.` Initialize $V^{(0)}(s)=0$, for all $s$`2.` For $i=0, 1, 2, \dots$ `3.` $ \quad V_{(i+1)}(s) = \max_a \sum_{s'} P(s' | s,a) \cdot [ r(s,a,s') + \gamma V_{i}(s')]$, for all $s$--- First, let's write a function to compute the state-action value function $Q^{\pi}$, defined as follows$$Q_i(s, a) = \sum_{s'} P(s' | s,a) \cdot [ r(s,a,s') + \gamma V_{i}(s')]$$
###Code
{s: i for i, s in enumerate(sorted(mdp.get_all_states()))}
def get_action_value(mdp, state_values, state, action, gamma):
""" Computes Q(s,a) as in formula above """
q = 0
for s, p in mdp.get_next_states(state,action).items():
q+=p*(mdp.get_reward(state,action,s) + gamma * state_values[s])
return q
import numpy as np
test_Vs = {s: i for i, s in enumerate(sorted(mdp.get_all_states()))}
assert np.isclose(get_action_value(mdp, test_Vs, 's2', 'a1', 0.9), 0.69)
assert np.isclose(get_action_value(mdp, test_Vs, 's1', 'a0', 0.9), 3.95)
###Output
_____no_output_____
###Markdown
Using $Q(s,a)$ we can now define the "next" V(s) for value iteration. $$V_{(i+1)}(s) = \max_a \sum_{s'} P(s' | s,a) \cdot [ r(s,a,s') + \gamma V_{i}(s')] = \max_a Q_i(s,a)$$
###Code
def get_new_state_value(mdp, state_values, state, gamma):
""" Computes next V(s) as in formula above. Please do not change state_values in process. """
if mdp.is_terminal(state):
return 0
max_v = []
for a in mdp.get_possible_actions(state):
        q_a = get_action_value(mdp, state_values, state, a, gamma)
max_v.append(q_a)
return max(max_v)
test_Vs_copy = dict(test_Vs)
assert np.isclose(get_new_state_value(mdp, test_Vs, 's0', 0.9), 1.8)
assert np.isclose(get_new_state_value(mdp, test_Vs, 's2', 0.9), 1.08)
assert np.isclose(get_new_state_value(mdp, {'s0': -1e10, 's1': 0, 's2': -2e10}, 's0', 0.9), -13500000000.0), \
"Please ensure that you handle negative Q-values of arbitrary magnitude correctly"
assert test_Vs == test_Vs_copy, "Please do not change state_values in get_new_state_value"
###Output
_____no_output_____
###Markdown
Finally, let's combine everything we wrote into a working value iteration algo.
###Code
# parameters
gamma = 0.9 # discount for MDP
num_iter = 100 # maximum iterations, excluding initialization
# stop VI if new values are this close to old values (or closer)
min_difference = 0.001
# initialize V(s)
state_values = {s: 0 for s in mdp.get_all_states()}
if has_graphviz:
display(plot_graph_with_state_values(mdp, state_values))
for i in range(num_iter):
# Compute new state values using the functions you defined above.
# It must be a dict {state : float V_new(state)}
new_state_values = {}
for s in state_values.keys():
        new_state_values[s] = get_new_state_value(mdp, state_values, s, gamma)
# print(new_state_values)
assert isinstance(new_state_values, dict)
# Compute difference
diff = max(abs(new_state_values[s] - state_values[s])
for s in mdp.get_all_states())
print("iter %4i | diff: %6.5f | " % (i, diff), end="")
print(' '.join("V(%s) = %.3f" % (s, v) for s, v in state_values.items()))
state_values = new_state_values
if diff < min_difference:
print("Terminated")
break
if has_graphviz:
display(plot_graph_with_state_values(mdp, state_values))
print("Final state values:", state_values)
assert abs(state_values['s0'] - 3.781) < 0.01
assert abs(state_values['s1'] - 7.294) < 0.01
assert abs(state_values['s2'] - 4.202) < 0.01
###Output
Final state values: {'s0': 3.7810348735476405, 's1': 7.294006423867229, 's2': 4.202140275227048}
###Markdown
Now let's use those $V^{*}(s)$ to find optimal actions in each state $$\pi^*(s) = argmax_a \sum_{s'} P(s' | s,a) \cdot [ r(s,a,s') + \gamma V_{i}(s')] = argmax_a Q_i(s,a)$$ The only difference vs V(s) is that here we take not max but argmax: find the action with the maximum Q(s,a).
###Code
def get_optimal_action(mdp, state_values, state, gamma=0.9):
""" Finds optimal action using formula above. """
if mdp.is_terminal(state):
return None
max_v = None
action = None
for a in mdp.get_possible_actions(state):
        q_a = get_action_value(mdp, state_values, state, a, gamma)
if max_v is None or max_v < q_a:
max_v = q_a
action = a
return action
assert get_optimal_action(mdp, state_values, 's0', gamma) == 'a1'
assert get_optimal_action(mdp, state_values, 's1', gamma) == 'a0'
assert get_optimal_action(mdp, state_values, 's2', gamma) == 'a1'
assert get_optimal_action(mdp, {'s0': -1e10, 's1': 0, 's2': -2e10}, 's0', 0.9) == 'a0', \
"Please ensure that you handle negative Q-values of arbitrary magnitude correctly"
assert get_optimal_action(mdp, {'s0': -2e10, 's1': 0, 's2': -1e10}, 's0', 0.9) == 'a1', \
"Please ensure that you handle negative Q-values of arbitrary magnitude correctly"
if has_graphviz:
display(plot_graph_optimal_strategy_and_state_values(mdp, state_values, get_action_value))
# Measure agent's average reward
s = mdp.reset()
rewards = []
for _ in range(10000):
s, r, done, _ = mdp.step(get_optimal_action(mdp, state_values, s, gamma))
rewards.append(r)
print("average reward: ", np.mean(rewards))
assert(0.40 < np.mean(rewards) < 0.55)
###Output
average reward: 0.4672
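###Markdown
Note why the loops above start the running maximum at `None` rather than `0`: with all-negative Q-values (exercised by the `-1e10` asserts above), a `0` seed would never be beaten and the argmax would silently break. A tiny self-contained sketch of the failure mode:
###Code
# Seeding the running maximum with 0 breaks when every Q-value is negative.
qs = {'a0': -2.0, 'a1': -5.0}
bad_best, bad_action = 0, None
for a, q in qs.items():
    if q > bad_best:  # never true for negative values
        bad_best, bad_action = q, a
print("seeded with 0:", bad_action)              # None -- wrong
print("seeded with None:", max(qs, key=qs.get))  # 'a0' -- correct
###Output
_____no_output_____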
###Markdown
Frozen lake
###Code
from mdp import FrozenLakeEnv
mdp = FrozenLakeEnv(slip_chance=0)
mdp.render()
def value_iteration(mdp, state_values=None, gamma=0.9, num_iter=1000, min_difference=1e-5):
""" performs num_iter value iteration steps starting from state_values. Same as before but in a function """
state_values = state_values or {s: 0 for s in mdp.get_all_states()}
for i in range(num_iter):
# Compute new state values using the functions you defined above. It must be a dict {state : new_V(state)}
new_state_values = {}
for s in state_values.keys():
            new_state_values[s] = get_new_state_value(mdp, state_values, s, gamma)
assert isinstance(new_state_values, dict)
# Compute difference
diff = max(abs(new_state_values[s] - state_values[s])
for s in mdp.get_all_states())
print("iter %4i | diff: %6.5f | V(start): %.3f " %
(i, diff, new_state_values[mdp._initial_state]))
state_values = new_state_values
if diff < min_difference:
break
return state_values
state_values = value_iteration(mdp)
f"{state_values[(3,2)]:.1f}"
s = mdp.reset()
mdp.render()
for t in range(100):
a = get_optimal_action(mdp, state_values, s, gamma)
print(a, end='\n\n')
s, r, done, _ = mdp.step(a)
mdp.render()
if done:
break
###Output
*FFF
FHFH
FFFH
HFFG
down
SFFF
*HFH
FFFH
HFFG
down
SFFF
FHFH
*FFH
HFFG
right
SFFF
FHFH
F*FH
HFFG
down
SFFF
FHFH
FFFH
H*FG
right
SFFF
FHFH
FFFH
HF*G
right
SFFF
FHFH
FFFH
HFF*
###Markdown
Let's visualize!It's usually interesting to see what your algorithm actually learned under the hood. To do so, we'll plot state value functions and optimal actions at each VI step.
###Code
import matplotlib.pyplot as plt
%matplotlib inline
def draw_policy(mdp, state_values):
plt.figure(figsize=(3, 3))
h, w = mdp.desc.shape
states = sorted(mdp.get_all_states())
V = np.array([state_values[s] for s in states])
Pi = {s: get_optimal_action(mdp, state_values, s, gamma) for s in states}
plt.imshow(V.reshape(w, h), cmap='gray', interpolation='none', clim=(0, 1))
ax = plt.gca()
ax.set_xticks(np.arange(h)-.5)
ax.set_yticks(np.arange(w)-.5)
ax.set_xticklabels([])
ax.set_yticklabels([])
# Y, X = np.mgrid[0:4, 0:4]
a2uv = {'left': (-1, 0), 'down': (0, -1), 'right': (1, 0), 'up': (0, 1)}
for y in range(h):
for x in range(w):
            plt.text(x, y, f"{mdp.desc[y, x].item()}:{state_values[(x, y)]:.1f}",
color='g', size=12, verticalalignment='center',
horizontalalignment='center', fontweight='bold')
a = Pi[y, x]
if a is None:
continue
u, v = a2uv[a]
plt.arrow(x, y, u*.3, -v*.3, color='m',
head_width=0.1, head_length=0.1)
plt.grid(color='b', lw=2, ls='-')
plt.show()
state_values = {s: 0 for s in mdp.get_all_states()}
for i in range(10):
print("after iteration %i" % i)
state_values = value_iteration(mdp, state_values, num_iter=1)
draw_policy(mdp, state_values)
# please ignore iter 0 at each step
from IPython.display import clear_output
from time import sleep
mdp = FrozenLakeEnv(map_name='8x8', slip_chance=0.1)
state_values = {s: 0 for s in mdp.get_all_states()}
for i in range(30):
clear_output(True)
print("after iteration %i" % i)
state_values = value_iteration(mdp, state_values, num_iter=1)
draw_policy(mdp, state_values)
sleep(0.5)
# please ignore iter 0 at each step
###Output
after iteration 29
iter 0 | diff: 0.00000 | V(start): 0.198
###Markdown
Massive tests
###Code
mdp = FrozenLakeEnv(slip_chance=0)
state_values = value_iteration(mdp)
total_rewards = []
for game_i in range(1000):
s = mdp.reset()
rewards = []
for t in range(100):
s, r, done, _ = mdp.step(
get_optimal_action(mdp, state_values, s, gamma))
rewards.append(r)
if done:
break
total_rewards.append(np.sum(rewards))
print("average reward: ", np.mean(total_rewards))
assert(1.0 <= np.mean(total_rewards) <= 1.0)
print("Well done!")
# Measure agent's average reward
mdp = FrozenLakeEnv(slip_chance=0.1)
state_values = value_iteration(mdp)
total_rewards = []
for game_i in range(1000):
s = mdp.reset()
rewards = []
for t in range(100):
s, r, done, _ = mdp.step(
get_optimal_action(mdp, state_values, s, gamma))
rewards.append(r)
if done:
break
total_rewards.append(np.sum(rewards))
print("average reward: ", np.mean(total_rewards))
assert(0.8 <= np.mean(total_rewards) <= 0.95)
print("Well done!")
# Measure agent's average reward
mdp = FrozenLakeEnv(slip_chance=0.25)
state_values = value_iteration(mdp)
total_rewards = []
for game_i in range(1000):
s = mdp.reset()
rewards = []
for t in range(100):
s, r, done, _ = mdp.step(
get_optimal_action(mdp, state_values, s, gamma))
rewards.append(r)
if done:
break
total_rewards.append(np.sum(rewards))
print("average reward: ", np.mean(total_rewards))
assert(0.6 <= np.mean(total_rewards) <= 0.7)
print("Well done!")
# Measure agent's average reward
mdp = FrozenLakeEnv(slip_chance=0.2, map_name='8x8')
state_values = value_iteration(mdp)
total_rewards = []
for game_i in range(1000):
s = mdp.reset()
rewards = []
for t in range(100):
s, r, done, _ = mdp.step(
get_optimal_action(mdp, state_values, s, gamma))
rewards.append(r)
if done:
break
total_rewards.append(np.sum(rewards))
print("average reward: ", np.mean(total_rewards))
assert(0.6 <= np.mean(total_rewards) <= 0.8)
print("Well done!")
###Output
iter 0 | diff: 0.80000 | V(start): 0.000
iter 1 | diff: 0.57600 | V(start): 0.000
iter 2 | diff: 0.41472 | V(start): 0.000
iter 3 | diff: 0.29860 | V(start): 0.000
iter 4 | diff: 0.24186 | V(start): 0.000
iter 5 | diff: 0.19349 | V(start): 0.000
iter 6 | diff: 0.15325 | V(start): 0.000
iter 7 | diff: 0.12288 | V(start): 0.000
iter 8 | diff: 0.09930 | V(start): 0.000
iter 9 | diff: 0.08037 | V(start): 0.000
iter 10 | diff: 0.06426 | V(start): 0.000
iter 11 | diff: 0.05129 | V(start): 0.000
iter 12 | diff: 0.04330 | V(start): 0.000
iter 13 | diff: 0.03802 | V(start): 0.033
iter 14 | diff: 0.03332 | V(start): 0.058
iter 15 | diff: 0.02910 | V(start): 0.087
iter 16 | diff: 0.01855 | V(start): 0.106
iter 17 | diff: 0.01403 | V(start): 0.120
iter 18 | diff: 0.00810 | V(start): 0.128
iter 19 | diff: 0.00555 | V(start): 0.133
iter 20 | diff: 0.00321 | V(start): 0.137
iter 21 | diff: 0.00247 | V(start): 0.138
iter 22 | diff: 0.00147 | V(start): 0.139
iter 23 | diff: 0.00104 | V(start): 0.140
iter 24 | diff: 0.00058 | V(start): 0.140
iter 25 | diff: 0.00036 | V(start): 0.141
iter 26 | diff: 0.00024 | V(start): 0.141
iter 27 | diff: 0.00018 | V(start): 0.141
iter 28 | diff: 0.00012 | V(start): 0.141
iter 29 | diff: 0.00007 | V(start): 0.141
iter 30 | diff: 0.00004 | V(start): 0.141
iter 31 | diff: 0.00003 | V(start): 0.141
iter 32 | diff: 0.00001 | V(start): 0.141
iter 33 | diff: 0.00001 | V(start): 0.141
average reward: 0.729
Well done!
###Markdown
Submit to courseraIf your submission doesn't finish in 30 seconds, set `verbose=True` and try again.
###Code
from submit import submit_assigment
submit_assigment(
get_action_value,
get_new_state_value,
get_optimal_action,
value_iteration,
'[email protected]',
'syIAF2jgbRAd7VGB',
verbose=False,
)
###Output
Submitted to Coursera platform. See results on assignment page!
|
colabs/bigquery_census_correlate.ipynb | ###Markdown
1. Install DependenciesFirst install the libraries needed to execute recipes, this only needs to be done once, then click play.
###Code
!pip install git+https://github.com/google/starthinker
###Output
_____no_output_____
###Markdown
2. Get Cloud Project IDTo run this recipe [requires a Google Cloud Project](https://github.com/google/starthinker/blob/master/tutorials/cloud_project.md), this only needs to be done once, then click play.
###Code
CLOUD_PROJECT = 'PASTE PROJECT ID HERE'
print("Cloud Project Set To: %s" % CLOUD_PROJECT)
###Output
_____no_output_____
###Markdown
3. Get Client CredentialsTo read and write to various endpoints requires [downloading client credentials](https://github.com/google/starthinker/blob/master/tutorials/cloud_client_installed.md), this only needs to be done once, then click play.
###Code
CLIENT_CREDENTIALS = 'PASTE CREDENTIALS HERE'
print("Client Credentials Set To: %s" % CLIENT_CREDENTIALS)
###Output
_____no_output_____
###Markdown
4. Enter Census Data Correlation ParametersCorrelate another table with US Census data. Expands a data set's dimensions by finding population segments that correlate with the master table. 1. The pre-requisite is Census Normalize; run that at least once. 1. Specify JOIN, PASS, SUM, and CORRELATE columns to build the correlation query. 1. Define the DATASET and TABLE for the joinable source. Can be a view. 1. Choose the significance level. More significance usually means more NULL results; balance quantity and quality using this value. 1. Specify where to write the results. 1. IMPORTANT: If you use VIEWS, you will have to delete them manually if the recipe changes.Modify the values below for your use case (this can be done multiple times), then click play.
###Code
FIELDS = {
'auth': 'service', # Credentials used for writing data.
'join': '', # Name of column to join on, must match Census Geo_Id column.
  'pass': [], # Comma separated list of columns to pass through.
  'sum': [], # Comma separated list of columns to sum, optional.
  'correlate': [], # Comma separated list of percentage columns to correlate.
'from_dataset': '', # Existing BigQuery dataset.
'from_table': '', # Table to use as join data.
'significance': '80', # Select level of significance to test.
'to_dataset': '', # Existing BigQuery dataset.
'type': 'table', # Write Census_Percent as table or view.
}
print("Parameters Set To: %s" % FIELDS)
###Output
_____no_output_____
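###Markdown
For concreteness, here is a purely hypothetical filled-in configuration; every dataset, table, and column name below is invented for illustration and must be replaced with names from your own project.
###Code
# Hypothetical example values -- all names below are made up for illustration.
FIELDS = {
  'auth': 'service',
  'join': 'geo_id', # assumed column matching the Census Geo_Id column
  'pass': ['store_name'], # assumed columns carried through unchanged
  'sum': ['sales'], # assumed columns summed per geography
  'correlate': ['conversion_rate'], # assumed percentage columns to correlate
  'from_dataset': 'my_dataset', # assumed existing BigQuery dataset
  'from_table': 'sales_by_zip', # assumed joinable table or view
  'significance': '80',
  'to_dataset': 'my_dataset',
  'type': 'table',
}
print("Parameters Set To: %s" % FIELDS)
###Output
_____no_output_____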
###Markdown
5. Execute Census Data CorrelationThis does NOT need to be modified unless you are changing the recipe, click play.
###Code
from starthinker.util.project import project
from starthinker.script.parse import json_set_fields
USER_CREDENTIALS = '/content/user.json'
TASKS = [
{
'census': {
'auth': 'user',
'correlate': {
'join': {'field': {'name': 'join','kind': 'string','order': 1,'default': '','description': 'Name of column to join on, must match Census Geo_Id column.'}},
        'pass': {'field': {'name': 'pass','kind': 'string_list','order': 2,'default': [],'description': 'Comma separated list of columns to pass through.'}},
        'sum': {'field': {'name': 'sum','kind': 'string_list','order': 3,'default': [],'description': 'Comma separated list of columns to sum, optional.'}},
        'correlate': {'field': {'name': 'correlate','kind': 'string_list','order': 4,'default': [],'description': 'Comma separated list of percentage columns to correlate.'}},
'dataset': {'field': {'name': 'from_dataset','kind': 'string','order': 5,'default': '','description': 'Existing BigQuery dataset.'}},
'table': {'field': {'name': 'from_table','kind': 'string','order': 6,'default': '','description': 'Table to use as join data.'}},
'significance': {'field': {'name': 'significance','kind': 'choice','order': 7,'default': '80','description': 'Select level of significance to test.','choices': ['80','90','98','99','99.5','99.95']}}
},
'to': {
'dataset': {'field': {'name': 'to_dataset','kind': 'string','order': 9,'default': '','description': 'Existing BigQuery dataset.'}},
'type': {'field': {'name': 'type','kind': 'choice','order': 10,'default': 'table','description': 'Write Census_Percent as table or view.','choices': ['table','view']}}
}
}
}
]
json_set_fields(TASKS, FIELDS)
project.initialize(_recipe={ 'tasks':TASKS }, _project=CLOUD_PROJECT, _user=USER_CREDENTIALS, _client=CLIENT_CREDENTIALS, _verbose=True, _force=True)
project.execute(_force=True)
###Output
_____no_output_____
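###Markdown
As an aside on what `json_set_fields` conceptually does (a simplified sketch, not StarThinker's actual implementation): it walks the recipe structure and swaps every `{'field': {...}}` placeholder for the matching value in `FIELDS`, falling back to the placeholder's declared default.
###Code
# Simplified, illustrative version of the field substitution idea; the real
# json_set_fields lives in starthinker and mutates TASKS in place.
def substitute_fields(node, fields):
  if isinstance(node, dict):
    if set(node.keys()) == {'field'}: # a placeholder like {'field': {...}}
      spec = node['field']
      return fields.get(spec['name'], spec.get('default'))
    return {k: substitute_fields(v, fields) for k, v in node.items()}
  if isinstance(node, list):
    return [substitute_fields(v, fields) for v in node]
  return node
###Output
_____no_output_____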
###Markdown
1. Install DependenciesFirst install the libraries needed to execute recipes, this only needs to be done once, then click play.
###Code
!pip install git+https://github.com/google/starthinker
###Output
_____no_output_____
###Markdown
2. Get Cloud Project IDTo run this recipe [requires a Google Cloud Project](https://github.com/google/starthinker/blob/master/tutorials/cloud_project.md), this only needs to be done once, then click play.
###Code
CLOUD_PROJECT = 'PASTE PROJECT ID HERE'
print("Cloud Project Set To: %s" % CLOUD_PROJECT)
###Output
_____no_output_____
###Markdown
3. Get Client CredentialsTo read and write to various endpoints requires [downloading client credentials](https://github.com/google/starthinker/blob/master/tutorials/cloud_client_installed.md), this only needs to be done once, then click play.
###Code
CLIENT_CREDENTIALS = 'PASTE CREDENTIALS HERE'
print("Client Credentials Set To: %s" % CLIENT_CREDENTIALS)
###Output
_____no_output_____
###Markdown
4. Enter Census Data Correlation ParametersCorrelate another table with US Census data. Expands a data set's dimensions by finding population segments that correlate with the master table. 1. The pre-requisite is Census Normalize; run that at least once. 1. Specify JOIN, PASS, SUM, and CORRELATE columns to build the correlation query. 1. Define the DATASET and TABLE for the joinable source. Can be a view. 1. Choose the significance level. More significance usually means more NULL results; balance quantity and quality using this value. 1. Specify where to write the results. 1. IMPORTANT: If you use VIEWS, you will have to delete them manually if the recipe changes.Modify the values below for your use case (this can be done multiple times), then click play.
###Code
FIELDS = {
'auth': 'service', # Credentials used for writing data.
'join': '', # Name of column to join on, must match Census Geo_Id column.
  'pass': [], # Comma separated list of columns to pass through.
  'sum': [], # Comma separated list of columns to sum, optional.
  'correlate': [], # Comma separated list of percentage columns to correlate.
'from_dataset': '', # Existing BigQuery dataset.
'from_table': '', # Table to use as join data.
'significance': '80', # Select level of significance to test.
'to_dataset': '', # Existing BigQuery dataset.
'type': 'table', # Write Census_Percent as table or view.
}
print("Parameters Set To: %s" % FIELDS)
###Output
_____no_output_____
###Markdown
5. Execute Census Data CorrelationThis does NOT need to be modified unless you are changing the recipe, click play.
###Code
from starthinker.util.project import project
from starthinker.script.parse import json_set_fields
USER_CREDENTIALS = '/content/user.json'
TASKS = [
{
'census': {
'auth': 'user',
'correlate': {
'join': {'field': {'name': 'join','kind': 'string','order': 1,'default': '','description': 'Name of column to join on, must match Census Geo_Id column.'}},
        'pass': {'field': {'name': 'pass','kind': 'string_list','order': 2,'default': [],'description': 'Comma separated list of columns to pass through.'}},
        'sum': {'field': {'name': 'sum','kind': 'string_list','order': 3,'default': [],'description': 'Comma separated list of columns to sum, optional.'}},
        'correlate': {'field': {'name': 'correlate','kind': 'string_list','order': 4,'default': [],'description': 'Comma separated list of percentage columns to correlate.'}},
'dataset': {'field': {'name': 'from_dataset','kind': 'string','order': 5,'default': '','description': 'Existing BigQuery dataset.'}},
'table': {'field': {'name': 'from_table','kind': 'string','order': 6,'default': '','description': 'Table to use as join data.'}},
'significance': {'field': {'name': 'significance','kind': 'choice','order': 7,'default': '80','description': 'Select level of significance to test.','choices': ['80','90','98','99','99.5','99.95']}}
},
'to': {
'dataset': {'field': {'name': 'to_dataset','kind': 'string','order': 9,'default': '','description': 'Existing BigQuery dataset.'}},
'type': {'field': {'name': 'type','kind': 'choice','order': 10,'default': 'table','description': 'Write Census_Percent as table or view.','choices': ['table','view']}}
}
}
}
]
json_set_fields(TASKS, FIELDS)
project.initialize(_recipe={ 'tasks':TASKS }, _project=CLOUD_PROJECT, _user=USER_CREDENTIALS, _client=CLIENT_CREDENTIALS, _verbose=True, _force=True)
project.execute(_force=True)
###Output
_____no_output_____
###Markdown
1. Install DependenciesFirst install the libraries needed to execute recipes, this only needs to be done once, then click play.
###Code
!pip install git+https://github.com/google/starthinker
###Output
_____no_output_____
###Markdown
2. Get Cloud Project IDTo run this recipe [requires a Google Cloud Project](https://github.com/google/starthinker/blob/master/tutorials/cloud_project.md), this only needs to be done once, then click play.
###Code
CLOUD_PROJECT = 'PASTE PROJECT ID HERE'
print("Cloud Project Set To: %s" % CLOUD_PROJECT)
###Output
_____no_output_____
###Markdown
3. Get Client CredentialsTo read and write to various endpoints requires [downloading client credentials](https://github.com/google/starthinker/blob/master/tutorials/cloud_client_installed.md), this only needs to be done once, then click play.
###Code
CLIENT_CREDENTIALS = 'PASTE CREDENTIALS HERE'
print("Client Credentials Set To: %s" % CLIENT_CREDENTIALS)
###Output
_____no_output_____
###Markdown
4. Enter Census Data Correlation ParametersCorrelate another table with US Census data. Expands a data set's dimensions by finding population segments that correlate with the master table. 1. The pre-requisite is Census Normalize; run that at least once. 1. Specify JOIN, PASS, SUM, and CORRELATE columns to build the correlation query. 1. Define the DATASET and TABLE for the joinable source. Can be a view. 1. Choose the significance level. More significance usually means more NULL results; balance quantity and quality using this value. 1. Specify where to write the results. 1. IMPORTANT: If you use VIEWS, you will have to delete them manually if the recipe changes.Modify the values below for your use case (this can be done multiple times), then click play.
###Code
FIELDS = {
'auth': 'service', # Credentials used for writing data.
'join': '', # Name of column to join on, must match Census Geo_Id column.
  'pass': [], # Comma separated list of columns to pass through.
  'sum': [], # Comma separated list of columns to sum, optional.
  'correlate': [], # Comma separated list of percentage columns to correlate.
'from_dataset': '', # Existing BigQuery dataset.
'from_table': '', # Table to use as join data.
'significance': '80', # Select level of significance to test.
'to_dataset': '', # Existing BigQuery dataset.
'type': 'table', # Write Census_Percent as table or view.
}
print("Parameters Set To: %s" % FIELDS)
###Output
_____no_output_____
###Markdown
5. Execute Census Data CorrelationThis does NOT need to be modified unless you are changing the recipe, click play.
###Code
from starthinker.util.project import project
from starthinker.script.parse import json_set_fields
USER_CREDENTIALS = '/content/user.json'
TASKS = [
{
'census': {
'auth': 'user',
'correlate': {
'join': {'field': {'name': 'join','kind': 'string','order': 1,'default': '','description': 'Name of column to join on, must match Census Geo_Id column.'}},
        'pass': {'field': {'name': 'pass','kind': 'string_list','order': 2,'default': [],'description': 'Comma separated list of columns to pass through.'}},
        'sum': {'field': {'name': 'sum','kind': 'string_list','order': 3,'default': [],'description': 'Comma separated list of columns to sum, optional.'}},
        'correlate': {'field': {'name': 'correlate','kind': 'string_list','order': 4,'default': [],'description': 'Comma separated list of percentage columns to correlate.'}},
'dataset': {'field': {'name': 'from_dataset','kind': 'string','order': 5,'default': '','description': 'Existing BigQuery dataset.'}},
'table': {'field': {'name': 'from_table','kind': 'string','order': 6,'default': '','description': 'Table to use as join data.'}},
'significance': {'field': {'name': 'significance','kind': 'choice','order': 7,'default': '80','description': 'Select level of significance to test.','choices': ['80','90','98','99','99.5','99.95']}}
},
'to': {
'dataset': {'field': {'name': 'to_dataset','kind': 'string','order': 9,'default': '','description': 'Existing BigQuery dataset.'}},
'type': {'field': {'name': 'type','kind': 'choice','order': 10,'default': 'table','description': 'Write Census_Percent as table or view.','choices': ['table','view']}}
}
}
}
]
json_set_fields(TASKS, FIELDS)
project.initialize(_recipe={ 'tasks':TASKS }, _project=CLOUD_PROJECT, _user=USER_CREDENTIALS, _client=CLIENT_CREDENTIALS, _verbose=True, _force=True)
project.execute(_force=True)
###Output
_____no_output_____
###Markdown
1. Install DependenciesFirst install the libraries needed to execute recipes, this only needs to be done once, then click play.
###Code
!pip install git+https://github.com/google/starthinker
###Output
_____no_output_____
###Markdown
2. Get Cloud Project IDTo run this recipe [requires a Google Cloud Project](https://github.com/google/starthinker/blob/master/tutorials/cloud_project.md), this only needs to be done once, then click play.
###Code
CLOUD_PROJECT = 'PASTE PROJECT ID HERE'
print("Cloud Project Set To: %s" % CLOUD_PROJECT)
###Output
_____no_output_____
###Markdown
3. Get Client CredentialsTo read and write to various endpoints requires [downloading client credentials](https://github.com/google/starthinker/blob/master/tutorials/cloud_client_installed.md), this only needs to be done once, then click play.
###Code
CLIENT_CREDENTIALS = 'PASTE CREDENTIALS HERE'
print("Client Credentials Set To: %s" % CLIENT_CREDENTIALS)
###Output
_____no_output_____
###Markdown
4. Enter Census Data Correlation ParametersCorrelate another table with US Census data. Expands a data set's dimensions by finding population segments that correlate with the master table. 1. The pre-requisite is Census Normalize; run that at least once. 1. Specify JOIN, PASS, SUM, and CORRELATE columns to build the correlation query. 1. Define the DATASET and TABLE for the joinable source. Can be a view. 1. Choose the significance level. More significance usually means more NULL results; balance quantity and quality using this value. 1. Specify where to write the results. 1. IMPORTANT: If you use VIEWS, you will have to delete them manually if the recipe changes.Modify the values below for your use case (this can be done multiple times), then click play.
###Code
FIELDS = {
'auth': 'service', # Credentials used for writing data.
'join': '', # Name of column to join on, must match Census Geo_Id column.
  'pass': [], # Comma separated list of columns to pass through.
  'sum': [], # Comma separated list of columns to sum, optional.
  'correlate': [], # Comma separated list of percentage columns to correlate.
'from_dataset': '', # Existing BigQuery dataset.
'from_table': '', # Table to use as join data.
'significance': '80', # Select level of significance to test.
'to_dataset': '', # Existing BigQuery dataset.
'type': 'table', # Write Census_Percent as table or view.
}
print("Parameters Set To: %s" % FIELDS)
###Output
_____no_output_____
###Markdown
5. Execute Census Data CorrelationThis does NOT need to be modified unless you are changing the recipe, click play.
###Code
from starthinker.util.configuration import Configuration
from starthinker.util.configuration import commandline_parser
from starthinker.util.configuration import execute
from starthinker.util.recipe import json_set_fields
USER_CREDENTIALS = '/content/user.json'
TASKS = [
{
'census': {
'auth': 'user',
'correlate': {
'join': {'field': {'name': 'join','kind': 'string','order': 1,'default': '','description': 'Name of column to join on, must match Census Geo_Id column.'}},
        'pass': {'field': {'name': 'pass','kind': 'string_list','order': 2,'default': [],'description': 'Comma separated list of columns to pass through.'}},
        'sum': {'field': {'name': 'sum','kind': 'string_list','order': 3,'default': [],'description': 'Comma separated list of columns to sum, optional.'}},
        'correlate': {'field': {'name': 'correlate','kind': 'string_list','order': 4,'default': [],'description': 'Comma separated list of percentage columns to correlate.'}},
'dataset': {'field': {'name': 'from_dataset','kind': 'string','order': 5,'default': '','description': 'Existing BigQuery dataset.'}},
'table': {'field': {'name': 'from_table','kind': 'string','order': 6,'default': '','description': 'Table to use as join data.'}},
'significance': {'field': {'name': 'significance','kind': 'choice','order': 7,'default': '80','description': 'Select level of significance to test.','choices': ['80','90','98','99','99.5','99.95']}}
},
'to': {
'dataset': {'field': {'name': 'to_dataset','kind': 'string','order': 9,'default': '','description': 'Existing BigQuery dataset.'}},
'type': {'field': {'name': 'type','kind': 'choice','order': 10,'default': 'table','description': 'Write Census_Percent as table or view.','choices': ['table','view']}}
}
}
}
]
json_set_fields(TASKS, FIELDS)
execute(Configuration(project=CLOUD_PROJECT, client=CLIENT_CREDENTIALS, user=USER_CREDENTIALS, verbose=True), TASKS, force=True)
###Output
_____no_output_____
###Markdown
1. Install DependenciesFirst install the libraries needed to execute recipes, this only needs to be done once, then click play.
###Code
!pip install git+https://github.com/google/starthinker
###Output
_____no_output_____
###Markdown
2. Get Cloud Project IDTo run this recipe [requires a Google Cloud Project](https://github.com/google/starthinker/blob/master/tutorials/cloud_project.md), this only needs to be done once, then click play.
###Code
CLOUD_PROJECT = 'PASTE PROJECT ID HERE'
print("Cloud Project Set To: %s" % CLOUD_PROJECT)
###Output
_____no_output_____
###Markdown
3. Get Client CredentialsTo read and write to various endpoints requires [downloading client credentials](https://github.com/google/starthinker/blob/master/tutorials/cloud_client_installed.md), this only needs to be done once, then click play.
###Code
CLIENT_CREDENTIALS = 'PASTE CLIENT CREDENTIALS HERE'
print("Client Credentials Set To: %s" % CLIENT_CREDENTIALS)
###Output
_____no_output_____
###Markdown
4. Enter Census Data Correlation ParametersCorrelate another table with US Census data. Expands a data set's dimensions by finding population segments that correlate with the master table. 1. The pre-requisite is Census Normalize; run that at least once. 1. Specify JOIN, PASS, SUM, and CORRELATE columns to build the correlation query. 1. Define the DATASET and TABLE for the joinable source. Can be a view. 1. Choose the significance level. More significance usually means more NULL results; balance quantity and quality using this value. 1. Specify where to write the results. 1. IMPORTANT: If you use VIEWS, you will have to delete them manually if the recipe changes.Modify the values below for your use case (this can be done multiple times), then click play.
###Code
FIELDS = {
'auth': 'service', # Credentials used for writing data.
'join': '', # Name of column to join on, must match Census Geo_Id column.
  'pass': [], # Comma separated list of columns to pass through.
  'sum': [], # Comma separated list of columns to sum, optional.
  'correlate': [], # Comma separated list of percentage columns to correlate.
'from_dataset': '', # Existing BigQuery dataset.
'from_table': '', # Table to use as join data.
'significance': '80', # Select level of significance to test.
'to_dataset': '', # Existing BigQuery dataset.
'type': 'table', # Write Census_Percent as table or view.
}
print("Parameters Set To: %s" % FIELDS)
###Output
_____no_output_____
###Markdown
5. Execute Census Data CorrelationThis does NOT need to be modified unless you are changing the recipe, click play.
###Code
from starthinker.util.configuration import Configuration
from starthinker.util.configuration import execute
from starthinker.util.recipe import json_set_fields
USER_CREDENTIALS = '/content/user.json'
TASKS = [
{
'census': {
'auth': 'user',
'correlate': {
'join': {'field': {'name': 'join','kind': 'string','order': 1,'default': '','description': 'Name of column to join on, must match Census Geo_Id column.'}},
        'pass': {'field': {'name': 'pass','kind': 'string_list','order': 2,'default': [],'description': 'Comma separated list of columns to pass through.'}},
        'sum': {'field': {'name': 'sum','kind': 'string_list','order': 3,'default': [],'description': 'Comma separated list of columns to sum, optional.'}},
        'correlate': {'field': {'name': 'correlate','kind': 'string_list','order': 4,'default': [],'description': 'Comma separated list of percentage columns to correlate.'}},
'dataset': {'field': {'name': 'from_dataset','kind': 'string','order': 5,'default': '','description': 'Existing BigQuery dataset.'}},
'table': {'field': {'name': 'from_table','kind': 'string','order': 6,'default': '','description': 'Table to use as join data.'}},
'significance': {'field': {'name': 'significance','kind': 'choice','order': 7,'default': '80','description': 'Select level of significance to test.','choices': ['80','90','98','99','99.5','99.95']}}
},
'to': {
'dataset': {'field': {'name': 'to_dataset','kind': 'string','order': 9,'default': '','description': 'Existing BigQuery dataset.'}},
'type': {'field': {'name': 'type','kind': 'choice','order': 10,'default': 'table','description': 'Write Census_Percent as table or view.','choices': ['table','view']}}
}
}
}
]
json_set_fields(TASKS, FIELDS)
execute(Configuration(project=CLOUD_PROJECT, client=CLIENT_CREDENTIALS, user=USER_CREDENTIALS, verbose=True), TASKS, force=True)
###Output
_____no_output_____
###Markdown
Census Data CorrelationCorrelate another table with US Census data. Expands a data set's dimensions by finding population segments that correlate with the master table. LicenseCopyright 2020 Google LLC,Licensed under the Apache License, Version 2.0 (the "License");you may not use this file except in compliance with the License.You may obtain a copy of the License at https://www.apache.org/licenses/LICENSE-2.0 Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS,WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.See the License for the specific language governing permissions and limitations under the License. DisclaimerThis is not an officially supported Google product. It is a reference implementation. There is absolutely NO WARRANTY provided for using this code. The code is Apache Licensed and CAN BE fully modified, white labeled, and disassembled by your team.This code was generated (see starthinker/scripts for possible source): - **Command**: "python starthinker_ui/manage.py colab" - **Command**: "python starthinker/tools/colab.py [JSON RECIPE]" 1. Install DependenciesFirst install the libraries needed to execute recipes, this only needs to be done once, then click play.
###Code
!pip install git+https://github.com/google/starthinker
###Output
_____no_output_____
###Markdown
2. Set Configuration

This code is required to initialize the project. Fill in required fields and press play.

1. If the recipe uses a Google Cloud Project:
   - Set the configuration **project** value to the project identifier from [these instructions](https://github.com/google/starthinker/blob/master/tutorials/cloud_project.md).
1. If the recipe has **auth** set to **user**:
   - If you have user credentials:
     - Set the configuration **user** value to your user credentials JSON.
   - If you DO NOT have user credentials:
     - Set the configuration **client** value to [downloaded client credentials](https://github.com/google/starthinker/blob/master/tutorials/cloud_client_installed.md).
1. If the recipe has **auth** set to **service**:
   - Set the configuration **service** value to [downloaded service credentials](https://github.com/google/starthinker/blob/master/tutorials/cloud_service.md).
###Code
from starthinker.util.configuration import Configuration
CONFIG = Configuration(
project="",
client={},
service={},
user="/content/user.json",
verbose=True
)
###Output
_____no_output_____
###Markdown
3. Enter Census Data Correlation Recipe Parameters

1. The pre-requisite is Census Normalize; run that at least once.
1. Specify JOIN, PASS, SUM, and CORRELATE columns to build the correlation query.
1. Define the DATASET and TABLE for the joinable source. Can be a view.
1. Choose the significance level. More significance usually means more NULL results; balance quantity and quality using this value.
1. Specify where to write the results.
1. IMPORTANT: If you use VIEWS, you will have to delete them manually if the recipe changes.

Modify the values below for your use case; this can be done multiple times, then click play.
###Code
FIELDS = {
'auth': 'service', # Credentials used for writing data.
'join': '', # Name of column to join on, must match Census Geo_Id column.
'pass': [], # Comma separated list of columns to pass through.
'sum': [], # Comma separated list of columns to sum, optional.
'correlate': [], # Comma separated list of percentage columns to correlate.
'from_dataset': '', # Existing BigQuery dataset.
'from_table': '', # Table to use as join data.
'significance': '80', # Select level of significance to test.
'to_dataset': '', # Existing BigQuery dataset.
'type': 'table', # Write Census_Percent as table or view.
}
print("Parameters Set To: %s" % FIELDS)
###Output
_____no_output_____
###Markdown
4. Execute Census Data Correlation. This does NOT need to be modified unless you are changing the recipe, click play.
###Code
from starthinker.util.configuration import execute
from starthinker.util.recipe import json_set_fields
TASKS = [
{
'census': {
'auth': 'user',
'correlate': {
'join': {'field': {'name': 'join', 'kind': 'string', 'order': 1, 'default': '', 'description': 'Name of column to join on, must match Census Geo_Id column.'}},
'pass': {'field': {'name': 'pass', 'kind': 'string_list', 'order': 2, 'default': [], 'description': 'Comma separated list of columns to pass through.'}},
'sum': {'field': {'name': 'sum', 'kind': 'string_list', 'order': 3, 'default': [], 'description': 'Comma separated list of columns to sum, optional.'}},
'correlate': {'field': {'name': 'correlate', 'kind': 'string_list', 'order': 4, 'default': [], 'description': 'Comma separated list of percentage columns to correlate.'}},
'dataset': {'field': {'name': 'from_dataset', 'kind': 'string', 'order': 5, 'default': '', 'description': 'Existing BigQuery dataset.'}},
'table': {'field': {'name': 'from_table', 'kind': 'string', 'order': 6, 'default': '', 'description': 'Table to use as join data.'}},
'significance': {'field': {'name': 'significance', 'kind': 'choice', 'order': 7, 'default': '80', 'description': 'Select level of significance to test.', 'choices': ['80', '90', '98', '99', '99.5', '99.95']}}
},
'to': {
'dataset': {'field': {'name': 'to_dataset', 'kind': 'string', 'order': 9, 'default': '', 'description': 'Existing BigQuery dataset.'}},
'type': {'field': {'name': 'type', 'kind': 'choice', 'order': 10, 'default': 'table', 'description': 'Write Census_Percent as table or view.', 'choices': ['table', 'view']}}
}
}
}
]
json_set_fields(TASKS, FIELDS)
execute(CONFIG, TASKS, force=True)
###Output
_____no_output_____
###Markdown
1. Install Dependencies. First install the libraries needed to execute recipes; this only needs to be done once, then click play.
###Code
!pip install git+https://github.com/google/starthinker
###Output
_____no_output_____
###Markdown
2. Get Cloud Project ID. Running this recipe [requires a Google Cloud Project](https://github.com/google/starthinker/blob/master/tutorials/cloud_project.md); this only needs to be done once, then click play.
###Code
CLOUD_PROJECT = 'PASTE PROJECT ID HERE'
print("Cloud Project Set To: %s" % CLOUD_PROJECT)
###Output
_____no_output_____
###Markdown
3. Get Client Credentials. Reading from and writing to various endpoints requires [downloading client credentials](https://github.com/google/starthinker/blob/master/tutorials/cloud_client_installed.md); this only needs to be done once, then click play.
###Code
CLIENT_CREDENTIALS = 'PASTE CREDENTIALS HERE'
print("Client Credentials Set To: %s" % CLIENT_CREDENTIALS)
###Output
_____no_output_____
###Markdown
4. Enter Census Data Correlation Parameters

Correlate another table with US Census data. Expands a data set's dimensions by finding population segments that correlate with the master table.

1. The pre-requisite is Census Normalize; run that at least once.
1. Specify JOIN, PASS, SUM, and CORRELATE columns to build the correlation query.
1. Define the DATASET and TABLE for the joinable source. Can be a view.
1. Choose the significance level. More significance usually means more NULL results; balance quantity and quality using this value.
1. Specify where to write the results.
1. IMPORTANT: If you use VIEWS, you will have to delete them manually if the recipe changes.

Modify the values below for your use case; this can be done multiple times, then click play.
###Code
FIELDS = {
'auth': 'service', # Credentials used for writing data.
'join': '', # Name of column to join on, must match Census Geo_Id column.
'pass': [], # Comma separated list of columns to pass through.
'sum': [], # Comma separated list of columns to sum, optional.
'correlate': [], # Comma separated list of percentage columns to correlate.
'from_dataset': '', # Existing BigQuery dataset.
'from_table': '', # Table to use as join data.
'significance': '80', # Select level of significance to test.
'to_dataset': '', # Existing BigQuery dataset.
'type': 'table', # Write Census_Percent as table or view.
}
print("Parameters Set To: %s" % FIELDS)
###Output
_____no_output_____
###Markdown
5. Execute Census Data Correlation. This does NOT need to be modified unless you are changing the recipe, click play.
###Code
from starthinker.util.project import project
from starthinker.script.parse import json_set_fields
USER_CREDENTIALS = '/content/user.json'
TASKS = [
{
'census': {
'auth': 'user',
'correlate': {
'table': {'field': {'description': 'Table to use as join data.','name': 'from_table','order': 6,'default': '','kind': 'string'}},
'dataset': {'field': {'description': 'Existing BigQuery dataset.','name': 'from_dataset','order': 5,'default': '','kind': 'string'}},
'pass': {'field': {'description': 'Comma separated list of columns to pass through.','name': 'pass','order': 2,'default': [],'kind': 'string_list'}},
'correlate': {'field': {'description': 'Comma separated list of percentage columns to correlate.','name': 'correlate','order': 4,'default': [],'kind': 'string_list'}},
'join': {'field': {'description': 'Name of column to join on, must match Census Geo_Id column.','name': 'join','order': 1,'default': '','kind': 'string'}},
'significance': {'field': {'kind': 'choice','order': 7,'choices': ['80','90','98','99','99.5','99.95'],'description': 'Select level of significance to test.','default': '80','name': 'significance'}},
'sum': {'field': {'description': 'Comma separated list of columns to sum, optional.','name': 'sum','order': 3,'default': [],'kind': 'string_list'}}
},
'to': {
'type': {'field': {'kind': 'choice','order': 10,'choices': ['table','view'],'description': 'Write Census_Percent as table or view.','default': 'table','name': 'type'}},
'dataset': {'field': {'description': 'Existing BigQuery dataset.','name': 'to_dataset','order': 9,'default': '','kind': 'string'}}
}
}
}
]
json_set_fields(TASKS, FIELDS)
project.initialize(_recipe={ 'tasks':TASKS }, _project=CLOUD_PROJECT, _user=USER_CREDENTIALS, _client=CLIENT_CREDENTIALS, _verbose=True, _force=True)
project.execute(_force=True)
###Output
_____no_output_____ |
python_ds.ipynb | ###Markdown
Python in Data Science

*Prepared by:* **Jude Michael Teves**

In this notebook, you will be introduced to the data science library trio: numpy, pandas, and matplotlib. These libraries power almost all data science tasks, as they are the backbone of many libraries used in data science. We will be importing them in the following cell.
###Code
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
###Output
_____no_output_____
###Markdown
Numpy

Numpy (Numerical Python) is an open-source library in Python for performing scientific computations. It lets us work with arrays and matrices in a more natural way than lists, where we have to loop through individual elements to perform a numerical operation.

As a refresher, here are basic descriptions of arrays and matrices:
- Arrays are simply a collection of values of the same type indexed by integers (think of a list)
- Matrices are defined to be multi-dimensional arrays indexed by rows, columns, and dimensions (think of nested lists)

When doing mathematical operations, usage of the Numpy library is highly recommended because it is designed with high performance in mind: Numpy is largely written in C, which makes computations much faster than plain Python code. In addition, Numpy arrays are stored more efficiently than an equivalent data structure in Python such as lists and arrays. Numpy is a third-party module, which means it is not part of Python's suite of built-in libraries.

Here are some important notes on numpy arrays:
- all elements in a numpy array must be of the same type.
- the size cannot be changed once constructed.
- supports "vectorized" operations such as element-wise addition and multiplication.

Initializing an array

We simply plug in an iterable inside `np.array()`
###Code
arr = np.array([1, 2, 3, 4])
arr
###Output
_____no_output_____
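###Markdown
As a quick illustration of the "vectorized" operations noted above, arithmetic applies element-wise to the whole array, with no explicit loop; a minimal sketch:
###Code
arr * 2 + 1  # element-wise: array([3, 5, 7, 9])
###Output
_____no_output_____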
###Markdown
We can also create a numpy array through `np.arange()`. If only a single argument `n1` is passed, it creates an array of size `n1` containing the values 0 to `n1`-1. If two arguments (`n1` and `n2`) are passed, it creates an array starting from `n1` up to `n2`-1.
###Code
np.arange(5, dtype=float)
np.arange(2, 5, dtype=float)
###Output
_____no_output_____
###Markdown
Numpy Attributes

Numpy has built-in attributes that we can use. Here are some of them:
- ndarray.ndim - number of axes or dimensions of the array.
- ndarray.shape - the dimension of the array: a tuple of integers indicating the size of the array in each dimension.
- ndarray.dtype - the type of the elements in the array. Numpy provides its own `int16`, `int32`, `float64` data types, among others.
- ndarray.itemsize - size in bytes of each element of the array. For example, an array of elements of type `float64` has an itemsize of $\frac{64}{8} = 8$, and one of type `int32` has an itemsize of $\frac{32}{8} = 4$.
###Code
print('Type: ',type(arr))
print('Shape: ',arr.shape)
print('Dimension: ',arr.ndim)
print('Itemsize: ',arr.itemsize)
print('Size: ',arr.size)
###Output
Type: <class 'numpy.ndarray'>
Shape: (4,)
Dimension: 1
Itemsize: 4
Size: 4
###Markdown
Accessing and Manipulating ArraysNumpy allows us to do manipulations on an array/matrix.**Indexing** and **Slicing**This is similar to how you index/slice a list.
###Code
arr = np.arange(3, 10)
arr
arr[6]
arr[:4]
###Output
_____no_output_____
###Markdown
We can also indicate the step size by adding another colon `:` and an integer after the slice syntax.
###Code
arr[:4:2]
###Output
_____no_output_____
###Markdown
Arithmetic Operations. We can perform arithmetic operations on Numpy matrices as in linear algebra. Be careful of the dimensions! Make sure that there is no mismatch for the particular operation that you will be using.
###Code
arr1 = np.arange(9).reshape((3,3))
arr2 = np.ones(9).reshape((3,3))
arr1 + arr2
arr1 - arr2
arr1 * arr2 # note that this is an element-wise multiplication
arr1 / arr2 # note that this is an element-wise division
###Output
_____no_output_____
###Markdown
To do a proper matrix multiplication, we use the `np.dot` method.
###Code
np.dot(arr1, arr2)
###Output
_____no_output_____
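###Markdown
Equivalently, Python 3.5+ provides the `@` operator for matrix multiplication, which for 2D numpy arrays gives the same result as `np.dot`:
###Code
arr1 @ arr2  # same as np.dot(arr1, arr2)
###Output
_____no_output_____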
###Markdown
Logical Operators. Numpy has methods for the following logical operators: `or`, `and`, and `not`.
###Code
np.logical_or([True, False, False], [True, True, False])
np.logical_and([True, False, False], [True, True, False])
np.logical_not([True, False])
###Output
_____no_output_____
###Markdown
Aggregation methods. We can use methods like sum, max, min, and std.
###Code
arr = np.arange(12).reshape((4,3))
arr
arr.sum()
###Output
_____no_output_____
###Markdown
We can also specify which dimension to use for the aggregation.
###Code
arr.sum(axis=0)
arr.sum(axis=1)
arr.max()
arr.max(axis=0)
arr.max(axis=1)
arr.min()
arr.std()
arr.std(axis=1)
np.mean(arr, axis=0)
###Output
_____no_output_____
###Markdown
Pandas

Pandas is an easy-to-use, fast, flexible, and powerful open-source Python library for working with "relational" or "labeled" data. It aims to be the fundamental high-level building block for doing practical, real-world data analysis in Python by offering data structures and operations for manipulating tables. And, like Numpy, the Pandas implementations are faster than the default Python ones, and it is a third-party module, which means it is not part of Python's suite of built-in libraries.

Pandas is a very big topic with so many features that they cannot all be tackled in this module. You can study further by reading the official documentation: https://pandas.pydata.org/pandas-docs/stable/user_guide/dsintro.html.

**Data Structures**

Pandas has 2 data structures: `Series` and `Dataframe`. These 2 give us many features that make it easy to do data analysis.

Series

Pandas Series is a one-dimensional array-like object that has index and value, just like NumPy, and is capable of holding any data type.

Creating Series

You can input a list, numpy array, or dictionary to create a Series. Here are some examples showcasing those.
###Code
s = pd.Series(np.random.randn(5), index=['a', 'b', 'c', 'd', 'e'])
print(s)
###Output
a -0.545619
b -0.961111
c -0.182544
d -1.258336
e 0.309230
dtype: float64
###Markdown
We can also create a Series from a dictionary.
###Code
libs_dict = {'Library1': 'Numpy',
'Library2': 'Pandas',
'Library3': 'Matplotlib'}
s = pd.Series(libs_dict)
print(s)
print(type(s))
s['Library2']
###Output
_____no_output_____
###Markdown
As you can see, when creating a Pandas Series, only the data (first argument) is mandatory. The rest are optional; you can opt not to input the name or index.

Editing Series

Editing a Series is very similar to how you do it with `dict`s.
###Code
libs_dict = {'Library1': 'Numpy', 'Library2': 'Pandas', 'Library3': 'Matplotlib'}
s = pd.Series(libs_dict)
print(s)
s['Library2'] = 'Pandas 2.0'
s['Library4'] = 'GeoPandas'
print(s)
s.pop('Library4')
print(s)
###Output
Library1 Numpy
Library2 Pandas
Library3 Matplotlib
dtype: object
Library1 Numpy
Library2 Pandas 2.0
Library3 Matplotlib
Library4 GeoPandas
dtype: object
Library1 Numpy
Library2 Pandas 2.0
Library3 Matplotlib
dtype: object
###Markdown
Dataframe

Dataframe is like a spreadsheet or a SQL table. It is basically a 2-dimensional labelled data structure with columns of potentially different data types. To put it simply, `DataFrame` is a multi-column `Series` object. It is generally the most commonly used pandas object, and like `Series`, `DataFrame` accepts many different kinds of input:

- Dict of 1D ndarrays, lists, dicts, or Series
- 2-D numpy.ndarray
- Structured or record ndarray
- A `Series`
- Another `DataFrame`

Here are some examples of creating Dataframes using different data types and structures as inputs.

Creating Dataframe using dict
###Code
data = {'one' : pd.Series([1., 2., 3.], index=['a', 'b', 'c']),
'two' : pd.Series([1., 2., 3., 4.], index=['a', 'b', 'c', 'd'])}
df = pd.DataFrame(data)
print('Dataframe:\n', df)
print('Type of Object:', type(df))
print('Type of elements:', type(df.values))
###Output
Dataframe:
one two
a 1.0 1.0
b 2.0 2.0
c 3.0 3.0
d NaN 4.0
Type of Object: <class 'pandas.core.frame.DataFrame'>
Type of elements: <class 'numpy.ndarray'>
###Markdown
Just like Series, you can access the following attributes: index, values and columns.
###Code
print('Index: ', df.index)
print('Columns: ', df.columns)
print('Values of Column one: ', df['one'].values)
print('Values of Column two: ', df['two'].values)
###Output
Index: Index(['a', 'b', 'c', 'd'], dtype='object')
Columns: Index(['one', 'two'], dtype='object')
Values of Column one: [ 1. 2. 3. nan]
Values of Column two: [1. 2. 3. 4.]
###Markdown
Creating Dataframe using list of dict
###Code
df2 = pd.DataFrame([{'a': 1, 'b': 2, 'c':3, 'd':None},
{'a': 2, 'b': 2, 'c': 3, 'd': 4}],
index=['one', 'two'])
print('Dataframe: \n',df2)
# Of course you can also transpose the result:
print('Transposed Dataframe: \n',df2.T)
###Output
Dataframe:
a b c d
one 1 2 3 NaN
two 2 2 3 4.0
Transposed Dataframe:
one two
a 1.0 2.0
b 2.0 2.0
c 3.0 3.0
d NaN 4.0
###Markdown
Editing DataFrame

Assigning a column that doesn't exist will create a new column. If it exists, the assigned value will override the old one.
###Code
df = pd.DataFrame(data)
df['three'] = None
print('Added third column: \n',df)
# The del keyword can be used delete columns:
del df['three']
print('\nDeleted third column: \n',df)
# You can also use df.drop(). We shall see that later
df.loc['a','one'] = 9000
print('\nEdited a value: \n',df)
###Output
Added third column:
one two three
a 1.0 1.0 None
b 2.0 2.0 None
c 3.0 3.0 None
d NaN 4.0 None
Deleted third column:
one two
a 1.0 1.0
b 2.0 2.0
c 3.0 3.0
d NaN 4.0
Edited a value:
one two
a 9000.0 1.0
b 2.0 2.0
c 3.0 3.0
d NaN 4.0
###Markdown
**Using `.drop`**
###Code
data = {'one' : pd.Series([1., 2., 3.], index=['a', 'b', 'c']),
'two' : pd.Series([1., 2., 3., 4.], index=['a', 'b', 'c', 'd'])}
df = pd.DataFrame(data)
df
df.drop(['c', 'a'])
###Output
_____no_output_____
###Markdown
Indexing

There are many ways to select and rearrange the data contained in a pandas object. Some indexing options can be seen in the table below:

|Indexing Type| Description|
|:---|:---|
|df[val] | Select single column or sequence of columns from the DataFrame. Special case conveniences: boolean array (filter rows), slice (slice rows), or boolean DataFrame (set values based on some criterion).|
|df.ix[val] | Selects single row or subset of rows from the DataFrame.|
|df.ix[:, val] | Selects single column or subset of columns.|
|df.ix[val1, val2] | Select both rows and columns.|
|reindex method | Conform one or more axes to new indexes.|
|xs method | Select single row or column as a Series by label.|
|icol, irow methods | Select single column or row, respectively, as a Series by integer location.|
|get_value, set_value methods | Select single value by row and column label.|

Note that `ix`, `icol`/`irow`, and `get_value`/`set_value` are deprecated in modern pandas; prefer `loc`, `iloc`, and `at`/`iat`.

Series indexing works similarly to a dict--we provide the key
###Code
s
s['Library2']
###Output
_____no_output_____
###Markdown
As for DataFrames, to slice and select only column `one` for rows `a` and `d`, use the following.
###Code
df
# Slicing and selecting only column `one` for row a and row d
df['one'][['a', 'd']]
# Slicing df from row b to row d for column `one`
df['one']['b':'d']
###Output
_____no_output_____
###Markdown
In the above cell, you will notice that slicing with labels behaves differently than normal Python slicing in that the endpoint is inclusive. For DataFrame label-indexing on the rows, there is a special indexing field `loc`, which enables us to select a subset of the rows and columns from a `DataFrame` with `numpy`-like notation plus axis labels. It is a less verbose way to do the reindexing, and it is what we typically use.
###Code
df.loc[['a','c'],['one']]
df.loc[['a','c'],['one', 'two']]
###Output
_____no_output_____
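###Markdown
For purely positional indexing there is also `iloc`, the integer-location counterpart of `loc`:
###Code
df.iloc[0:2, 0:1]  # first two rows of the first column, selected by position
###Output
_____no_output_____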
###Markdown
Filtering. We can also filter by having a condition inside `loc`.
###Code
df
df.loc[df.one > 1]
df.loc[df.two >= 3]
###Output
_____no_output_____
###Markdown
Sorting. We can sort items by doing the following.
###Code
dt = pd.Series(np.random.randint(3, 10, size=7),
index=['g','c','a','b','e','d','f'])
print('Original Data: \n', dt, end="\n\n")
print('Sorted by Index: \n',dt.sort_index())
###Output
Original Data:
g 6
c 6
a 9
b 4
e 9
d 6
f 5
dtype: int32
Sorted by Index:
a 9
b 4
c 6
d 6
e 9
f 5
g 6
dtype: int32
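###Markdown
We can also sort by the values themselves with `sort_values`:
###Code
print('Sorted by Values: \n', dt.sort_values())
###Output
_____no_output_____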
###Markdown
Using Numpy functions on a DataFrame. Element-wise numpy functions like log, exp, sqrt, and various other numpy functions can be used on a DataFrame.
###Code
np.random.seed(42) # ensures that we are getting consistent values
df1 = pd.DataFrame(np.random.randn(10, 4), columns=['A', 'B', 'C', 'D'])
df1
np.abs(df1)
np.log(df1)
###Output
C:\Users\Jude Michael Teves\AppData\Roaming\Python\Python37\site-packages\ipykernel_launcher.py:1: RuntimeWarning: invalid value encountered in log
"""Entry point for launching an IPython kernel.
###Markdown
Reading / Loading Data
###Code
df = pd.read_csv('https://raw.githubusercontent.com/Cyntwikip/data-repository/main/titanic.csv')
# df = pd.read_csv('C:/Github/data-repository/titanic.csv') # to read local file instead
df.head()
###Output
_____no_output_____
###Markdown
Here are some datasets I have made available in my public data repository on Github:
- Titanic: https://raw.githubusercontent.com/Cyntwikip/data-repository/main/titanic.csv
- Illness Toy Dataset: https://raw.githubusercontent.com/Cyntwikip/data-repository/main/illness.csv

Of course, you may use any dataset you wish to, whether it be offline (local) or online. We just used a dataset that is available online so that you can run this as is without the need to download a file.

Matplotlib

Matplotlib is the most used Python package for 2D graphics. It is simple to use and has almost all standard graphs/plots in it. This is a very big topic, so we will just focus on the practical aspect of it by showing some code snippets for doing basic visualizations of commonly used data.

Let's check the styles that we could use.
###Code
plt.style.available
###Output
_____no_output_____
###Markdown
Let's use the `seaborn-darkgrid` style.
###Code
plt.style.use('seaborn-darkgrid')
###Output
_____no_output_____
###Markdown
Line Plot. For sequential data like time, we can use a line plot. The data points that we will be plotting are randomly generated, but for this example, just treat the x-axis as a temporal feature.
###Code
np.random.seed(42)
x = np.arange(10)
y = np.random.randint(50, 100, size=10)
plt.figure(figsize=(6, 4), dpi=100)
plt.ylabel('value')
plt.xlabel('time')
plt.title('Line Plot')
plt.plot(x, y)
plt.show()
###Output
_____no_output_____
###Markdown
Scatter plot. We can also do a scatter plot for the data above.
###Code
np.random.seed(42)
x = np.arange(10)
y = np.random.randint(50, 100, size=10)
plt.figure(figsize=(6, 4), dpi=100)
plt.ylabel('value')
plt.xlabel('time')
plt.title('Scatter Plot')
plt.scatter(x, y)
plt.show()
###Output
_____no_output_____
###Markdown
What if we have multiple categories in our data? In the following example, there are 3 categories.
###Code
from sklearn import datasets
# import some data to play with
iris = datasets.load_iris()
X = iris.data[:, :2] # we only take the first two features.
y = iris.target
plt.figure(figsize=(6, 4), dpi=100)
plt.ylabel('sepal width')
plt.xlabel('sepal length')
plt.title('Scatter Plot')
plt.scatter(X[:,0], X[:,1], c=y, cmap=plt.cm.Set1)
plt.show()
###Output
_____no_output_____
###Markdown
Bar graph. A bar graph is used for categorical data.
###Code
x = ['Mei', 'Zhongli', 'Venti']
y = [5, 6, 4]
plt.figure(figsize=(6, 4), dpi=100)
plt.ylabel('height')
plt.xlabel('person')
plt.title('Height')
plt.bar(x, y)
plt.show()
###Output
_____no_output_____
###Markdown
Histogram. Histograms are typically used for showing the distribution of data.
###Code
dist = np.random.normal(size=100)
plt.figure(figsize=(6, 4), dpi=100)
plt.hist(dist)
plt.show()
###Output
_____no_output_____ |
notebooks/People/People Making Choices.ipynb | ###Markdown
How People Decide what they want to do
Directed graph approach
Generally people want to do a number of different things. To model this, I'm going to create a schema in a graph language that lets me designate how much a `pop` desires to take a certain action. This will be used later when determining AI decisions.
**Note**: this notebook actually builds the desires into the graph, overwriting the existing ontology.
###Code
import sys
import numpy as np
import pandas as pd
import altair as alt
sys.path.append('..')
import helpers.dbquery as db
import helpers.functions as f
import yaml, ssl, asyncio
import nb_black
ssl._create_default_https_context = ssl._create_unverified_context
asyncio.set_event_loop_policy(asyncio.WindowsSelectorEventLoopPolicy())
import nest_asyncio
# this is required for running in a Jupyter Notebook.
nest_asyncio.apply()
res = db.run_query("g.V().hasLabel('pop').has('username','userbill').valueMap()")
pops = [db.clean_node(n) for n in res]
pops[0]
###Output
_____no_output_____
###Markdown
Each population wants to do everything to a degree; the amount of desire to do a given thing is expressed by the edge weight. This will be used later when determining AI decisions.
* Attack a population
* Focus on improving literacy
* Focus on improving industry Desires as Objects
Desire with targets.
Both factions and pops can have desire. Action is guided by desire based on the `max(desire.weight)`.
`desire` is an edge, the type of that desire is a property of that edge, and the edge weight is the amount of desire. The target (`node2`) is the recipient.
Examples:
* faction wants trade with faction
* pop wants war with another pop
* pop wants faction to go to war with faction
Desire without targets.
Desires without targets must link to an objective. That objective can be its own node.
###Code
# # Drop the items, if they exist.
# db.run_query("g.V().hasLabel('objective').has('username','notebook').drop()")
# objectives_yaml = yaml.safe_load(open("desires.yaml"))['objectives']
# data = {"nodes":objectives_yaml,'edges':[]}
# # Then Create the nodes and add them to the DB
# db.upload_data(data,verbose=False)
# After creating the nodes, pulling them into the notebook for reference
res = db.run_query("g.V().hasLabel('objective').valueMap()")
objectives = [db.clean_node(n) for n in res]
# objectives
###Output
_____no_output_____
###Markdown
A population wants to improve industry
Populations want to improve industry when:
* they are not wealthy
* they are at war
###Code
# Marginal return on base attribute
n = 2
ind_df = pd.DataFrame(np.sort([float(p['wealth']) for p in pops]),columns=['wealth'])
ind_df['base'] = range(len(ind_df))
ind_df['desires_industry'] = ind_df['wealth'].apply(lambda x: ((x+1)**(1-n) - 1)/(1-n))
ind_df['desire_base'] = ind_df['base'].apply(lambda x: ((x+1)**(1-n) - 1)/(1-n))
alt.Chart(ind_df).mark_line().encode(x='base',y='desire_base').properties(title="Desire relative to the base attribute")
alt.Chart(ind_df).mark_line().encode(x='wealth:N',y='desires_industry').properties(title="Desires wealth industry relative to industry")
###Output
_____no_output_____
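###Markdown
For reference, the desire curve plotted above is the isoelastic diminishing-returns form
$$u(x) = \frac{(x+1)^{1-n} - 1}{1-n},$$
and with this notebook's choice of $n = 2$ it simplifies to $u(x) = 1 - \frac{1}{x+1} = \frac{x}{x+1}$: desire rises steeply from 0 and saturates toward 1 as the underlying attribute grows.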
###Markdown
feeding that desire to the populations
###Code
def get_desire(x):
return np.round(((float(x)+1)**(1-n) - 1)/(1-n),3)
edges = []
for p in pops:
for o in objectives:
edge = {'label':'desires',
'node1':p['objid'],
'node2':o['objid'],
'weight':get_desire(p[o['leadingAttribute']])}
edges.append(edge)
pd.DataFrame(edges)
db.create_edge(edges[0])
db.upload_data({'nodes':[],'edges':edges},verbose=False)
[p for p in pops if p['objtype']=='pop']
###Output
_____no_output_____ |
src/Equation_of_State_T_eq_0.ipynb | ###Markdown
Analysis of the Equation of State
###Code
import numpy as np
import matplotlib.pyplot as plt
from graphenetools import gt
import re,glob,os
from scipy.signal import argrelextrema
from scipy.optimize import brentq
import multiprocessing
import sys,importlib
from dgutils import colors as colortools
from collections import defaultdict
import pickle
from numpy import pi as π
# Notebook display options
%matplotlib inline
%config InlineBackend.figure_format = 'svg'
# plot style
plot_style = {'notebook':'../include/notebook.mplstyle','aps':'../include/aps.mplstyle'}
plt.style.reload_library()
plt.style.use(plot_style['aps'])
figsize = plt.rcParams['figure.figsize']
plt.rcParams['text.latex.preamble'] = f'\input{{{os.getcwd()}/../include/texheader}}'
colors = plt.rcParams['axes.prop_cycle'].by_key()['color']
###Output
_____no_output_____
###Markdown
Do you want to process the full data set? The default is False. The full data set can be found here: [![DOI](https://zenodo.org/badge/DOI/10.5281/zenodo.4553524.svg)](https://doi.org/10.5281/zenodo.4553524) A minimal set of reduced (averaged and processed) data files is included with the repository in `../data/QMC.tar.bz2`, which we extract if that hasn't already happened.
###Code
reduce_data = False
if not os.path.isdir('../data/QMC/'):
! tar xjf ../data/QMC.tar.bz2
###Output
_____no_output_____
###Markdown
Some helper and analysis functions
###Code
import layerutils
from layerutils import lab,vals,texformat,get_base_dir
from pimcscripts import pimchelp
###Output
_____no_output_____
###Markdown
Load QMC Data from Disk
###Code
num_sites = [24]
sim_params = {'T':0.0,'canonical':True,'τ':0.00313, 'β':0.5007}
Lz = np.array([5.05,10.0])
pimcid = defaultdict(dict)
par_map = defaultdict(dict)
base_dir = defaultdict(dict)
L,n,N,τ = defaultdict(dict),defaultdict(dict),defaultdict(dict),defaultdict(dict)
N_ads = defaultdict(dict)
simulations,pimcids = {},{}
pigs_pimcids,pimc_pimcids = defaultdict(list),defaultdict(list)
for cnum in num_sites:
Nkey = lab(N=cnum)
cbase_dir = get_base_dir(cnum,T=sim_params['T'])
log_names = pimchelp.get_file_list_from_params(**sim_params,base_dir=cbase_dir)
# We go through each file and automatically populate the simulation map
for log in log_names:
par_ = pimchelp.get_parameter_map(cbase_dir + log)
cN = par_['Initial Number Particles']
cf = cN/cnum
sim = lab(T=sim_params['T'],n=cf,Lz=par_['Container Length'],N=cnum)
base_dir[Nkey][sim] = cbase_dir
# sort the pimcids into two possible groups
pimcid[Nkey][sim] = par_['PIMCID']
if sim_params['T'] > 0:
pimc_pimcids[Nkey].append(par_['PIMCID'])
else:
pigs_pimcids[Nkey].append(par_['PIMCID'])
par_map[Nkey][sim] = par_
# We add some short-hand variables for ease of referencing
L[Nkey][sim] = par_map[Nkey][sim]['Container Dimensions']
n[Nkey][sim] = par_map[Nkey][sim]['Initial Density']
N[Nkey][sim] = par_map[Nkey][sim]['Initial Number Particles']
τ[Nkey][sim] = par_map[Nkey][sim]['Specified Imaginary Time Step']
simulations[Nkey] = list(pimcid[Nkey].keys())
pimcids[Nkey] = list(pimcid[Nkey].values())
###Output
_____no_output_____
###Markdown
Generate the graphene lattice
###Code
sim = simulations[lab(N=24)][0]
fix,ax = gt.plot_graphene_lattice_with_c_one_third(0.0,L[lab(N=24)][sim][:-1])
###Output
_____no_output_____
###Markdown
Reduce All Data Files
###Code
if reduce_data:
for cnum in num_sites[:]:
print(f'=== N = {cnum} ===\n')
reduce_command = f"parallel reduce-one.py -r T -i {{}} -s 0.8 --canonical {get_base_dir(cnum,T=sim_params['T'])} ::: {' '.join(pimcids[lab(N=cnum)])}"
stream = os.popen(reduce_command)
output = stream.read()
print(output)
###Output
_____no_output_____
###Markdown
Load the reduced estimators
###Code
estimator = {}
ρlin = {}
for cnum in num_sites:
cNkey = lab(N=cnum)
for sim in simulations[cNkey]:
ckey = lab(N=cnum,Lz=vals(sim)['Lz'])
reduce_params = {'canonical':True,'reduce':'T', 'pimcid':pimcid[cNkey][sim],'base_dir':base_dir[cNkey][sim]}
estimator[sim] = pimchelp.PIMCResults(pimchelp.get_reduce_name(**reduce_params,estimator='estimator'))
ρlin[sim] = pimchelp.PIMCResults(pimchelp.get_reduce_name(**reduce_params,estimator='lineardensity'))
filling = {}
est = {}
for cnum in num_sites:
filling[lab(N=cnum)] = np.array([cn/cnum for cn in range(1,25)])
for cLz in Lz:
est[lab(N=cnum,Lz=cLz)] = defaultdict(list)
for cf in filling[lab(N=cnum)]:
sim = lab(N=cnum,n=cf,T=0.0,Lz=cLz)
for cest_name in estimator[sim].data.dtype.names:
est[lab(N=cnum,Lz=cLz)][cest_name].append(estimator[sim].data[cest_name])
for cest_name in estimator[sim].headers:
est[lab(N=cnum,Lz=cLz)][cest_name] = np.array(est[lab(N=cnum,Lz=cLz)][cest_name])
###Output
_____no_output_____
###Markdown
The Equation of State
###Code
from mpl_toolkits.axes_grid1.inset_locator import inset_axes
from fractions import Fraction
shift = {}
for cnum in num_sites:
for cLz in Lz:
key = lab(N=cnum,Lz=cLz)
shift[key]= est[key]['E/N'][0]
fig,ax = plt.subplots(figsize=(figsize[0],figsize[1]), constrained_layout=True)
axins1 = inset_axes(ax, width="80%", height="70%",
bbox_to_anchor=(.045, .2, .36, .78),
bbox_transform=ax.transAxes)
axins2 = inset_axes(ax, width="80%", height="70%",
bbox_to_anchor=(.355, .2, .36, .78),
bbox_transform=ax.transAxes)
axins = [axins1,axins2]
axins[0].set_xlabel(r'$\alabel{z}{\angstrom}$')
axins[1].set_xlabel(r'$\alabel{z}{\angstrom}$')
axins[0].set_ylabel(r'$\alabel{\rho(z)/N}{\angstrom^{-1}}$')
params = {'mfc':'None', 'elinewidth':0.5, 'marker':'o', 'ms':5, 'lw':0.5, 'ls':'--','mew':0.75}
for cnum in num_sites:
for j,cLz in enumerate(Lz):
for i,cf in enumerate([1/3,1]):
sim = lab(N=cnum,T=0,n=cf,Lz=cLz)
x,y,Δy = ρlin[sim].epdata(ρlin[sim].params[0])
axins[i].plot(x+0.5*L[lab(N=cnum)][sim][-1],y/N[lab(N=cnum)][sim], lw=0.75, color=colors[j])
axins[i].annotate(f'$f = {Fraction(cf).limit_denominator()}$', xy=(0.95,0.85),xytext=(0.95,0.85),
xycoords='axes fraction', ha='right', va='bottom')
axins[1].set_yticklabels([])
axins[0].set_yticklabels([])
for i in range(2):
axins[i].set_xlim(0,8)
for i,cLz in enumerate(Lz):
ax.errorbar(filling[lab(N=24)],est[lab(N=24,Lz=cLz)]['E/N']-shift[lab(N=24,Lz=cLz)],
yerr=est[lab(N=24,Lz=cLz)]['ΔE/N'],**params,
label = f'$L_z = {cLz:.2f}\; \mathrm{{\AA}}$', color=colortools.get_alpha_hex(colors[i],0.5),
mec=colors[i])
#ax.legend(loc=(0.6,0.4),ncol=1)
ax.annotate("",
xy=(1/3, 0.0), xycoords='data', zorder=-100,
xytext=(0.4, 100), textcoords='data',
arrowprops=dict(arrowstyle="-",
connectionstyle="arc3", color='gray', ls=':',alpha=0.5,lw=0.5),
)
ax.annotate("",
xy=(1/3, 0.0), xycoords='data',zorder=-100,
xytext=(0.09, 98), textcoords='data',
arrowprops=dict(arrowstyle="-",
connectionstyle="arc3", color='gray', ls=':', alpha=0.5,lw=0.5),
)
data_val = est[lab(N=24,Lz=10)]['E/N'][-1]-shift[lab(N=24,Lz=10)]
ax.annotate("",
xy=(1, data_val), xycoords='data',zorder=-100,
xytext=(0.72, 98), textcoords='data',
arrowprops=dict(arrowstyle="-",
connectionstyle="arc3", color='gray', ls=':', alpha=0.5,lw=0.5),
)
data_val = est[lab(N=24,Lz=5.05)]['E/N'][-1]-shift[lab(N=24,Lz=5.05)]
ax.annotate("",
xy=(1, data_val), xycoords='data',zorder=-100,
xytext=(0.72, 246), textcoords='data',
arrowprops=dict(arrowstyle="-",
connectionstyle="arc3", color='gray', ls=':', alpha=0.5,lw=0.5),
)
cnum=24
loc=(0.19,0.1)
ax.annotate(f'$N_\graphene = {cnum}$', xy=loc,xytext=loc,
xycoords='axes fraction', ha='right', va='bottom')
loc=(0.98,0.1)
ax.annotate(r'$L_z = \SI{10}{\angstrom}$', xy=loc,xytext=loc,
xycoords='axes fraction', ha='right', va='bottom')
loc=(0.98,0.8)
ax.annotate(r'$L_z = \SI{5.05}{\angstrom}$', xy=loc,xytext=loc,
xycoords='axes fraction', ha='right', va='bottom')
ax.set_xlabel('Filling Fraction $f = N/N_\graphene$')
ax.set_ylabel(r'$\alabel{E/N-E_1}{\kelvin}$');
plt.savefig('../plots/EoS_Teq0.pdf',dpi=300)
plt.savefig('../plots/EoS_Teq0.svg',dpi=300)
###Output
_____no_output_____
###Markdown
How big is the raw offset between the curves?
###Code
Δ = shift[lab(N=24,Lz=10)] - shift[lab(N=24,Lz=5.05)]
print(f'Δ = {Δ:.2f} K')
print(f'Relative Shift = {100*Δ/est[lab(N=24,Lz=cLz)]["E/N"][0]:.1f}%')
###Output
Δ = -5.67 K
Relative Shift = 4.6%
|
hopfiled/hopfileNeuralNetwork.ipynb | ###Markdown
Hopfield Neural Networks and Their Implementation

Group members: 沈旭阳, 谭力仁, 温紫珺, 邹子涵. Presenter: 沈旭阳.

Experiment overview

Experiment type: Hopfield neural network; discrete; asynchronous updating.

The Hopfield neural network is a classic feedback (recurrent) neural network: besides the feed-forward connections between neurons that it shares with feed-forward systems, it clearly also contains feedback connections. The Hopfield network structure can be described by the following diagram: ![image.png](attachment:88eaada9-2029-4b47-82f8-a167e7569f94.png)

From the diagram, this network structure has the following characteristics:

1. The neurons are fully connected, and the network has a single layer.
2. Each neuron is both an input and an output, so the resulting weight matrix is symmetric, which saves computation.
3. Driven by the input, the outputs keep changing state, and this feedback process repeats over and over. If the Hopfield network is a convergent, stable network, the changes produced by this feedback and iteration become smaller and smaller; once a stable equilibrium is reached, the network outputs a stable constant value.
4. A Hopfield network can store a set of equilibrium points, so that when the network is given an initial state, it evolves on its own and finally converges to one of these designed equilibria. Thermodynamically, equilibria split into stable states and metastable states, and both are entirely possible outcomes of the network's convergence.
5. It is a recurrent network: the state at time t depends on the output state at time t-1. The neuron update procedure below also uses asynchronous updating.

Python implementation

Imports
###Code
import numpy as np
import random
from PIL import Image
import os
import re
import matplotlib.pyplot as plt
from IPython.core.interactiveshell import InteractiveShell
InteractiveShell.ast_node_interactivity = "all"
###Output
_____no_output_____
###Markdown
Convert an image to a binary matrix. The function parameters (file, size, threshold) are the image file, the image size, and the binarization threshold.
###Code
def readImg2array(file,size, threshold= 145):
pilIN = Image.open(file).convert(mode="L")
pilIN= pilIN.resize(size)
imgArray = np.asarray(pilIN,dtype=np.uint8)
    x = np.zeros(imgArray.shape, dtype=float)  # np.float is deprecated; use the builtin float
x[imgArray > threshold] = 1
x[x==0] = -1
return x
###Output
_____no_output_____
###Markdown
Convert a binary matrix back to an image. The parameters are (matrix, output file); the output file defaults to None.
###Code
def array2img(data, outFile = None):
y = np.zeros(data.shape,dtype=np.uint8)
y[data==1] = 255
y[data==-1] = 0
img = Image.fromarray(y,mode="L")
if outFile is not None:
img.save(outFile)
return img
###Output
_____no_output_____
###Markdown
Convert a matrix to vector form
###Code
def mat2vec(x):
m = x.shape[0]*x.shape[1]
tmp1 = np.zeros(m)
c = 0
for i in range(x.shape[0]):
for j in range(x.shape[1]):
tmp1[c] = x[i,j]
c +=1
return tmp1
###Output
_____no_output_____
###Markdown
Create H_ij, the weight matrix. By the properties of Hopfield networks, this matrix is symmetric.
###Code
def create_W_single_pattern(x):
if len(x.shape) != 1:
print ("该输入不是一个向量!")
return
else:
w = np.zeros([len(x),len(x)])
for i in range(len(x)):
for j in range(i,len(x)):
if i == j:
w[i,j] = 0
else:
w[i,j] = x[i]*x[j]
                # symmetry: w[j,i] mirrors w[i,j]
w[j,i] = w[i,j]
return w
###Output
_____no_output_____
###Markdown
建立hopfiled升级函数对神经元随机升级,采用异步更新,获取更新后的神经元向量以及系统能量。
###Code
def update_asynch(weight,vector,theta=0.5,times=100):
    # initialize bookkeeping
    energy_ = []
    times_ = []
    # record the initial system energy
    energy_.append(energy(weight,vector))
    # record the iteration count
    times_.append(0)
    # loop over the update iterations
    for i in range(times):
        # pick a random neuron to update
        length = len(vector)
        update_num = random.randint(0,length-1)
        # compute the updated activation of that neuron
        next_time_value = np.dot(weight[update_num][:],vector) - theta
        # sign activation function: take the sign of the updated value
        if next_time_value>=0:
            vector[update_num] = 1
        if next_time_value<0:
            vector[update_num] = -1
        # record the iteration count and the change in system energy
        times_.append(i)
        energy_.append(energy(weight,vector))
    return (vector,times_,energy_)
###Output
_____no_output_____
###Markdown
Compute the system energy
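The discrete Hopfield energy implemented below corresponds to $E = -\sum_{i<j} w_{ij}\, s_i s_j + \sum_i \theta_i s_i$. Because $W$ is symmetric with a zero diagonal, the quadratic form $-x^\top W x$ in the code counts every pair twice, so it returns $2E$ when the bias is zero; for monitoring convergence only the monotone decrease matters, not the absolute scale.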
###Code
def energy(weight,x,bias=0):
energy = -x.dot(weight).dot(x.T)+sum(bias*x)
return energy
###Output
_____no_output_____
###Markdown
Main Hopfield implementation: training and recall
###Code
size_global =(80,80)
threshold_global = 220
train_paths = []
train_path = "./training/"
for i in os.listdir(train_path):
if re.match(r'[0-9 a-z A-Z-_]*.jp[e]*g',i):
train_paths.append(train_path+i)
flag = 0
for path in train_paths:
matrix_train = readImg2array(path,size = size_global,threshold=threshold_global)
vector_train = mat2vec(matrix_train)
plt.imshow(array2img(matrix_train))
plt.title("training picture")
plt.show()
if flag == 0:
w_ = create_W_single_pattern(vector_train)
flag = flag +1
else:
w_ = w_ +create_W_single_pattern(vector_train)
flag = flag +1
# build the weight matrix (averaged over the training patterns)
w_ = w_/flag
print(w_.shape)
print(w_)
test_paths = []
test_path = "./test/"
for i in os.listdir(test_path):
if re.match(r'[0-9 a-z A-Z-_]*.jp[e]*g',i):
test_paths.append(test_path+i)
num = 0
for path in test_paths:
num = num+1
matrix_test = readImg2array(path,size = size_global,threshold=threshold_global)
vector_test = mat2vec(matrix_test)
plt.subplot(221)
plt.imshow(array2img(matrix_test))
plt.title("test picture")
oshape = matrix_test.shape
aa = update_asynch(weight=w_,vector=vector_test,theta = 0.5 ,times=10000)
vector_test_update = aa[0]
matrix_test_update = vector_test_update.reshape(oshape)
plt.subplot(222)
plt.imshow(array2img(matrix_test_update))
plt.title("recall"+str(num))
#plt.show()
plt.subplot(212)
plt.plot(aa[1],aa[2])
plt.ylabel("energy")
plt.xlabel("update times")
plt.show()
###Output
_____no_output_____ |
notebooks/04.Widget-libraries/04.02-ipympl.ipynb | ###Markdown
ipympl: The Matplotlib Jupyter Widget Backend

https://github.com/matplotlib/ipympl

Enabling interaction with matplotlib charts in the Jupyter notebook and JupyterLab. License: BSD-3-Clause.

**Installation:**

```bash
conda install -c conda-forge ipympl
```

Enabling the `widget` backend requires ipympl, which can be installed via pip or conda.
###Code
%matplotlib widget
import numpy as np
import matplotlib.pyplot as plt
from ipywidgets import VBox, FloatSlider, IntSlider, Button
###Output
_____no_output_____
###Markdown
When using the `widget` backend from ipympl, `fig.canvas` is a proper Jupyter interactive widget which can be embedded in layout classes like HBox and VBox. One can bind figure attributes to other widget values.
###Code
# Creating a new figure (1)
fig = plt.figure()
# Simple plot
x = np.linspace(0,5,11)
y = x ** 3
plt.plot(x,y, '-m');
###Output
_____no_output_____
###Markdown
Change the window title
###Code
fig.canvas.set_window_title('My interactive widget-enabled plot')
###Output
_____no_output_____
###Markdown
Remove toolbar, header and footer from the plot window
###Code
fig.canvas.toolbar_visible = False
fig.canvas.header_visible = False
fig.canvas.footer_visible = False
###Output
_____no_output_____
###Markdown
Disable canvas resizing
###Code
fig.canvas.resizable = False
###Output
_____no_output_____
###Markdown
Adding widget controls to our figure
###Code
# Turn matplotlib's internal interactive mode off (we use our own backend)
plt.ioff()
# Creating a simple slider widget
slider = FloatSlider(
value=1.0,
min=0.02,
max=2.0
)
# New figure object
fig = plt.figure()
plt.title('Plotting: y=sin({} * x)'.format(slider.value))
# 500 even-spaced data points on the x-axis between 0 and 20.
x1 = np.linspace(0, 20, 500)
# Applying and plotting the sin function for each data point
lines = plt.plot(x1, np.sin(slider.value * x1))
# Callback function when our slider changes in value
def update_lines(change):
lines[0].set_data(x1, np.sin(change.new * x1))
fig.canvas.draw()
fig.canvas.flush_events()
plt.title('Plotting: y=sin({} * x)'.format(slider.value))
# Setting up an event listener for the slider value
slider.observe(update_lines, names='value')
# Render the slider and figure in a vertical box
VBox([slider, fig.canvas])
###Output
_____no_output_____
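###Markdown
Since the canvas is still a regular matplotlib canvas, native matplotlib events also work here via `mpl_connect`; a minimal sketch (the handler name is our own choice):
###Code
def on_click(event):
    # event.xdata / event.ydata are the data coordinates of the click (None outside the axes)
    print('clicked at', event.xdata, event.ydata)

cid = fig.canvas.mpl_connect('button_press_event', on_click)
###Output
_____no_output_____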
###Markdown
3D plots
###Code
from mpl_toolkits.mplot3d import axes3d
# Setting up a new, blank figure object
fig = plt.figure()
# Adding an axes to the figure
ax = fig.add_subplot(111, projection='3d')
# Grab some test data.
X, Y, Z = axes3d.get_test_data(0.05)
# Plot a basic wireframe.
ax.plot_surface(X, Y, Z, rstride=10, cstride=10)
# Display the plot
fig.canvas
###Output
_____no_output_____
###Markdown
Subplots
###Code
# Static sample data
np.random.seed(0)
# Number of bins for the histogram
n_bins = 10
x2 = np.random.randn(1000, 3)
# a two-by-two plot grid (4 plots)
fig3, axes = plt.subplots(nrows=2, ncols=2)
ax0, ax1, ax2, ax3 = axes.flatten()
# Setting up the colors and generating the top-left histogram
colors = ['red', 'tan', 'lime']
ax0.hist(x2, n_bins, density=1, histtype='bar', color=colors, label=colors)
ax0.legend(prop={'size': 10})
ax0.set_title('bars with legend')
# Setting up the stacked bar
ax1.hist(x2, n_bins, density=1, histtype='bar', stacked=True)
ax1.set_title('stacked bar')
# Setting up the bottom-left histogram
ax2.hist(x2, n_bins, histtype='step', stacked=True, fill=False)
ax2.set_title('stack step (unfilled)')
# Make a multiple-histogram of data-sets with different length (bottom-right)
x_multi = [np.random.randn(n) for n in [10000, 5000, 2000]]
ax3.hist(x_multi, n_bins, histtype='bar')
ax3.set_title('different sample sizes')
# Display the plot
fig3.tight_layout()
fig3.canvas
fig3.canvas.toolbar_position = 'right'
fig3.canvas.toolbar_visible = False
###Output
_____no_output_____
###Markdown
Exercise

**This is a slightly challenging exercise!**

Create a small app which generates and displays a simulation of a stock price (you can use the helper function) and has the following widgets:

1. An interactive ipympl canvas with the toolbar on the left hand side
2. A slider which selects the number of steps per simulation
3. A button to generate new data and update the plot

The plot should update whenever there is a change to the slider value, or the button is clicked.
###Code
# Helper function
def generate_timeseries(steps):
return (np.arange(1,steps + 1,1), 100 + np.random.normal(0,1,steps).cumsum())
# Starting data
x_data, y_data = generate_timeseries(100)
# Code goes here
fig = plt.figure()
lines = plt.plot(x_data, y_data)
s1 = IntSlider(description='Steps', min=50, value=100, max=150)
b = Button(description='Generate Data')
def update_plot(change=None):
x_data, y_data = generate_timeseries(s1.value)
lines[0].set_data(x_data, y_data)
lines[0].axes.set_ylim(min(y_data) - 1, max(y_data) + 1)
lines[0].axes.set_xlim(min(x_data) - 1, max(x_data) + 1)
fig.canvas.draw()
fig.canvas.flush_events()
# Setting up an event listener for the slider value
s1.observe(update_plot, 'value')
# Set up event listener for the button
b.on_click(update_plot)
# Rendering the slider, button, and figure in a vertical box
VBox([s1, b, fig.canvas])
###Output
_____no_output_____ |
docs/bokeh/bokeh-server.ipynb | ###Markdown
Bokeh Server

Bokeh's architecture is such that high-level *model objects* (representations such as plots, ranges, axes, glyphs, and so on) are created in Python and then converted into a JSON format consumed by the client library `BokehJS`. With the Bokeh server, the model objects can be kept synchronized between Python and the browser, which enables powerful capabilities:

* browser events trigger server-side Python computations or queries
* automatic push updates of the browser UI (e.g. widgets or plots)
* periodic, timeout, and asynchronous callbacks for streaming updates

This synchronization between server-side Python and the browser is the main purpose of the Bokeh server.

It is also possible to define Bokeh applications by writing a standard Python script. In that case there is no need to create a function like `modify_doc`. Typically the script simply creates all the Bokeh content and adds it to the document with one line:

```
curdoc().add_root(layout)
```

To try the example below, copy the code into a file `hello.py` and then run:

```
pipenv run bokeh serve --show hello.py
```
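For streaming updates, the server can also run a callback on a fixed schedule; a minimal sketch, where the function name and the 1000 ms period are illustrative:

```python
from bokeh.io import curdoc

def tick():
    # runs server-side once per period; update data sources or widgets here
    pass

curdoc().add_periodic_callback(tick, 1000)  # period in milliseconds
```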
###Code
from bokeh.io import curdoc
from bokeh.layouts import column
from bokeh.models.widgets import TextInput, Button, Paragraph
# create some widgets
button = Button(label="Say Hi")
input = TextInput(value="Pythonistas")
output = Paragraph()
# add a callback to a widget
def update():
output.text = "Hello, " + input.value + "!"
button.on_click(update)
# create a layout for everything
layout = column(button, input, output)
# add the layout to curdoc
curdoc().add_root(layout)
###Output
_____no_output_____ |
2-Data-Analysis/1-Numpy/2-Numpy Array Indexing.ipynb | ###Markdown
NumPy Indexing and Selection. In this lesson we will discuss how to select elements or groups of elements from an array.
###Code
import numpy as np
# Create a sample array
arr = np.arange(0,11)
# Display
arr
###Output
_____no_output_____
###Markdown
Indexing and Selection with Brackets. The simplest way to select one or more elements from an array looks very similar to how it works with a list:
###Code
# Get a value by its index
arr[8]
# Get the values in a range
arr[1:5]
# Get the values in a range
arr[0:5]
###Output
_____no_output_____
###Markdown
Broadcasting. NumPy arrays differ from normal Python lists through their ability to [broadcast](https://docs.scipy.org/doc/numpy/user/basics.broadcasting.html).
###Code
# Set a value for an index range (broadcasting)
arr[0:5]=100
# Display
arr
# Reset the array. We will see in a moment why this is necessary
arr = np.arange(0,11)
# Display
arr
# Take a slice of the array
stueck_des_arr = arr[0:6]
# Display
stueck_des_arr
# Modify the slice
stueck_des_arr[:]=99
# Display the slice again
stueck_des_arr
###Output
_____no_output_____
###Markdown
Notice how this change also shows up in the original array!
###Code
arr
###Output
_____no_output_____
###Markdown
The data was not copied here. The slice we created is a view of the original array. This avoids memory problems.
###Code
# To create a copy, we have to request it explicitly
arr_kopie = arr.copy()
arr_kopie
###Output
_____no_output_____
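###Markdown
A quick way to confirm the view-versus-copy behavior is `np.shares_memory`, which reports whether two arrays share the same underlying buffer:
###Code
np.shares_memory(arr, stueck_des_arr), np.shares_memory(arr, arr_kopie)  # (True, False)
###Output
_____no_output_____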
###Markdown
Indexing in 2D Arrays (Matrices). The general format is arr_2d[row][col] or arr_2d[row,col]. I usually recommend the comma notation for more clarity.
###Code
arr_2d = np.array(([5,10,15],[20,25,30],[35,40,45]))
# Display
arr_2d
# Index a row
arr_2d[1]
# The format is arr_2d[row][col] or arr_2d[row,col]
# Select individual elements
arr_2d[1][0]
# Select individual elements
arr_2d[1,0]
# Select 2D array slices
# Shape (2,2) from the top right
arr_2d[:2,1:]
# Bottom row
arr_2d[2]
# Bottom row
arr_2d[2,:]
###Output
_____no_output_____
###Markdown
Fancy Indexing. Fancy indexing allows us to select entire rows or columns out of their order. To make this clear, let's first create a NumPy array:
###Code
# Create a matrix
arr2d = np.zeros((10,10))
# Length of the array
arr_laenge = arr2d.shape[1]
# Fill the array
for i in range(arr_laenge):
arr2d[i] = i
arr2d
###Output
_____no_output_____
###Markdown
Fancy indexing now allows us to do the following:
###Code
arr2d[[2,4,6,8]]
# And in any order
arr2d[[6,4,8,2]]
###Output
_____no_output_____
###Markdown
More Help with Indexing. Indexing a 2D matrix can be a bit confusing at first. Google Images has useful pictures that help with this. Selection. Let's now take a quick look at how we can use brackets to perform a selection based on comparison operators.
###Code
arr = np.arange(1,11)
arr
arr > 4
bool_arr = arr>4
bool_arr
arr[bool_arr]
arr[arr>2]
x=2
arr[arr>x]
###Output
_____no_output_____ |
queue_imbalance/svm/svm_rbf.ipynb | ###Markdown
SVM with RBF kernel

The goal of this notebook is to find the best parameters for the RBF kernel. We also want to check if the parameters depend on the stock.

We will use the [sklearn.svm](http://scikit-learn.org/stable/modules/generated/sklearn.svm.SVC.html#sklearn.svm.SVC) library to perform the calculations. We want to pick the best parameters for **SVC**:

* C (default 1.0)
* gamma (default 1/number_of_features, so 1 in our case)

The kernel function looks like this: $\exp(-\gamma \|x-x'\|^2)$. $\gamma$ is specified by the keyword **gamma** and must be greater than 0.
###Code
%matplotlib inline
import pandas as pd
import matplotlib.pyplot as plt
import matplotlib.dates as md
from statsmodels.distributions.empirical_distribution import ECDF
import numpy as np
import seaborn as sns
from sklearn.metrics import roc_auc_score
from sklearn.metrics import roc_curve
from sklearn.metrics import classification_report
from sklearn import svm
import warnings
from lob_data_utils import lob
sns.set_style('whitegrid')
warnings.filterwarnings('ignore')
###Output
_____no_output_____
###Markdown
DataWe use data from 5 stocks (from dates 2013-09-01 - 2013-11-16) for which logistic regression yielded the best results.We selected 3 subsets for each stock:* training set (60% of data)* test set (20% of data)* cross-validation set (20% of data)
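For illustration only, such a split could be reproduced by hand on time-ordered data (a sketch -- `lob.load_prepared_data` already returns the three subsets, and the exact boundaries it uses internally are an assumption here):
```
# sketch: a continuous-in-time 60/20/20 split of a hypothetical time-sorted frame df_all
n = len(df_all)
df_train = df_all.iloc[:int(0.6 * n)]
df_cv    = df_all.iloc[int(0.6 * n):int(0.8 * n)]
df_test  = df_all.iloc[int(0.8 * n):]
```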
###Code
stocks = ['10795', '12098', '11618', '1243', '11234']
dfs = {}
dfs_cv = {}
dfs_test = {}
for s in stocks:
df, df_cv, df_test = lob.load_prepared_data(s, cv=True)
dfs[s] = df
dfs_cv[s] = df_cv
dfs_test[s] = df_test
dfs[stocks[0]].head(5)
def svm_classification(d, kernel, gamma='auto', C=1.0):
    """Fit an SVC that predicts the mid-price indicator from queue imbalance."""
    clf = svm.SVC(kernel=kernel, gamma=gamma, C=C)
    X = d['queue_imbalance'].values.reshape(-1, 1)  # single feature
    y = d['mid_price_indicator'].values.reshape(-1, 1)
    clf.fit(X, y)
    return clf
###Output
_____no_output_____
###Markdown
MethodologyAt first we will use a naive approach to grasp how each parameter influences the ROC area score and which values make sense when the other parameters are set to their defaults.After that we will try to find the best combination of the parameters. C parameterThe C parameter influences the margin picked by the SVM:* for large values of **C** the SVM will choose a smaller-margin hyperplane, which means that more data points will be classified correctly* for small values of **C** the SVM will choose a bigger-margin hyperplane, so there may be more misclassificationsAt first we tried the parameters [0.0001, 0.001, 0.01, 0.1, 1, 10, 1000], but after the first calculations it became clear that this wasn't enough, so a few more values were introduced or removed. A quick check of the margin effect follows below.
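(A sketch using the `svm_classification` helper and the data loaded above; `n_support_` is sklearn's per-class count of support vectors.)
```
# larger C usually means a smaller margin and, typically, fewer support vectors
for c in [0.01, 1, 100]:
    clf = svm_classification(dfs[stocks[0]], 'rbf', C=c)
    print(c, clf.n_support_)
```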
###Code
cs = [0.08, 0.09, 0.1, 0.12, 1, 1.25, 1.9, 2, 6.4, 6.5, 6.6,
7, 7.1, 107, 108,
108.5]
# 0.06, 0.07
# 6.3, 6.7
# 7.2, 7.5, 8, 9
# 109, 110, 111
df_css = {}
ax = plt.subplot()
ax.set_xscale("log", basex=10)
for s in stocks:
df_cs = pd.DataFrame(index=cs)
df_cs['roc'] = np.zeros(len(df_cs))
for c in cs:
reg_svm = svm_classification(dfs[s], 'rbf', C=c)
prediction = reg_svm.predict(dfs_cv[s]['queue_imbalance'].values.reshape(-1, 1))
score = roc_auc_score(dfs_cv[s]['mid_price_indicator'], prediction)
df_cs.loc[c] = score
plt.plot(df_cs, linestyle='--', label=s, marker='x', alpha=0.5)
df_css[s] = df_cs
plt.legend()
###Output
_____no_output_____
###Markdown
Best values of the C parameterThere is no general rule for how to set this parameter.
###Code
for s in stocks:
idx = df_css[s]['roc'].idxmax()
print('For {} the best is {}'.format(s, idx))
###Output
For 10795 the best is 0.09
For 12098 the best is 107.0
For 11618 the best is 7.0
For 1243 the best is 0.1
For 11234 the best is 6.4
###Markdown
Influence of the C parameterThe score difference between the SVM with the worst choice of the **C** parameter and the one with the best choice is shown in the output below. For the scoring method we used *roc_area*. For all stocks the difference is small - less than 0.006.
###Code
for s in stocks:
err_max = df_css[s]['roc'].max()
err_min = df_css[s]['roc'].min()
print('For {} the diff between best and worst {}'.format(s, err_max - err_min))
###Output
For 10795 the diff between best and worst 0.004800787014264674
For 12098 the diff between best and worst 0.004854368932038833
For 11618 the diff between best and worst 0.0021372549019607057
For 1243 the diff between best and worst 0.005039215686274523
For 11234 the diff between best and worst 0.003030303030302939
###Markdown
GammaGamma controls the reach of each single training sample's influence - the bigger gamma is, the more the decision boundary is shaped by individual samples. When gamma is low the decision region is very broad. When gamma is high it can even create islands of decision boundaries around individual data points. The sketch below makes the effect concrete.
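(A minimal sketch, plain NumPy: the kernel value between two points drops off faster for larger gamma, which is what shrinks each point's region of influence.)
```
import numpy as np

d = np.linspace(0, 2, 5)                     # distances ||x - x'||
for gamma in [0.1, 1.0, 10.0]:
    print(gamma, np.exp(-gamma * d ** 2).round(3))
# large gamma -> similarity decays quickly -> very local decision boundaries
```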
###Code
gammas = [0.0008, 0.001, 0.09, 0.15, 0.2, 0.3, 0.4, 0.45, 0.5, 0.6, 100.5, 101, 101.5]
# 0.1
# 102
# 1, 10, 99
df_gammas = {}
ax = plt.subplot()
ax.set_xscale("log", basex=10)
for s in stocks:
df_gamma = pd.DataFrame(index=gammas)
df_gamma['roc'] = np.zeros(len(df_gamma))
for g in gammas:
reg_svm = svm_classification(dfs[s], 'rbf', gamma=g)
pred_svm_out_of_sample = reg_svm.predict(dfs_cv[s]['queue_imbalance'].values.reshape(-1, 1))
logit_roc_auc = roc_auc_score(dfs_cv[s]['mid_price_indicator'], pred_svm_out_of_sample)
df_gamma.loc[g] = logit_roc_auc
plt.plot(df_gamma, linestyle='--', label=s, marker='x', alpha=0.7)
df_gammas[s] = df_gamma
plt.legend()
###Output
_____no_output_____
###Markdown
Best values of gammaThere is no general rule for how to set this parameter.
###Code
for s in stocks:
idx = df_gammas[s]['roc'].idxmax()
print('For {} the best is {}'.format(s, idx))
###Output
For 10795 the best is 100.5
For 12098 the best is 0.3
For 11618 the best is 0.2
For 1243 the best is 0.5
For 11234 the best is 0.5
###Markdown
Influence of gammaThe score difference between the SVM with the worst choice of **gamma** and the one with the best choice is shown in the output below. For the scoring method we used *roc_area*. The differences are larger than for the **C** parameter - for two of the stocks they exceed 0.09.
###Code
for s in stocks:
err_max = df_gammas[s]['roc'].max()
err_min = df_gammas[s]['roc'].min()
print('For {} the diff between best and worst {}'.format(s, err_max - err_min))
###Output
For 10795 the diff between best and worst 0.11365469749139212
For 12098 the diff between best and worst 0.03974698440717861
For 11618 the diff between best and worst 0.027764705882352914
For 1243 the diff between best and worst 0.033627450980392215
For 11234 the diff between best and worst 0.09537118760419727
###Markdown
ResultsWe compare the results of the SVMs with the best parameter choices against logistic regression and an SVM with defaults.We will use two approaches for choosing parameters:* naive - for each stock we just pick the best values found in the previous section* grid - we calculate the roc_area score for every combination of the parameters used in the previous section (computationally heavy).We could also use GridSearchCV from the sklearn library, but the issue with it is supplying the cross-validation set (it has to be continuous in time). In the future we need to implement a method for that; a sketch of one possibility follows below. Naive approachWe pick the best **C** parameter and the best **gamma** separately from the results of the [section above](Methodology), which were obtained using the cross-validation set.
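A minimal sketch of that idea (our assumption of how it could look -- not what the code below does): `PredefinedSplit` lets us hand `GridSearchCV` a single, fixed validation fold that stays continuous in time.
```
import numpy as np
from sklearn.model_selection import GridSearchCV, PredefinedSplit

s = stocks[0]
X = np.concatenate([dfs[s]['queue_imbalance'].values,
                    dfs_cv[s]['queue_imbalance'].values]).reshape(-1, 1)
y = np.concatenate([dfs[s]['mid_price_indicator'].values,
                    dfs_cv[s]['mid_price_indicator'].values])
# training rows get fold label -1 (never used for validation); cv rows get fold 0
fold = np.concatenate([-np.ones(len(dfs[s])), np.zeros(len(dfs_cv[s]))])
grid = GridSearchCV(svm.SVC(kernel='rbf'), {'C': cs, 'gamma': gammas},
                    scoring='roc_auc', cv=PredefinedSplit(fold))
grid.fit(X, y)  # grid.best_params_ then holds the chosen C and gamma
```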
###Code
df_results = pd.DataFrame(index=stocks)
df_results['logistic'] = np.zeros(len(stocks))
df_results['rbf-naive'] = np.zeros(len(stocks))
df_results['gamma-naive'] = np.zeros(len(stocks))
df_results['c-naive'] = np.zeros(len(stocks))
df_results['rbf-default'] = np.zeros(len(stocks))
plt.subplot(121)
for s in stocks:
reg_svm = svm_classification(dfs[s], 'rbf', C=df_css[s]['roc'].idxmax(),
gamma=df_gammas[s]['roc'].idxmax())
roc_score = lob.plot_roc(df_test, reg_svm, stock=s, title='ROC for test set with the naive')
df_results['rbf-naive'][s] = roc_score
df_results['gamma-naive'][s] = df_gammas[s]['roc'].idxmax()
df_results['c-naive'][s] = df_css[s]['roc'].idxmax()
colors = ['b', 'g', 'r', 'c', 'm', 'y', 'k', 'w']
plt.subplot(122)
for s in stocks:
reg_svm = svm_classification(dfs[s], 'rbf')
roc_score = lob.plot_roc(df_test, reg_svm, stock=s, title='ROC for test set with the defaults')
df_results['rbf-default'][s] = roc_score
reg_log = lob.logistic_regression(dfs[s], 0, len(dfs[s]))
roc_score = lob.plot_roc(df_test, reg_log, stock=s, title='ROC for test set with logistic',
c=colors[stocks.index(s)], linestyle='--')
df_results['logistic'][s] = roc_score
plt.subplots_adjust(left=0, wspace=0.1, top=1, right=2)
df_results
###Output
_____no_output_____
###Markdown
Grid approachWe iterate over all combinations of the parameters **C** and **gamma**.This approach usually works better, but not in all cases.
###Code
df_params = {}
for s in stocks:
print(s)
params = []
for c in cs:
for g in gammas:
reg_svm = svm_classification(dfs[s], 'rbf', C=c, gamma=g)
prediction = reg_svm.predict(dfs_cv[s]['queue_imbalance'].values.reshape(-1, 1))
score = roc_auc_score(dfs_cv[s]['mid_price_indicator'], prediction)
params.append({'score': score, 'gamma': g, 'c': c})
df_params[s] = pd.DataFrame(params)
for s in stocks:
df_g = df_params[s].pivot(index='c', columns='gamma', values='score')
sns.heatmap(df_g)
plt.title('Best params for ' + s)
plt.figure()
###Output
_____no_output_____
###Markdown
Best parameters for grid approach
###Code
for s in stocks:
print(s, df_params[s].iloc[df_params[s]['score'].idxmax()])
df_results['rbf-grid'] = np.zeros(len(stocks))
df_results['c-grid'] = np.zeros(len(stocks))
df_results['gamma-grid'] = np.zeros(len(stocks))
plt.subplot(121)
for s in stocks:
best_idx = df_params[s]['score'].idxmax()
reg_svm = svm_classification(dfs[s], 'rbf', C=df_params[s].iloc[best_idx]['c'],
gamma=df_params[s].iloc[best_idx]['gamma'])
roc_score = lob.plot_roc(df_test, reg_svm, stock=s, title='ROC for test set with the best params')
df_results['rbf-grid'][s] = roc_score
df_results['gamma-grid'][s] = df_params[s].iloc[best_idx]['gamma']
df_results['c-grid'][s] = df_params[s].iloc[best_idx]['c']
plt.subplot(122)
for s in stocks:
reg_svm = svm_classification(dfs[s], 'rbf')
prediction = reg_svm.predict(dfs_test[s]['queue_imbalance'].values.reshape(-1, 1))
roc_score = lob.plot_roc(df_test, reg_svm, stock=s, title='ROC for test set with defaults')
df_results['rbf-default'][s] = roc_score
plt.subplots_adjust(left=0, wspace=0.1, top=1, right=2)
plt.subplot(121)
for s in stocks:
best_idx = df_params[s]['score'].idxmax()
reg_svm = svm_classification(dfs[s], 'rbf', C=df_params[s].iloc[best_idx]['c'],
gamma=df_params[s].iloc[best_idx]['gamma'])
roc_score = lob.plot_roc(df_test, reg_svm, stock=s, title='ROC for test set with the best params')
df_results['rbf-grid'][s] = roc_score
plt.subplot(122)
for s in stocks:
reg_log = lob.logistic_regression(dfs[s], 0, len(dfs[s]))
    roc_score = lob.plot_roc(df_test, reg_log, stock=s, title='ROC for test set with logistic')
df_results['logistic'][s] = roc_score
plt.subplots_adjust(left=0, wspace=0.1, top=1, right=2)
df_results[['logistic', 'rbf-naive', 'rbf-default', 'rbf-grid']]
df_results
###Output
_____no_output_____ |
content/ch-algorithms/quantum-fourier-transform.ipynb | ###Markdown
Quantum Fourier Transform In this tutorial, we introduce the quantum Fourier transform (QFT), derive the circuit, and implement it using Qiskit. We show how to run QFT on a simulator and a five-qubit device. Contents1. [Introduction](introduction)2. [Example 1: 1-qubit QFT](example1)3. [The Quantum Fourier transform](qfteqn)4. [The circuit that implements QFT](circuit)5. [Example 2: 3-qubit QFT](example2)6. [A note about the form of the QFT circuit](formnote)7. [Qiskit Implementation](implementation) - [Running QFT on a simulator](implementationsim) - [Running QFT on a real quantum device](implementationdev)8. [Problems](problems)9. [References](references) 1. Introduction The Fourier transform occurs in many different versions throughout classical computing, in areas ranging from signal processing to data compression to complexity theory. The quantum Fourier transform (QFT) is the quantum implementation of the discrete Fourier transform over the amplitudes of a wavefunction. It is part of many quantum algorithms, most notably Shor's factoring algorithm and quantum phase estimation. The discrete Fourier transform acts on a vector $(x_0, ..., x_{N-1})$ and maps it to the vector $(y_0, ..., y_{N-1})$ according to the formula$$y_k = \frac{1}{\sqrt{N}}\sum_{j=0}^{N-1}x_j\omega_N^{jk}$$where $\omega_N^{jk} = e^{2\pi i \frac{jk}{N}}$.Similarly, the quantum Fourier transform acts on a quantum state $\sum_{i=0}^{N-1} x_i \vert i \rangle$ and maps it to the quantum state $\sum_{i=0}^{N-1} y_i \vert i \rangle$ according to the formula$$y_k = \frac{1}{\sqrt{N}}\sum_{j=0}^{N-1}x_j\omega_N^{jk}$$with $\omega_N^{jk}$ defined as above. Note that only the amplitudes of the state were affected by this transformation.This can also be expressed as the map:$$\vert x \rangle \mapsto \frac{1}{\sqrt{N}}\sum_{y=0}^{N-1}\omega_N^{xy} \vert y \rangle$$Or the unitary matrix:$$ U_{QFT} = \frac{1}{\sqrt{N}} \sum_{x=0}^{N-1} \sum_{y=0}^{N-1} \omega_N^{xy} \vert y \rangle \langle x \vert$$ 2. Example 1: 1-qubit QFT Consider how the QFT operator as defined above acts on a single qubit state $\vert\psi\rangle = \alpha \vert 0 \rangle + \beta \vert 1 \rangle$. In this case, $x_0 = \alpha$, $x_1 = \beta$, and $N = 2$. Then,$$y_0 = \frac{1}{\sqrt{2}}\left( \alpha \exp\left(2\pi i\frac{0\times0}{2}\right) + \beta \exp\left(2\pi i\frac{1\times0}{2}\right) \right) = \frac{1}{\sqrt{2}}\left(\alpha + \beta\right)$$and$$y_1 = \frac{1}{\sqrt{2}}\left( \alpha \exp\left(2\pi i\frac{0\times1}{2}\right) + \beta \exp\left(2\pi i\frac{1\times1}{2}\right) \right) = \frac{1}{\sqrt{2}}\left(\alpha - \beta\right)$$such that the final result is the state $$U_{QFT}\vert\psi\rangle = \frac{1}{\sqrt{2}}(\alpha + \beta) \vert 0 \rangle + \frac{1}{\sqrt{2}}(\alpha - \beta) \vert 1 \rangle$$This operation is exactly the result of applying the Hadamard operator ($H$) on the qubit:$$H = \frac{1}{\sqrt{2}}\begin{bmatrix} 1 & 1 \\ 1 & -1 \end{bmatrix}$$If we apply the $H$ operator to the state $\vert\psi\rangle = \alpha \vert 0 \rangle + \beta \vert 1 \rangle$, we obtain the new state:$$\frac{1}{\sqrt{2}}(\alpha + \beta) \vert 0 \rangle + \frac{1}{\sqrt{2}}(\alpha - \beta) \vert 1 \rangle \equiv \tilde{\alpha}\vert 0 \rangle + \tilde{\beta}\vert 1 \rangle$$Notice how the Hadamard gate performs the discrete Fourier transform for $N = 2$ on the amplitudes of the state. 3. The Quantum Fourier transform So what does the quantum Fourier transform look like for larger $N$?
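Before deriving the circuit, the classical transform from section 1 can be sanity-checked numerically (a sketch assuming plain NumPy; note that `np.fft` uses the opposite sign convention and no $1/\sqrt{N}$ factor, hence the rescaled `ifft` below):
```
import numpy as np

x = np.random.rand(8) + 1j * np.random.rand(8)
N = len(x)
# direct evaluation of y_k = (1/sqrt(N)) * sum_j x_j * exp(2*pi*i*j*k/N)
y = np.array([sum(x[j] * np.exp(2j * np.pi * j * k / N) for j in range(N))
              for k in range(N)]) / np.sqrt(N)
print(np.allclose(y, np.sqrt(N) * np.fft.ifft(x)))  # True
```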
Let's derive a circuit for $N=2^n$, $QFT_N$ acting on the state $\vert x \rangle = \vert x_1\ldots x_n \rangle$ where $x_1$ is the most significant bit.$$\begin{aligned}QFT_N\vert x \rangle & = \frac{1}{\sqrt{N}} \sum_{y=0}^{N-1}\omega_N^{xy} \vert y \rangle \\& = \frac{1}{\sqrt{N}} \sum_{y=0}^{N-1} e^{2 \pi i xy / 2^n} \vert y \rangle ~\text{since}\: \omega_N^{xy} = e^{2\pi i \frac{xy}{N}} \:\text{and}\: N = 2^n \\& = \frac{1}{\sqrt{N}} \sum_{y=0}^{N-1} e^{2 \pi i \left(\sum_{k=1}^n y_k/2^k\right) x} \vert y_1 \ldots y_n \rangle \:\text{rewriting in fractional binary notation}\: y = y_1\ldots y_n, y/2^n = \sum_{k=1}^n y_k/2^k \\& = \frac{1}{\sqrt{N}} \sum_{y=0}^{N-1} \prod_{k=1}^n e^{2 \pi i x y_k/2^k } \vert y_1 \ldots y_n \rangle \:\text{after expanding the exponential of a sum to a product of exponentials} \\& = \frac{1}{\sqrt{N}} \bigotimes_{k=1}^n \left(\vert0\rangle + e^{2 \pi i x /2^k } \vert1\rangle \right) \:\text{after rearranging the sum and products, and expanding} \sum_{y=0}^{N-1} = \sum_{y_1=0}^{1}\sum_{y_2=0}^{1}\ldots\sum_{y_n=0}^{1} \\& = \frac{1}{\sqrt{N}}\left(\vert0\rangle + e^{\frac{2\pi i}{2}x} \vert1\rangle\right) \otimes\left(\vert0\rangle + e^{\frac{2\pi i}{2^2}x} \vert1\rangle\right) \otimes \ldots\otimes\left(\vert0\rangle + e^{\frac{2\pi i}{2^{n-1}}x} \vert1\rangle\right) \otimes\left(\vert0\rangle + e^{\frac{2\pi i}{2^n}x} \vert1\rangle\right) \end{aligned}$$ 4. The circuit that implements QFT The circuit that implements QFT makes use of two gates. The first one is a single-qubit Hadamard gate, $H$, that you already know. From the discussion in [Example 1](example1) above, you have already seen that the action of $H$ on the single-qubit state $\vert x_k\rangle$ is$$H\vert x_k \rangle = \frac{1}{\sqrt{2}}\left(\vert0\rangle + \exp\left(\frac{2\pi i}{2}x_k\right)\vert1\rangle\right)$$The second is a two-qubit controlled rotation $CROT_k$ given in block-diagonal form as $$CROT_k = \left[\begin{matrix}I&0\\0&UROT_k\\\end{matrix}\right]$$where $$UROT_k = \left[\begin{matrix}1&0\\0&\exp\left(\frac{2\pi i}{2^k}\right)\\\end{matrix}\right]$$The action of $CROT_k$ on the two-qubit state $\vert x_jx_k\rangle$ where the first qubit is the control and the second is the target is given by$$CROT_k\vert x_j0\rangle = \vert x_j0\rangle$$and$$CROT_k\vert x_j1\rangle = \exp\left( \frac{2\pi i}{2^k}x_j \right)\vert x_j1\rangle$$Given these two gates, a circuit that implements [an n-qubit QFT](qfteqn) is shown below.The circuit operates as follows. We start with an n-qubit input state $\vert x_1x_2\ldots x_n\rangle$.
After the first Hadamard gate on qubit 1, the state is transformed from the input state to $$H_1\vert x_1x_2\ldots x_n\rangle = \frac{1}{\sqrt{2}}\left[\vert0\rangle + \exp\left(\frac{2\pi i}{2}x_1\right)\vert1\rangle\right]\otimes\vert x_2x_3\ldots x_n\rangle$$ After the $CROT_2$ gate on qubit 1 controlled by qubit 2, the state is transformed to$$\frac{1}{\sqrt{2}}\left[\vert0\rangle + \exp\left(\frac{2\pi i}{2^2}x_2 + \frac{2\pi i}{2}x_1\right)\vert1\rangle\right]\otimes\vert x_2x_3\ldots x_n\rangle$$ After the application of the last $CROT_n$ gate on qubit 1 controlled by qubit $n$, the state becomes$$\frac{1}{\sqrt{2}}\left[\vert0\rangle + \exp\left(\frac{2\pi i}{2^n}x_n + \frac{2\pi i}{2^{n-1}}x_{n-1} + \ldots + \frac{2\pi i}{2^2}x_2 + \frac{2\pi i}{2}x_1\right)\vert1\rangle\right]\otimes\vert x_2x_3\ldots x_n\rangle$$Noting that $$x = 2^{n-1}x_1 + 2^{n-2}x_2 + \ldots + 2^1x_{n-1} + 2^0x_n$$we can write the above state as $$\frac{1}{\sqrt{2}}\left[\vert0\rangle + \exp\left(\frac{2\pi i}{2^n}x \right)\vert1\rangle\right]\otimes\vert x_2x_3\ldots x_n\rangle$$ After the application of a similar sequence of gates for qubits $2\ldots n$, we find the final state to be$$\frac{1}{\sqrt{2}}\left[\vert0\rangle + \exp\left(\frac{2\pi i}{2^n}x \right)\vert1\rangle\right]\otimes\frac{1}{\sqrt{2}}\left[\vert0\rangle + \exp\left(\frac{2\pi i}{2^{n-1}}x \right)\vert1\rangle\right]\otimes\ldots\otimes\frac{1}{\sqrt{2}}\left[\vert0\rangle + \exp\left(\frac{2\pi i}{2^{2}}x \right)\vert1\rangle\right]\otimes\frac{1}{\sqrt{2}}\left[\vert0\rangle + \exp\left(\frac{2\pi i}{2^{1}}x \right)\vert1\rangle\right]$$which is exactly the QFT of the input state as derived above with the caveat that the order of the qubits is reversed in the output state. 5. Example 2: 3-qubit QFT The steps to creating the circuit for $\vert y_1y_2y_3\rangle = QFT_8\vert x_1x_2x_3\rangle$ would be: Apply a Hadamard gate to $\vert x_3 \rangle$$$\psi_1 = \vert x_1\rangle\otimes\vert x_2\rangle\otimes\frac{1}{\sqrt{2}}\left[\vert0\rangle + \exp\left(\frac{2\pi i}{2}x_3\right) \vert1\rangle\right]$$ Apply a $CROT_2$ gate to $\vert x_3\rangle$ depending on $\vert x_2\rangle$$$\psi_2 = \vert x_1\rangle\otimes\vert x_2\rangle\otimes\frac{1}{\sqrt{2}}\left[\vert0\rangle + \exp\left(\frac{2\pi i}{2^2}x_2 + \frac{2\pi i}{2}x_3\right) \vert1\rangle\right]$$ Apply a $CROT_3$ gate to $\vert x_3\rangle$ depending on $\vert x_1\rangle$$$\psi_3 = \vert x_1\rangle\otimes\vert x_2\rangle\otimes\frac{1}{\sqrt{2}}\left[\vert0\rangle + \exp\left(\frac{2\pi i}{2^3}x_1 + \frac{2\pi i}{2^2}x_2 + \frac{2\pi i}{2}x_3\right) \vert1\rangle\right]$$ Apply a Hadamard gate to $\vert x_2 \rangle$$$\psi_4 = \vert x_1\rangle\otimes\frac{1}{\sqrt{2}}\left[\vert0\rangle + \exp\left(\frac{2\pi i}{2}x_2\right) \vert1\rangle\right]\otimes\frac{1}{\sqrt{2}}\left[\vert0\rangle + \exp\left(\frac{2\pi i}{2^3}x_1 + \frac{2\pi i}{2^2}x_2 + \frac{2\pi i}{2}x_3\right) \vert1\rangle\right]$$ Apply a $CROT_2$ gate to $\vert x_2\rangle$ depending on $\vert x_1\rangle$$$\psi_5 = \vert x_1\rangle\otimes\frac{1}{\sqrt{2}}\left[\vert0\rangle + \exp\left(\frac{2\pi i}{2^2}x_1 + \frac{2\pi i}{2}x_2\right) \vert1\rangle\right]\otimes\frac{1}{\sqrt{2}}\left[\vert0\rangle + \exp\left(\frac{2\pi i}{2^3}x_1 + \frac{2\pi i}{2^2}x_2 + \frac{2\pi i}{2}x_3\right) \vert1\rangle\right]$$ Apply a Hadamard gate to $\vert x_1\rangle$$$\psi_6 = \frac{1}{\sqrt{2}}\left[\vert0\rangle + \exp\left(\frac{2\pi i}{2}x_1\right) \vert1\rangle\right]\otimes\frac{1}{\sqrt{2}}\left[\vert0\rangle + 
\exp\left(\frac{2\pi i}{2^2}x_1 + \frac{2\pi i}{2}x_2\right) \vert1\rangle\right]\otimes\frac{1}{\sqrt{2}}\left[\vert0\rangle + \exp\left(\frac{2\pi i}{2^3}x_1 + \frac{2\pi i}{2^2}x_2 + \frac{2\pi i}{2}x_3\right) \vert1\rangle\right]$$ Keep in mind the reverse order of the output state relative to the desired QFT. Therefore, measure the bits in reverse order, that is $y_3 = x_1, y_2 = x_2, y_1 = x_3$. 6. A note about the form of the QFT circuit The example above demonstrates a very useful form of the QFT for $N=2^n$. Note that only the last qubit depends on the values of all the other input qubits and each further bit depends less and less on the input qubits. This becomes important in physical implementations of the QFT, where nearest-neighbor couplings are easier to achieve than distant couplings between qubits. 7. Qiskit ImplementationIn Qiskit, the implementation of the $CROT$ gate used in the discussion above is a controlled phase rotation gate. This gate is defined in [OpenQASM](https://github.com/QISKit/openqasm) as$$CU_1(\theta) =\begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & e^{i\theta}\end{bmatrix}$$Hence, the mapping from the $CROT_k$ gate in the discussion above into the $CU_1$ gate is found from the equation$$\theta = 2\pi/2^k = \pi/2^{k-1}$$It is instructive to write out the relevant code for the 3-qubit case before generalizing to the $n$-qubit case. In Qiskit, it is:
```
qft3 = QuantumCircuit(3, 3)
qft3.h(0)
qft3.cu1(math.pi/2.0, 1, 0) # CROT_2 from qubit 1 to qubit 0
qft3.cu1(math.pi/4.0, 2, 0) # CROT_3 from qubit 2 to qubit 0
qft3.h(1)
qft3.cu1(math.pi/2.0, 2, 1) # CROT_2 from qubit 2 to qubit 1
qft3.h(2)
```
Following the above example, the case for $n$ qubits can be generalized as:
```
def qft(circ, n):
    """n-qubit QFT on the qubits in circ."""
    for j in range(n):
        circ.h(j)
        for k in range(j+1,n):
            circ.cu1(math.pi/float(2**(k-j)), k, j)
```
We will now implement the three-qubit QFT as discussed above. We first create a state whose QFT is known. The output after a QFT is applied to this special state is $\vert001\rangle$.
###Code
import numpy as np
pi = np.pi
# importing Qiskit
from qiskit import BasicAer, IBMQ
from qiskit import QuantumCircuit, execute
from qiskit.providers.ibmq import least_busy
from qiskit.tools.monitor import job_monitor
from qiskit.visualization import plot_histogram
###Output
_____no_output_____
###Markdown
First let's define the QFT function, as well as a function that creates a state from which a QFT will return 001:
###Code
def input_state(circ, n):
"""special n-qubit input state for QFT that produces output 1."""
for j in range(n):
circ.h(j)
circ.u1(-pi/float(2**(j)), j)
def qft(circ, n):
"""n-qubit QFT on the qubits in circ."""
for j in range(n):
circ.h(j)
for k in range(j+1,n):
circ.cu1(pi/float(2**(k-j)), k, j)
circ.barrier()
swap_registers(circ, n)
def swap_registers(circ, n):
for j in range(int(np.floor(n/2.))):
circ.swap(j, n-j-1)
return circ
###Output
_____no_output_____
###Markdown
Let's now implement a QFT on a prepared three qubit input state that should return $001$:
###Code
n = 3
qft_circuit = QuantumCircuit(n)
# first, prepare the state that should return 001 and draw that circuit
input_state(qft_circuit, n)
qft_circuit.draw(output='mpl')
# next, do a qft on the prepared state and draw the entire circuit
qft_circuit.barrier()
qft(qft_circuit, n)
qft_circuit.measure_all()
qft_circuit.draw(output='mpl')
###Output
_____no_output_____
###Markdown
7a. Running QFT on a simulator
###Code
# run on local simulator
backend = BasicAer.get_backend("qasm_simulator")
simulate = execute(qft_circuit, backend=backend, shots=1024).result()
simulate.get_counts()
###Output
_____no_output_____
###Markdown
We indeed see that the simulator always returns the same outcome. Note the reversed order of the measured value $100$ compared to the expected value $001$: we expected this, since the output register contains the QFT values in reversed qubit order. 7b. Running QFT on a real quantum device We now see how the same circuit can be executed on real-device backends.
###Code
# Load our saved IBMQ accounts and get the least busy backend device with less than or equal to n qubits
IBMQ.load_account()
provider = IBMQ.get_provider(hub='ibm-q')
backend = least_busy(provider.backends(filters=lambda x: x.configuration().n_qubits >= n and
not x.configuration().simulator and x.status().operational==True))
print("least busy backend: ", backend)
shots = 2048
job_exp = execute(qft_circuit, backend=backend, shots=shots)
job_monitor(job_exp)
results = job_exp.result()
plot_histogram(results.get_counts())
###Output
_____no_output_____
###Markdown
We see that the highest probability outcome is still $100$ on a real device. Recall again that the output of the QFT circuit has the qubits in reverse order. 8. Problems 1. The [above implementation](implementation) of QFT was tested by using a special input state for which QFT(input state) = 001. Implement an input state for which QFT(input state) = 100.2. The [above implementation](implementation) of QFT was tested by using a special input state for which QFT(input state) = 001. Implement an input state for which QFT(input state) = 101. 9. References 1. M. Nielsen and I. Chuang, Quantum Computation and Quantum Information, Cambridge Series on Information and the Natural Sciences (Cambridge University Press, Cambridge, 2000).
###Code
import qiskit
qiskit.__qiskit_version__
###Output
_____no_output_____
###Markdown
Quantum Fourier Transform In this tutorial, we introduce the quantum fourier transform (QFT), derive the circuit, and implement it using Qiskit. We show how to run QFT on a simulator and a five qubit device. Contents1. [Introduction](introduction)2. [Intuition](intuition) 2.1 [Counting in the Fourier Basis](counting-fourier) 3. [Example 1: 1-qubit QFT](example1)4. [The Quantum Fourier transform](qfteqn)5. [The Circuit that Implements the QFT](circuit)6. [Example 2: 3-qubit QFT](example2)7. [Some Notes About the Form of the QFT Circuit](formnote)8. [Qiskit Implementation](implementation) 8.1 [Example on 3 Qubits](threeqft) 8.2 [General QFT Function](generalqft) 8.3 [Running QFT on a Real Quantum Device](implementationdev) 9. [Problems](problems)10. [References](references) 1. Introduction The Fourier transform occurs in many different versions throughout classical computing, in areas ranging from signal processing to data compression to complexity theory. The quantum Fourier transform (QFT) is the quantum implementation of the discrete Fourier transform over the amplitudes of a wavefunction. It is part of many quantum algorithms, most notably Shor's factoring algorithm and quantum phase estimation. The discrete Fourier transform acts on a vector $(x_0, ..., x_{N-1})$ and maps it to the vector $(y_0, ..., y_{N-1})$ according to the formula$$y_k = \frac{1}{\sqrt{N}}\sum_{j=0}^{N-1}x_j\omega_N^{jk}$$where $\omega_N^{jk} = e^{2\pi i \frac{jk}{N}}$.Similarly, the quantum Fourier transform acts on a quantum state $\vert X\rangle = \sum_{j=0}^{N-1} x_j \vert j \rangle$ and maps it to the quantum state $\vert Y\rangle = \sum_{k=0}^{N-1} y_k \vert k \rangle$ according to the formula$$y_k = \frac{1}{\sqrt{N}}\sum_{j=0}^{N-1}x_j\omega_N^{jk}$$with $\omega_N^{jk}$ defined as above. Note that only the amplitudes of the state were affected by this transformation.This can also be expressed as the map:$$\vert j \rangle \mapsto \frac{1}{\sqrt{N}}\sum_{k=0}^{N-1}\omega_N^{jk} \vert k \rangle$$Or the unitary matrix:$$ U_{QFT} = \frac{1}{\sqrt{N}} \sum_{j=0}^{N-1} \sum_{k=0}^{N-1} \omega_N^{jk} \vert k \rangle \langle j \vert$$ 2. Intuition The quantum Fourier transform (QFT) transforms between two bases, the computational (Z) basis, and the Fourier basis. The H-gate is the single-qubit QFT, and it transforms between the Z-basis states $|0\rangle$ and $|1\rangle$ to the X-basis states $|{+}\rangle$ and $|{-}\rangle$. In the same way, all multi-qubit states in the computational basis have corresponding states in the Fourier basis. The QFT is simply the function that transforms between these bases.$$|\text{State in Computational Basis}\rangle \quad \xrightarrow[]{\text{QFT}} \quad |\text{State in Fourier Basis}\rangle$$$$\text{QFT}|x\rangle = |\widetilde{x}\rangle$$(We often note states in the Fourier basis using the tilde (~)). 2.1 Counting in the Fourier basis: In the computational basis, we store numbers in binary using the states $|0\rangle$ and $|1\rangle$:![zbasiscounting](images/zbasis-counting.gif)Note the frequency with which the different qubits change; the leftmost qubit flips with every increment in the number, the next with every 2 increments, the third with every 4 increments, and so on. In the Fourier basis, we store numbers using different rotations around the Z-axis:![fbasiscounting](images/fourierbasis-counting.gif)The number we want to store dictates the angle at which each qubit is rotated around the Z-axis. In the state $|\widetilde{0}\rangle$, all qubits are in the state $|{+}\rangle$. 
As seen in the example above, to encode the state $|\widetilde{5}\rangle$ on 4 qubits, we rotated the leftmost qubit by $\tfrac{5}{2^n} = \tfrac{5}{16}$ full turns ($\tfrac{5}{16}\times 2\pi$ radians). The next qubit is turned double this ($\tfrac{10}{16}\times 2\pi$ radians, or $10/16$ full turns), this angle is then doubled for the qubit after, and so on. Again, note the frequency with which each qubit changes. The leftmost qubit (`qubit 0`) in this case has the lowest frequency, and the rightmost the highest. 3. Example 1: 1-qubit QFT Consider how the QFT operator as defined above acts on a single qubit state $\vert\psi\rangle = \alpha \vert 0 \rangle + \beta \vert 1 \rangle$. In this case, $x_0 = \alpha$, $x_1 = \beta$, and $N = 2$. Then,$$y_0 = \frac{1}{\sqrt{2}}\left( \alpha \exp\left(2\pi i\frac{0\times0}{2}\right) + \beta \exp\left(2\pi i\frac{1\times0}{2}\right) \right) = \frac{1}{\sqrt{2}}\left(\alpha + \beta\right)$$and$$y_1 = \frac{1}{\sqrt{2}}\left( \alpha \exp\left(2\pi i\frac{0\times1}{2}\right) + \beta \exp\left(2\pi i\frac{1\times1}{2}\right) \right) = \frac{1}{\sqrt{2}}\left(\alpha - \beta\right)$$such that the final result is the state $$U_{QFT}\vert\psi\rangle = \frac{1}{\sqrt{2}}(\alpha + \beta) \vert 0 \rangle + \frac{1}{\sqrt{2}}(\alpha - \beta) \vert 1 \rangle$$This operation is exactly the result of applying the Hadamard operator ($H$) on the qubit:$$H = \frac{1}{\sqrt{2}}\begin{bmatrix} 1 & 1 \\ 1 & -1 \end{bmatrix}$$If we apply the $H$ operator to the state $\vert\psi\rangle = \alpha \vert 0 \rangle + \beta \vert 1 \rangle$, we obtain the new state:$$\frac{1}{\sqrt{2}}(\alpha + \beta) \vert 0 \rangle + \frac{1}{\sqrt{2}}(\alpha - \beta) \vert 1 \rangle \equiv \tilde{\alpha}\vert 0 \rangle + \tilde{\beta}\vert 1 \rangle$$Notice how the Hadamard gate performs the discrete Fourier transform for $N = 2$ on the amplitudes of the state. 4. The Quantum Fourier transform So what does the quantum Fourier transform look like for larger $N$? Let's derive a transformation for $N=2^n$, $QFT_N$ acting on the state $\vert x \rangle = \vert x_1\ldots x_n \rangle$ where $x_1$ is the most significant bit. 
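As a quick numeric check of the counting intuition from section 2, the rotation fractions can be computed directly (a small sketch, plain Python; as in the animation, qubit 0 is taken to have the lowest frequency). The derivation then follows.
```
n, number = 4, 5
for qubit in range(n):
    turns = (2 ** qubit * number / 2 ** n) % 1
    print(f"qubit {qubit}: {turns} of a full turn")
# qubit 0: 0.3125 (= 5/16), qubit 1: 0.625 (= 10/16), qubit 2: 0.25, qubit 3: 0.5
```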
This maths is here for those that find it useful, if you struggle with it then don’t worry; as long as you understand the intuition in section 2 then you can continue straight to the next section.$$\begin{aligned}QFT_N\vert x \rangle & = \frac{1}{\sqrt{N}} \sum_{y=0}^{N-1}\omega_N^{xy} \vert y \rangle \\& = \frac{1}{\sqrt{N}} \sum_{y=0}^{N-1} e^{2 \pi i xy / 2^n} \vert y \rangle ~\text{since}\: \omega_N^{xy} = e^{2\pi i \frac{xy}{N}} \:\text{and}\: N = 2^n \\& = \frac{1}{\sqrt{N}} \sum_{y=0}^{N-1} e^{2 \pi i \left(\sum_{k=1}^n y_k/2^k\right) x} \vert y_1 \ldots y_n \rangle \:\text{rewriting in fractional binary notation}\: y = y_1\ldots y_n, y/2^n = \sum_{k=1}^n y_k/2^k \\& = \frac{1}{\sqrt{N}} \sum_{y=0}^{N-1} \prod_{k=1}^n e^{2 \pi i x y_k/2^k } \vert y_1 \ldots y_n \rangle \:\text{after expanding the exponential of a sum to a product of exponentials} \\& = \frac{1}{\sqrt{N}} \bigotimes_{k=1}^n \left(\vert0\rangle + e^{2 \pi i x /2^k } \vert1\rangle \right) \:\text{after rearranging the sum and products, and expanding} \sum_{y=0}^{N-1} = \sum_{y_1=0}^{1}\sum_{y_2=0}^{1}\ldots\sum_{y_n=0}^{1} \\& = \frac{1}{\sqrt{N}}\left(\vert0\rangle + e^{\frac{2\pi i}{2}x} \vert1\rangle\right) \otimes\left(\vert0\rangle + e^{\frac{2\pi i}{2^2}x} \vert1\rangle\right) \otimes \ldots\otimes\left(\vert0\rangle + e^{\frac{2\pi i}{2^{n-1}}x} \vert1\rangle\right) \otimes\left(\vert0\rangle + e^{\frac{2\pi i}{2^n}x} \vert1\rangle\right) \end{aligned}$$This is a mathematical description of the animation we saw in the intuition section:![fbasiscounting](images/fourierbasis-counting.gif) 5. The Circuit that Implements the QFT The circuit that implements QFT makes use of two gates. The first one is a single-qubit Hadamard gate, $H$, that you already know. From the discussion in [Example 1](example1) above, you have already seen that the action of $H$ on the single-qubit state $\vert x_k\rangle$ is$$H\vert x_k \rangle = \frac{1}{\sqrt{2}}\left(\vert0\rangle + \exp\left(\frac{2\pi i}{2}x_k\right)\vert1\rangle\right)$$The second is a two-qubit controlled rotation $CROT_k$ given in block-diagonal form as $$CROT_k = \left[\begin{matrix}I&0\\0&UROT_k\\\end{matrix}\right]$$where $$UROT_k = \left[\begin{matrix}1&0\\0&\exp\left(\frac{2\pi i}{2^k}\right)\\\end{matrix}\right]$$The action of $CROT_k$ on a two-qubit state $\vert x_l x_j\rangle$ where the first qubit is the control and the second is the target is given by$$CROT_k\vert 0x_j\rangle = \vert 0x_j\rangle$$and$$CROT_k\vert 1x_j\rangle = \exp\left( \frac{2\pi i}{2^k}x_j \right)\vert 1x_j\rangle$$Given these two gates, a circuit that implements [an n-qubit QFT](qfteqn) is shown below.![image1](images/qft.png)The circuit operates as follows. We start with an n-qubit input state $\vert x_1x_2\ldots x_n\rangle$. 
After the first Hadamard gate on qubit 1, the state is transformed from the input state to $$H_1\vert x_1x_2\ldots x_n\rangle = \frac{1}{\sqrt{2}}\left[\vert0\rangle + \exp\left(\frac{2\pi i}{2}x_1\right)\vert1\rangle\right]\otimes\vert x_2x_3\ldots x_n\rangle$$ After the $UROT_2$ gate on qubit 1 controlled by qubit 2, the state is transformed to$$\frac{1}{\sqrt{2}}\left[\vert0\rangle + \exp\left(\frac{2\pi i}{2^2}x_2 + \frac{2\pi i}{2}x_1\right)\vert1\rangle\right]\otimes\vert x_2x_3\ldots x_n\rangle$$ After the application of the last $UROT_n$ gate on qubit 1 controlled by qubit $n$, the state becomes$$\frac{1}{\sqrt{2}}\left[\vert0\rangle + \exp\left(\frac{2\pi i}{2^n}x_n + \frac{2\pi i}{2^{n-1}}x_{n-1} + \ldots + \frac{2\pi i}{2^2}x_2 + \frac{2\pi i}{2}x_1\right)\vert1\rangle\right]\otimes\vert x_2x_3\ldots x_n\rangle$$Noting that $$x = 2^{n-1}x_1 + 2^{n-2}x_2 + \ldots + 2^1x_{n-1} + 2^0x_n$$we can write the above state as $$\frac{1}{\sqrt{2}}\left[\vert0\rangle + \exp\left(\frac{2\pi i}{2^n}x \right)\vert1\rangle\right]\otimes\vert x_2x_3\ldots x_n\rangle$$ After the application of a similar sequence of gates for qubits $2\ldots n$, we find the final state to be:$$\frac{1}{\sqrt{2}}\left[\vert0\rangle + \exp\left(\frac{2\pi i}{2^n}x \right)\vert1\rangle\right]\otimes\frac{1}{\sqrt{2}}\left[\vert0\rangle + \exp\left(\frac{2\pi i}{2^{n-1}}x \right)\vert1\rangle\right]\otimes\ldots\otimes\frac{1}{\sqrt{2}}\left[\vert0\rangle + \exp\left(\frac{2\pi i}{2^{2}}x \right)\vert1\rangle\right]\otimes\frac{1}{\sqrt{2}}\left[\vert0\rangle + \exp\left(\frac{2\pi i}{2^{1}}x \right)\vert1\rangle\right]$$which is exactly the QFT of the input state as derived above with the caveat that the order of the qubits is reversed in the output state. 6. Example 2: 3-qubit QFT The steps to creating the circuit for $\vert y_3y_2y_1\rangle = QFT_8\vert x_3x_2x_1\rangle$ would be: Apply a Hadamard gate to $\vert x_1 \rangle$$$|\psi_1\rangle = \vert x_3\rangle\otimes\vert x_2\rangle\otimes\frac{1}{\sqrt{2}}\left[\vert0\rangle + \exp\left(\frac{2\pi i}{2}x_1\right) \vert1\rangle\right]$$ Apply a $UROT_2$ gate to $\vert x_1\rangle$ depending on $\vert x_2\rangle$$$|\psi_2\rangle = \vert x_3\rangle\otimes\vert x_2\rangle\otimes\frac{1}{\sqrt{2}}\left[\vert0\rangle + \exp\left(\frac{2\pi i}{2^2}x_2 + \frac{2\pi i}{2}x_1\right) \vert1\rangle\right]$$ Apply a $UROT_3$ gate to $\vert x_1\rangle$ depending on $\vert x_3\rangle$$$|\psi_3\rangle = \vert x_3\rangle\otimes\vert x_2\rangle\otimes\frac{1}{\sqrt{2}}\left[\vert0\rangle + \exp\left(\frac{2\pi i}{2^3}x_3 + \frac{2\pi i}{2^2}x_2 + \frac{2\pi i}{2}x_1\right) \vert1\rangle\right]$$ Apply a Hadamard gate to $\vert x_2 \rangle$$$|\psi_4\rangle = \vert x_3\rangle\otimes\frac{1}{\sqrt{2}}\left[\vert0\rangle + \exp\left(\frac{2\pi i}{2}x_2\right) \vert1\rangle\right]\otimes\frac{1}{\sqrt{2}}\left[\vert0\rangle + \exp\left(\frac{2\pi i}{2^3}x_3 + \frac{2\pi i}{2^2}x_2 + \frac{2\pi i}{2}x_1\right) \vert1\rangle\right]$$ Apply a $UROT_2$ gate to $\vert x_2\rangle$ depending on $\vert x_3\rangle$$$|\psi_5\rangle = \vert x_3\rangle\otimes\frac{1}{\sqrt{2}}\left[\vert0\rangle + \exp\left(\frac{2\pi i}{2^2}x_3 + \frac{2\pi i}{2}x_2\right) \vert1\rangle\right]\otimes\frac{1}{\sqrt{2}}\left[\vert0\rangle + \exp\left(\frac{2\pi i}{2^3}x_3 + \frac{2\pi i}{2^2}x_2 + \frac{2\pi i}{2}x_1\right) \vert1\rangle\right]$$ Apply a Hadamard gate to $\vert x_3\rangle$$$|\psi_6\rangle = \frac{1}{\sqrt{2}}\left[\vert0\rangle + \exp\left(\frac{2\pi i}{2}x_3\right) 
\vert1\rangle\right]\otimes\frac{1}{\sqrt{2}}\left[\vert0\rangle + \exp\left(\frac{2\pi i}{2^2}x_3 + \frac{2\pi i}{2}x_2\right) \vert1\rangle\right]\otimes\frac{1}{\sqrt{2}}\left[\vert0\rangle + \exp\left(\frac{2\pi i}{2^3}x_3 + \frac{2\pi i}{2^2}x_2 + \frac{2\pi i}{2}x_1\right) \vert1\rangle\right]$$ Keep in mind the reverse order of the output state relative to the desired QFT. Therefore, we must reverse the order of the qubits (in this case swap $y_1$ and $y_3$). 7. Some Notes About the Form of the QFT Circuit The example above demonstrates a very useful form of the QFT for $N=2^n$. Note that only the last qubit depends on the values of all the other input qubits and each further bit depends less and less on the input qubits. This becomes important in physical implementations of the QFT, where nearest-neighbor couplings are easier to achieve than distant couplings between qubits.Additionally, as the QFT circuit becomes large, an increasing amount of time is spent doing increasingly slight rotations. It turns out that we can ignore rotations below a certain threshold and still get decent results, this is known as the approximate QFT. This is also important in physical implementations, as reducing the number of operations can greatly reduce decoherence and potential gate errors. 8. Qiskit ImplementationIn Qiskit, the implementation of the $CROT$ gate used in the discussion above is a controlled phase rotation gate. This gate is defined in [OpenQASM](https://github.com/QISKit/openqasm) as$$CP(\theta) =\begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & e^{i\theta}\end{bmatrix}$$Hence, the mapping from the $CROT_k$ gate in the discussion above into the $CP$ gate is found from the equation$$\theta = 2\pi/2^k = \pi/2^{k-1}$$ 8.1 Example on 3 Qubits
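Before building the circuit, it can help to write $CP(\theta)$ down explicitly and connect it with $CROT_k$ (a NumPy sketch on our part, separate from the Qiskit code that follows):
```
import numpy as np

def cp_matrix(theta):
    """The 4x4 diagonal matrix of CP(theta) shown above."""
    return np.diag([1, 1, 1, np.exp(1j * theta)])

# CROT_k corresponds to CP(pi / 2**(k-1)); e.g. k = 2 gives the quarter-turn CP(pi/2)
print(cp_matrix(np.pi / 2).round(3))
```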
###Code
import numpy as np
from numpy import pi
# importing Qiskit
from qiskit import QuantumCircuit, transpile, assemble, Aer, IBMQ
from qiskit.providers.ibmq import least_busy
from qiskit.tools.monitor import job_monitor
from qiskit.visualization import plot_histogram, plot_bloch_multivector
###Output
_____no_output_____
###Markdown
It is useful to work out the relevant code for the 3-qubit case before generalizing to the $n$-qubit case. First, we must define our quantum circuit:
###Code
qc = QuantumCircuit(3)
###Output
_____no_output_____
###Markdown
**Note**: Remember that Qiskit's least significant bit has the lowest index (0), so the circuit will be mirrored through the horizontal relative to the image in section 5. First, we apply an H-gate to qubit 2:
###Code
qc.h(2)
qc.draw()
###Output
_____no_output_____
###Markdown
Next, we want to rotate this qubit an extra quarter turn if qubit 1 is in the state $|1\rangle$:
###Code
qc.cp(pi/2, 1, 2) # CROT from qubit 1 to qubit 2
qc.draw()
###Output
_____no_output_____
###Markdown
And another eighth turn if the least significant qubit (0) is $|1\rangle$:
###Code
qc.cp(pi/4, 0, 2) # CROT from qubit 0 to qubit 2
qc.draw()
###Output
_____no_output_____
###Markdown
With that qubit taken care of, we can now ignore it and repeat the process, using the same logic for qubits 0 and 1:
###Code
qc.h(1)
qc.cp(pi/2, 0, 1) # CROT from qubit 0 to qubit 1
qc.h(0)
qc.draw()
###Output
_____no_output_____
###Markdown
Finally we must swap the qubits 0 and 2 to complete the QFT:
###Code
qc.swap(0,2)
qc.draw()
###Output
_____no_output_____
###Markdown
8.2 General QFT Function We will now create a general circuit for the QFT in Qiskit. Creating large general circuits like this is really where Qiskit shines. It is easier to build a circuit that implements the QFT with the qubits upside down, then swap them afterwards; we will start off by creating the function that rotates our qubits correctly. Let’s start as we did with the 3 qubit example, by correctly rotating the most significant qubit (the qubit with the highest index):
###Code
def qft_rotations(circuit, n):
if n == 0: # Exit function if circuit is empty
return circuit
n -= 1 # Indexes start from 0
circuit.h(n) # Apply the H-gate to the most significant qubit
for qubit in range(n):
# For each less significant qubit, we need to do a
# smaller-angled controlled rotation:
circuit.cp(pi/2**(n-qubit), qubit, n)
###Output
_____no_output_____
###Markdown
Let’s see how this looks:
###Code
qc = QuantumCircuit(4)
qft_rotations(qc,4)
qc.draw()
###Output
_____no_output_____
###Markdown
We can use the widget below to see how this circuit scales with the number of qubits in our circuit:
###Code
from qiskit_textbook.widgets import scalable_circuit
scalable_circuit(qft_rotations)
###Output
_____no_output_____
###Markdown
Great! This is the first part of our QFT. Now we have correctly rotated the most significant qubit, we need to correctly rotate the second most significant qubit. Then we must deal with the third most significant, and so on. But why write more code? When we get to the end of our `qft_rotations()` function, we can use the same code to repeat the process on the next `n-1` qubits:
###Code
def qft_rotations(circuit, n):
"""Performs qft on the first n qubits in circuit (without swaps)"""
if n == 0:
return circuit
n -= 1
circuit.h(n)
for qubit in range(n):
circuit.cp(pi/2**(n-qubit), qubit, n)
# At the end of our function, we call the same function again on
# the next qubits (we reduced n by one earlier in the function)
qft_rotations(circuit, n)
# Let's see how it looks:
qc = QuantumCircuit(4)
qft_rotations(qc,4)
qc.draw()
###Output
_____no_output_____
###Markdown
That was easy! A process in which a function calls itself directly or indirectly is called _recursion_. It can greatly simplify code. We can again see how this scales using the widget below:
###Code
scalable_circuit(qft_rotations)
###Output
_____no_output_____
###Markdown
Finally, we need to add the swaps at the end of the QFT function to match the definition of the QFT. We will combine this into the final function `qft()`:
###Code
def swap_registers(circuit, n):
for qubit in range(n//2):
circuit.swap(qubit, n-qubit-1)
return circuit
def qft(circuit, n):
"""QFT on the first n qubits in circuit"""
qft_rotations(circuit, n)
swap_registers(circuit, n)
return circuit
# Let's see how it looks:
qc = QuantumCircuit(4)
qft(qc,4)
qc.draw()
###Output
_____no_output_____
###Markdown
This is the generalised circuit for the quantum Fourier transform. We can again see how this scales using the widget below:
###Code
scalable_circuit(qft)
###Output
_____no_output_____
###Markdown
We now want to demonstrate this circuit works correctly. To do this we must first encode a number in the computational basis. We can see the number 5 in binary is `101`:
###Code
bin(5)
###Output
_____no_output_____
###Markdown
(The `0b` just reminds us this is a binary number). Let's encode this into our qubits:
###Code
# Create the circuit
qc = QuantumCircuit(3)
# Encode the state 5
qc.x(0)
qc.x(2)
qc.draw()
###Output
_____no_output_____
###Markdown
And let's check the qubits' states using the aer simulator:
###Code
sim = Aer.get_backend("aer_simulator")
qc_init = qc.copy()
qc_init.save_statevector()
statevector = sim.run(qc_init).result().get_statevector()
plot_bloch_multivector(statevector)
###Output
_____no_output_____
###Markdown
Finally, let's use our QFT function and view the final state of our qubits:
###Code
qft(qc,3)
qc.draw()
qc.save_statevector()
statevector = sim.run(qc).result().get_statevector()
plot_bloch_multivector(statevector)
###Output
_____no_output_____
###Markdown
We can see our QFT function has worked correctly. Compared to the state $|\widetilde{0}\rangle = |{+}{+}{+}\rangle$, qubit 0 has been rotated by $\tfrac{5}{8}$ of a full turn, qubit 1 by $\tfrac{10}{8}$ full turns (equivalent to $\tfrac{1}{4}$ of a full turn), and qubit 2 by $\tfrac{20}{8}$ full turns (equivalent to $\tfrac{1}{2}$ of a full turn). 8.3 Running QFT on a Real Quantum Device If we tried running the circuit at the end of section 8.2 on a real device, the results would be completely random, since all qubits are in equal superposition of $|0\rangle$ and $|1\rangle$. If we want to demonstrate and investigate the QFT working on real hardware, we can instead create the state $|\widetilde{5}\rangle$ seen at the end of section 8.2, run the QFT in reverse, and verify the output is the state $|5\rangle$ as expected. Firstly, let's use Qiskit to easily reverse our QFT operation:
###Code
def inverse_qft(circuit, n):
"""Does the inverse QFT on the first n qubits in circuit"""
# First we create a QFT circuit of the correct size:
qft_circ = qft(QuantumCircuit(n), n)
# Then we take the inverse of this circuit
invqft_circ = qft_circ.inverse()
# And add it to the first n qubits in our existing circuit
circuit.append(invqft_circ, circuit.qubits[:n])
return circuit.decompose() # .decompose() allows us to see the individual gates
###Output
_____no_output_____
###Markdown
Now let's put our qubits in the state $|\widetilde{5}\rangle$:
###Code
nqubits = 3
number = 5
qc = QuantumCircuit(nqubits)
for qubit in range(nqubits):
qc.h(qubit)
qc.p(number*pi/4,0)
qc.p(number*pi/2,1)
qc.p(number*pi,2)
qc.draw()
###Output
_____no_output_____
###Markdown
And we can see this does indeed result in the Fourier state $|\widetilde{5}\rangle$:
###Code
qc_init = qc.copy()
qc_init.save_statevector()
sim = Aer.get_backend("aer_simulator")
statevector = sim.run(qc_init).result().get_statevector()
plot_bloch_multivector(statevector)
###Output
_____no_output_____
###Markdown
Finally, let's apply our inverse QFT:
###Code
qc = inverse_qft(qc, nqubits)
qc.measure_all()
qc.draw()
# Load our saved IBMQ accounts and get the least busy backend device with less than or equal to nqubits
IBMQ.load_account()
provider = IBMQ.get_provider(hub='ibm-q')
backend = least_busy(provider.backends(filters=lambda x: x.configuration().n_qubits >= nqubits
and not x.configuration().simulator
and x.status().operational==True))
print("least busy backend: ", backend)
shots = 2048
transpiled_qc = transpile(qc, backend, optimization_level=3)
job = backend.run(transpiled_qc, shots=shots)
job_monitor(job)
counts = job.result().get_counts()
plot_histogram(counts)
###Output
_____no_output_____
###Markdown
We (hopefully) see that the highest probability outcome is $101$. 9. Problems 1. The [above implementation](implementationdev) of QFT was tested by preparing the Fourier state $|\widetilde{5}\rangle$ for which $\text{QFT}^{\dagger}|\widetilde{5}\rangle = |101\rangle$. Try to find the state $|a\rangle$ such that $\text{QFT}^{\dagger}|a\rangle = |100\rangle$.2. Find the state $|b\rangle$ such that $\text{QFT}^{\dagger}|b\rangle = |011\rangle$.3. Try to write the QFT function without recursion. Use Qiskit's unitary simulator to verify your results. 10. References 1. M. Nielsen and I. Chuang, Quantum Computation and Quantum Information, Cambridge Series on Information and the Natural Sciences (Cambridge University Press, Cambridge, 2000).
###Code
import qiskit.tools.jupyter
%qiskit_version_table
###Output
_____no_output_____
###Markdown
Quantum Fourier Transform In this tutorial, we introduce the quantum Fourier transform (QFT), derive the circuit, and implement it using Qiskit. We show how to run QFT on a simulator and a five-qubit device. Contents1. [Introduction](introduction)2. [Example 1: 1-qubit QFT](example1)3. [The Quantum Fourier transform](qfteqn)4. [The circuit that implements QFT](circuit)5. [Example 2: 3-qubit QFT](example2)6. [A note about the form of the QFT circuit](formnote)7. [Qiskit Implementation](implementation) - [Running QFT on a simulator](implementationsim) - [Running QFT on a real quantum device](implementationdev)8. [Problems](problems)9. [References](references) 1. Introduction The Fourier transform occurs in many different versions throughout classical computing, in areas ranging from signal processing to data compression to complexity theory. The quantum Fourier transform (QFT) is the quantum implementation of the discrete Fourier transform over the amplitudes of a wavefunction. It is part of many quantum algorithms, most notably Shor's factoring algorithm and quantum phase estimation. The discrete Fourier transform acts on a vector $(x_0, ..., x_{N-1})$ and maps it to the vector $(y_0, ..., y_{N-1})$ according to the formula$$y_k = \frac{1}{\sqrt{N}}\sum_{j=0}^{N-1}x_j\omega_N^{jk}$$where $\omega_N^{jk} = e^{2\pi i \frac{jk}{N}}$.Similarly, the quantum Fourier transform acts on a quantum state $\sum_{i=0}^{N-1} x_i \vert i \rangle$ and maps it to the quantum state $\sum_{i=0}^{N-1} y_i \vert i \rangle$ according to the formula$$y_k = \frac{1}{\sqrt{N}}\sum_{j=0}^{N-1}x_j\omega_N^{jk}$$with $\omega_N^{jk}$ defined as above. Note that only the amplitudes of the state were affected by this transformation.This can also be expressed as the map:$$\vert x \rangle \mapsto \frac{1}{\sqrt{N}}\sum_{y=0}^{N-1}\omega_N^{xy} \vert y \rangle$$Or the unitary matrix:$$ U_{QFT} = \frac{1}{\sqrt{N}} \sum_{x=0}^{N-1} \sum_{y=0}^{N-1} \omega_N^{xy} \vert y \rangle \langle x \vert$$ 2. Example 1: 1-qubit QFT Consider how the QFT operator as defined above acts on a single qubit state $\vert\psi\rangle = \alpha \vert 0 \rangle + \beta \vert 1 \rangle$. In this case, $x_0 = \alpha$, $x_1 = \beta$, and $N = 2$. Then,$$y_0 = \frac{1}{\sqrt{2}}\left( \alpha \exp\left(2\pi i\frac{0\times0}{2}\right) + \beta \exp\left(2\pi i\frac{1\times0}{2}\right) \right) = \frac{1}{\sqrt{2}}\left(\alpha + \beta\right)$$and$$y_1 = \frac{1}{\sqrt{2}}\left( \alpha \exp\left(2\pi i\frac{0\times1}{2}\right) + \beta \exp\left(2\pi i\frac{1\times1}{2}\right) \right) = \frac{1}{\sqrt{2}}\left(\alpha - \beta\right)$$such that the final result is the state $$U_{QFT}\vert\psi\rangle = \frac{1}{\sqrt{2}}(\alpha + \beta) \vert 0 \rangle + \frac{1}{\sqrt{2}}(\alpha - \beta) \vert 1 \rangle$$This operation is exactly the result of applying the Hadamard operator ($H$) on the qubit:$$H = \frac{1}{\sqrt{2}}\begin{bmatrix} 1 & 1 \\ 1 & -1 \end{bmatrix}$$If we apply the $H$ operator to the state $\vert\psi\rangle = \alpha \vert 0 \rangle + \beta \vert 1 \rangle$, we obtain the new state:$$\frac{1}{\sqrt{2}}(\alpha + \beta) \vert 0 \rangle + \frac{1}{\sqrt{2}}(\alpha - \beta) \vert 1 \rangle \equiv \tilde{\alpha}\vert 0 \rangle + \tilde{\beta}\vert 1 \rangle$$Notice how the Hadamard gate performs the discrete Fourier transform for $N = 2$ on the amplitudes of the state. 3. The Quantum Fourier transform So what does the quantum Fourier transform look like for larger $N$?
Let's derive a circuit for $N=2^n$, $QFT_N$ acting on the state $\vert x \rangle = \vert x_1\ldots x_n \rangle$ where $x_1$ is the most significant bit.$$\begin{aligned}QFT_N\vert x \rangle & = \frac{1}{\sqrt{N}} \sum_{y=0}^{N-1}\omega_N^{xy} \vert y \rangle \\& = \frac{1}{\sqrt{N}} \sum_{y=0}^{N-1} e^{2 \pi i xy / 2^n} \vert y \rangle ~\text{since}\: \omega_N^{xy} = e^{2\pi i \frac{xy}{N}} \:\text{and}\: N = 2^n \\& = \frac{1}{\sqrt{N}} \sum_{y=0}^{N-1} e^{2 \pi i \left(\sum_{k=1}^n y_k/2^k\right) x} \vert y_1 \ldots y_n \rangle \:\text{rewriting in fractional binary notation}\: y = y_1\ldots y_n, y/2^n = \sum_{k=1}^n y_k/2^k \\& = \frac{1}{\sqrt{N}} \sum_{y=0}^{N-1} \prod_{k=1}^n e^{2 \pi i x y_k/2^k } \vert y_1 \ldots y_n \rangle \:\text{after expanding the exponential of a sum to a product of exponentials} \\& = \frac{1}{\sqrt{N}} \bigotimes_{k=1}^n \left(\vert0\rangle + e^{2 \pi i x /2^k } \vert1\rangle \right) \:\text{after rearranging the sum and products, and expanding} \sum_{y=0}^{N-1} = \sum_{y_1=0}^{1}\sum_{y_2=0}^{1}\ldots\sum_{y_n=0}^{1} \\& = \frac{1}{\sqrt{N}}\left(\vert0\rangle + e^{\frac{2\pi i}{2}x} \vert1\rangle\right) \otimes\left(\vert0\rangle + e^{\frac{2\pi i}{2^2}x} \vert1\rangle\right) \otimes \ldots\otimes\left(\vert0\rangle + e^{\frac{2\pi i}{2^{n-1}}x} \vert1\rangle\right) \otimes\left(\vert0\rangle + e^{\frac{2\pi i}{2^n}x} \vert1\rangle\right) \end{aligned}$$ 4. The circuit that implements QFT The circuit that implements QFT makes use of two gates. The first one is a single-qubit Hadamard gate, $H$, that you already know. From the discussion in [Example 1](example1) above, you have already seen that the action of $H$ on the single-qubit state $\vert x_k\rangle$ is$$H\vert x_k \rangle = \frac{1}{\sqrt{2}}\left(\vert0\rangle + \exp\left(\frac{2\pi i}{2}x_k\right)\vert1\rangle\right)$$The second is a two-qubit controlled rotation $CROT_k$ given in block-diagonal form as $$CROT_k = \left[\begin{matrix}I&0\\0&UROT_k\\\end{matrix}\right]$$where $$UROT_k = \left[\begin{matrix}1&0\\0&\exp\left(\frac{2\pi i}{2^k}\right)\\\end{matrix}\right]$$The action of $CROT_k$ on the two-qubit state $\vert x_jx_k\rangle$ where the first qubit is the control and the second is the target is given by$$CROT_k\vert x_j0\rangle = \vert x_j0\rangle$$and$$CROT_k\vert x_j1\rangle = \exp\left( \frac{2\pi i}{2^k}x_j \right)\vert x_j1\rangle$$Given these two gates, a circuit that implements [an n-qubit QFT](qfteqn) is shown below.The circuit operates as follows. We start with an n-qubit input state $\vert x_1x_2\ldots x_n\rangle$.
After the first Hadamard gate on qubit 1, the state is transformed from the input state to $$H_1\vert x_1x_2\ldots x_n\rangle = \frac{1}{\sqrt{2}}\left[\vert0\rangle + \exp\left(\frac{2\pi i}{2}x_1\right)\vert1\rangle\right]\otimes\vert x_2x_3\ldots x_n\rangle$$ After the $CROT_2$ gate on qubit 1 controlled by qubit 2, the state is transformed to$$\frac{1}{\sqrt{2}}\left[\vert0\rangle + \exp\left(\frac{2\pi i}{2^2}x_2 + \frac{2\pi i}{2}x_1\right)\vert1\rangle\right]\otimes\vert x_2x_3\ldots x_n\rangle$$ After the application of the last $CROT_n$ gate on qubit 1 controlled by qubit $n$, the state becomes$$\frac{1}{\sqrt{2}}\left[\vert0\rangle + \exp\left(\frac{2\pi i}{2^n}x_n + \frac{2\pi i}{2^{n-1}}x_{n-1} + \ldots + \frac{2\pi i}{2^2}x_2 + \frac{2\pi i}{2}x_1\right)\vert1\rangle\right]\otimes\vert x_2x_3\ldots x_n\rangle$$Noting that $$x = 2^{n-1}x_1 + 2^{n-2}x_2 + \ldots + 2^1x_{n-1} + 2^0x_n$$we can write the above state as $$\frac{1}{\sqrt{2}}\left[\vert0\rangle + \exp\left(\frac{2\pi i}{2^n}x \right)\vert1\rangle\right]\otimes\vert x_2x_3\ldots x_n\rangle$$ After the application of a similar sequence of gates for qubits $2\ldots n$, we find the final state to be$$\frac{1}{\sqrt{2}}\left[\vert0\rangle + \exp\left(\frac{2\pi i}{2^n}x \right)\vert1\rangle\right]\otimes\frac{1}{\sqrt{2}}\left[\vert0\rangle + \exp\left(\frac{2\pi i}{2^{n-1}}x \right)\vert1\rangle\right]\otimes\ldots\otimes\frac{1}{\sqrt{2}}\left[\vert0\rangle + \exp\left(\frac{2\pi i}{2^{2}}x \right)\vert1\rangle\right]\otimes\frac{1}{\sqrt{2}}\left[\vert0\rangle + \exp\left(\frac{2\pi i}{2^{1}}x \right)\vert1\rangle\right]$$which is exactly the QFT of the input state as derived above with the caveat that the order of the qubits is reversed in the output state. 5. Example 2: 3-qubit QFT The steps to creating the circuit for $\vert y_1y_2y_3\rangle = QFT_8\vert x_1x_2x_3\rangle$ would be: Apply a Hadamard gate to $\vert x_3 \rangle$$$\psi_1 = \vert x_1\rangle\otimes\vert x_2\rangle\otimes\frac{1}{\sqrt{2}}\left[\vert0\rangle + \exp\left(\frac{2\pi i}{2}x_3\right) \vert1\rangle\right]$$ Apply a $CROT_2$ gate to $\vert x_3\rangle$ depending on $\vert x_2\rangle$$$\psi_2 = \vert x_1\rangle\otimes\vert x_2\rangle\otimes\frac{1}{\sqrt{2}}\left[\vert0\rangle + \exp\left(\frac{2\pi i}{2^2}x_2 + \frac{2\pi i}{2}x_3\right) \vert1\rangle\right]$$ Apply a $CROT_3$ gate to $\vert x_3\rangle$ depending on $\vert x_1\rangle$$$\psi_3 = \vert x_1\rangle\otimes\vert x_2\rangle\otimes\frac{1}{\sqrt{2}}\left[\vert0\rangle + \exp\left(\frac{2\pi i}{2^3}x_1 + \frac{2\pi i}{2^2}x_2 + \frac{2\pi i}{2}x_3\right) \vert1\rangle\right]$$ Apply a Hadamard gate to $\vert x_2 \rangle$$$\psi_4 = \vert x_1\rangle\otimes\frac{1}{\sqrt{2}}\left[\vert0\rangle + \exp\left(\frac{2\pi i}{2}x_2\right) \vert1\rangle\right]\otimes\frac{1}{\sqrt{2}}\left[\vert0\rangle + \exp\left(\frac{2\pi i}{2^3}x_1 + \frac{2\pi i}{2^2}x_2 + \frac{2\pi i}{2}x_3\right) \vert1\rangle\right]$$ Apply a $CROT_2$ gate to $\vert x_2\rangle$ depending on $\vert x_1\rangle$$$\psi_5 = \vert x_1\rangle\otimes\frac{1}{\sqrt{2}}\left[\vert0\rangle + \exp\left(\frac{2\pi i}{2^2}x_1 + \frac{2\pi i}{2}x_2\right) \vert1\rangle\right]\otimes\frac{1}{\sqrt{2}}\left[\vert0\rangle + \exp\left(\frac{2\pi i}{2^3}x_1 + \frac{2\pi i}{2^2}x_2 + \frac{2\pi i}{2}x_3\right) \vert1\rangle\right]$$ Apply a Hadamard gate to $\vert x_1\rangle$$$\psi_6 = \frac{1}{\sqrt{2}}\left[\vert0\rangle + \exp\left(\frac{2\pi i}{2}x_1\right) \vert1\rangle\right]\otimes\frac{1}{\sqrt{2}}\left[\vert0\rangle + 
\exp\left(\frac{2\pi i}{2^2}x_1 + \frac{2\pi i}{2}x_2\right) \vert1\rangle\right]\otimes\frac{1}{\sqrt{2}}\left[\vert0\rangle + \exp\left(\frac{2\pi i}{2^3}x_1 + \frac{2\pi i}{2^2}x_2 + \frac{2\pi i}{2}x_3\right) \vert1\rangle\right]$$ Keep in mind the reverse order of the output state relative to the desired QFT. Therefore, measure the bits in reverse order, that is $y_3 = x_1, y_2 = x_2, y_1 = x_3$. 6. A note about the form of the QFT circuit The example above demonstrates a very useful form of the QFT for $N=2^n$. Note that only the last qubit depends on the values of all the other input qubits and each further bit depends less and less on the input qubits. This becomes important in physical implementations of the QFT, where nearest-neighbor couplings are easier to achieve than distant couplings between qubits. 7. Qiskit Implementation In Qiskit, the implementation of the $CROT$ gate used in the discussion above is a controlled phase rotation gate. This gate is defined in [OpenQASM](https://github.com/QISKit/openqasm) as$$CU_1(\theta) =\begin{bmatrix} 1 & 0 & 0 & 0 \\ 0 & 1 & 0 & 0 \\ 0 & 0 & 1 & 0 \\ 0 & 0 & 0 & e^{i\theta}\end{bmatrix}$$Hence, the mapping from the $CROT_k$ gate in the discussion above into the $CU_1$ gate is found from the equation$$\theta = 2\pi/2^k = \pi/2^{k-1}$$It is instructive to write out the relevant code for the 3-qubit case before generalizing to the $n$-qubit case. In Qiskit, it is:
```
q = QuantumRegister(3)
c = ClassicalRegister(3)
qft3 = QuantumCircuit(q, c)
qft3.h(q[0])
qft3.cu1(math.pi/2.0, q[1], q[0])  # CROT_2 from q[1] to q[0]
qft3.cu1(math.pi/4.0, q[2], q[0])  # CROT_3 from q[2] to q[0]
qft3.h(q[1])
qft3.cu1(math.pi/2.0, q[2], q[1])  # CROT_2 from q[2] to q[1]
qft3.h(q[2])
```
Following the above example, the case for $n$ qubits can be generalized as:
```
def qft(circ, q, n):
    """n-qubit QFT on q in circ."""
    for j in range(n):
        circ.h(q[j])
        for k in range(j+1,n):
            circ.cu1(math.pi/float(2**(k-j)), q[k], q[j])
```
We will now implement the three-qubit QFT as discussed above. We first create a state whose QFT is known. The output after a QFT is applied to this special state is $\vert001\rangle$.
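Before implementing this in Qiskit, the product form derived above can be sanity-checked numerically. The following is a minimal NumPy sketch (an illustrative addition, not part of the original notebook): it applies the matrix definition of $QFT_N$ to a basis state $\vert x\rangle$ and compares the amplitudes against the tensor product $\frac{1}{\sqrt{N}}\bigotimes_{k=1}^n\left(\vert0\rangle + e^{2\pi i x/2^k}\vert1\rangle\right)$, with the $k=1$ factor as the most significant qubit:
```python
import numpy as np

n, x = 3, 5                  # 3 qubits, arbitrary basis state |101>
N = 2 ** n
omega = np.exp(2j * np.pi / N)

# direct definition: the amplitude of |y> is omega^(x*y) / sqrt(N)
direct = np.array([omega ** (x * y) for y in range(N)]) / np.sqrt(N)

# tensor-product form; np.kron places the k=1 factor in the most significant slot
product = np.array([1.0 + 0j])
for k in range(1, n + 1):
    qubit = np.array([1.0, np.exp(2j * np.pi * x / 2 ** k)]) / np.sqrt(2)
    product = np.kron(product, qubit)

assert np.allclose(direct, product)  # the two expressions agree
```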
###Code
import math
# importing Qiskit
from qiskit import Aer, IBMQ
from qiskit import QuantumRegister, ClassicalRegister, QuantumCircuit, execute
from qiskit.providers.ibmq import least_busy
from qiskit.tools.monitor import job_monitor
from qiskit.tools.visualization import plot_histogram
IBMQ.load_account()
###Output
_____no_output_____
###Markdown
First let's define the QFT function, as well as a function that creates a state from which a QFT will return 001:
###Code
def input_state(circ, q, n):
"""n-qubit input state for QFT that produces output 1."""
for j in range(n):
circ.h(q[j])
circ.u1(-math.pi/float(2**(j)), q[j])
def qft(circ, q, n):
"""n-qubit QFT on q in circ."""
for j in range(n):
circ.h(q[j])
for k in range(j+1,n):
circ.cu1(math.pi/float(2**(k-j)), q[k], q[j])
circ.barrier()
###Output
_____no_output_____
###Markdown
Let's now implement a QFT on a prepared three-qubit input state that should return $001$:
###Code
q = QuantumRegister(3, 'x')
c = ClassicalRegister(3, 'c')
qft3 = QuantumCircuit(q, c)
# first, prepare the state that should return 001 and draw that circuit
input_state(qft3, q, 3)
qft3.draw(output='mpl')
# next, do a qft on the prepared state and draw the entire circuit
qft(qft3, q, 3)
for i in range(3):
qft3.measure(q[i], c[i])
qft3.draw(output='mpl')
###Output
_____no_output_____
###Markdown
7a. Running QFT on a simulator
###Code
# run on local simulator
backend = Aer.get_backend("qasm_simulator")
simulate = execute(qft3, backend=backend, shots=1024).result()
simulate.get_counts()
###Output
_____no_output_____
###Markdown
We indeed see that the outcome is always $001$ when we execute the code on the simulator. 7b. Running QFT on a real quantum device We then see how the same circuit can be executed on real-device backends.
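Note that `least_busy` was imported above but the next cell hard-codes a specific device. As an alternative (a sketch only — device names and availability change over time), one could select the least busy operational backend with enough qubits:
```python
provider = IBMQ.get_provider(hub='ibm-q')
backend = least_busy(provider.backends(
    filters=lambda b: b.configuration().n_qubits >= 3
                      and not b.configuration().simulator
                      and b.status().operational))
print("least busy backend:", backend)
```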
###Code
# Use the IBMQ Vigo device with 5 qubits
provider = IBMQ.get_provider(hub='ibm-q')
backend = provider.get_backend('ibmq_vigo')
shots = 2048
job_exp = execute(qft3, backend=backend, shots=shots)
job_monitor(job_exp)
results = job_exp.result()
plot_histogram(results.get_counts())
###Output
_____no_output_____
###Markdown
We see that the highest probability outcome is still $001$ when we execute the code on a real device. 8. Problems 1. The [above implementation](implementation) of QFT was tested by using a special input state for which QFT(input state) = 001. Implement an input state for which QFT(input state) = 100.2. The [above implementation](implementation) of QFT was tested by using a special input state for which QFT(input state) = 001. Implement an input state for which QFT(input state) = 101. 9. References 1. M. Nielsen and I. Chuang, Quantum Computation and Quantum Information, Cambridge Series on Information and the Natural Sciences (Cambridge University Press, Cambridge, 2000).
###Code
import qiskit
qiskit.__qiskit_version__
###Output
_____no_output_____ |
notebooks/kmeans_mnmg_demo.ipynb | ###Markdown
K-Means Multi-Node Multi-GPU (MNMG) Demo The K-Means multi-node multi-GPU implementation leverages Dask to spread data and computations across multiple workers. cuML uses a One Process Per GPU (OPG) layout, which maps a single Dask worker to each GPU. The main difference between cuML's MNMG implementation of k-means and the single-GPU version is that the fit can be performed in parallel for each iteration, sharing only the centroids between iterations. The MNMG version also provides the same scalable k-means++ initialization algorithm as the single-GPU version. Unlike the single-GPU implementation, the MNMG k-means API requires a Dask cuDF DataFrame as input. `predict()` and `transform()` also return a Dask cuDF DataFrame. The Dask cuDF DataFrame API is very similar to the Dask DataFrame API, but the underlying DataFrames are cuDF rather than Pandas. For information about cuDF, refer to the [cuDF documentation](https://docs.rapids.ai/api/cudf/stable). For additional information on cuML's k-means implementation: https://docs.rapids.ai/api/cuml/stable/api.html#cuml.dask.cluster.KMeans. Imports
###Code
from cuml.dask.cluster.kmeans import KMeans as cuKMeans
from cuml.dask.common import to_dask_df
from cuml.dask.datasets import make_blobs
from cuml.metrics import adjusted_rand_score
from dask.distributed import Client, wait
from dask_cuda import LocalCUDACluster
from dask_ml.cluster import KMeans as skKMeans
###Output
_____no_output_____
###Markdown
Start Dask Cluster We can use the `LocalCUDACluster` to start a Dask cluster on a single machine with one worker mapped to each GPU. This is called one-process-per-GPU (OPG).
###Code
cluster = LocalCUDACluster(threads_per_worker=1)
client = Client(cluster)
###Output
_____no_output_____
###Markdown
Define Parameters
###Code
n_samples = 1000000
n_features = 2
n_total_partitions = len(list(client.has_what().keys()))
###Output
_____no_output_____
###Markdown
Generate Data Device We can generate a dask_cudf.DataFrame of synthetic data for multiple clusters using `cuml.dask.datasets.make_blobs`.
###Code
X_cudf, Y_cudf = make_blobs(n_samples,
n_features,
centers = 5,
n_parts = n_total_partitions,
cluster_std=0.1,
verbose=True)
###Output
_____no_output_____
###Markdown
Host We use `cuml.dask.common.to_dask_df` to convert a dask_cudf.DataFrame in device memory into a dask.DataFrame containing Pandas DataFrames in host memory.
###Code
wait(X_cudf)
X_df = to_dask_df(X_cudf)
###Output
_____no_output_____
###Markdown
Scikit-learn model Fit and predict Since a scikit-learn equivalent to the multi-node multi-GPU K-means in cuML doesn't exist, we will use Dask-ML's implementation for comparison.
###Code
%%time
kmeans_sk = skKMeans(init="k-means||",
n_clusters=5,
n_jobs=-1,
random_state=100)
kmeans_sk.fit(X_df)
%%time
labels_sk = kmeans_sk.predict(X_df).compute()
###Output
_____no_output_____
###Markdown
cuML Model Fit and predict
###Code
%%time
kmeans_cuml = cuKMeans(init="k-means||",
n_clusters=5,
random_state=100)
kmeans_cuml.fit(X_cudf)
%%time
labels_cuml = kmeans_cuml.predict(X_cudf).compute()
###Output
_____no_output_____
###Markdown
Compare Results
###Code
score = adjusted_rand_score(labels_sk, labels_cuml.to_pandas().values)
passed = score == 1.0
print('compare kmeans: cuml vs sklearn labels_ are ' + ('equal' if passed else 'NOT equal'))
###Output
_____no_output_____
###Markdown
K-Means Multi-Node Multi-GPU (MNMG) Demo The K-Means multi-node multi-GPU implementation leverages Dask to spread data and computations across multiple workers. cuML uses a One Process Per GPU (OPG) layout, which maps a single Dask worker to each GPU. The main difference between cuML's MNMG implementation of k-means and the single-GPU version is that the fit can be performed in parallel for each iteration, sharing only the centroids between iterations. The MNMG version also provides the same scalable k-means++ initialization algorithm as the single-GPU version. Unlike the single-GPU implementation, the MNMG k-means API requires a Dask DataFrame or Array as input. `predict()` and `transform()` return the same type as the input. The Dask cuDF DataFrame API is very similar to the Dask DataFrame API, but the underlying DataFrames are cuDF rather than Pandas. Dask CuPy arrays are also available. For information about cuDF, refer to the [cuDF documentation](https://docs.rapids.ai/api/cudf/stable). For additional information on cuML's k-means implementation: https://docs.rapids.ai/api/cuml/stable/api.html#cuml.dask.cluster.KMeans. Imports
###Code
from cuml.dask.cluster.kmeans import KMeans as cuKMeans
from cuml.dask.common import to_dask_df
from cuml.dask.datasets import make_blobs
from cuml.metrics import adjusted_rand_score
from dask.distributed import Client, wait
from dask_cuda import LocalCUDACluster
from dask_ml.cluster import KMeans as skKMeans
import cupy as cp
###Output
_____no_output_____
###Markdown
Start Dask Cluster We can use the `LocalCUDACluster` to start a Dask cluster on a single machine with one worker mapped to each GPU. This is called one-process-per-GPU (OPG).
###Code
cluster = LocalCUDACluster(threads_per_worker=1)
client = Client(cluster)
###Output
_____no_output_____
###Markdown
Define Parameters
###Code
n_samples = 1000000
n_features = 2
n_total_partitions = len(list(client.has_what().keys()))
###Output
_____no_output_____
###Markdown
Generate Data Device We can generate a Dask CuPy array of synthetic data for multiple clusters using `cuml.dask.datasets.make_blobs`.
###Code
X_dca, Y_dca = make_blobs(n_samples,
n_features,
centers = 5,
n_parts = n_total_partitions,
cluster_std=0.1,
verbose=True)
###Output
_____no_output_____
###Markdown
Host We collect the Dask CuPy array on a single node as a CuPy array. Then we transfer the CuPy array from device to host memory as a NumPy array.
###Code
X_cp = X_dca.compute()
X_np = cp.asnumpy(X_cp)
del X_cp
###Output
_____no_output_____
###Markdown
Scikit-learn model Fit and predict Since a scikit-learn equivalent to the multi-node multi-GPU K-means in cuML doesn't exist, we will use Dask-ML's implementation for comparison.
###Code
%%time
kmeans_sk = skKMeans(init="k-means||",
n_clusters=5,
n_jobs=-1,
random_state=100)
kmeans_sk.fit(X_np)
%%time
labels_sk = kmeans_sk.predict(X_np).compute()
###Output
_____no_output_____
###Markdown
cuML Model Fit and predict
###Code
%%time
kmeans_cuml = cuKMeans(init="k-means||",
n_clusters=5,
random_state=100)
kmeans_cuml.fit(X_dca)
%%time
labels_cuml = kmeans_cuml.predict(X_dca).compute()
###Output
_____no_output_____
###Markdown
Compare Results
###Code
score = adjusted_rand_score(labels_sk, labels_cuml)
passed = score == 1.0
print('compare kmeans: cuml vs sklearn labels_ are ' + ('equal' if passed else 'NOT equal'))
###Output
_____no_output_____ |
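###Markdown
 The introduction noted that `transform()` returns the same container type as its input. As a small illustrative sketch (an addition, assuming the fitted `kmeans_cuml` and the `X_dca` array from above), the distance of each sample to the learned centroids can be computed with:
```python
# distances of each sample to each of the 5 learned centroids;
# transform() returns a Dask array here because the input is a Dask array
distances = kmeans_cuml.transform(X_dca)
print(distances.compute()[:5])
```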
notebooks/Transfer_Learning_Demo.ipynb | ###Markdown
> **How to run this notebook (command-line)?**
> 1. Install the `ReinventCommunity` environment: `conda env create -f environment.yml`
> 2. Activate the environment: `conda activate ReinventCommunity`
> 3. Execute `jupyter`: `jupyter notebook`
> 4. Copy the link to a browser

`REINVENT 3.0`: transfer learning mode demo

The *transfer learning* mode can be used for either:
1. Initial training of the Agent - where a newly built agent is trained from scratch while iterating through sufficiently large datasets over many epochs
2. Focusing of a pre-trained Agent - where an already pre-trained agent is introduced to a small dataset for a small number of epochs.

In this notebook we are going to illustrate the second scenario. The small dataset can consist of a few hundred molecules that normally share the same features/scaffolds. The purpose of `Focusing` is to "learn" the common patterns/scaffolds in the structures. The `Focused` Agent will start producing molecules with the common scaffolds with higher probability. The `Focused` Agent can be used directly for *reinforcement learning*, thus having as a starting point the small chemical space it has been focused on.
###Code
# load dependencies
import os
import re
import json
import tempfile
# --------- change these path variables as required
reinvent_dir = os.path.expanduser("~/Desktop/reinventcli")
reinvent_env = os.path.expanduser("~/miniconda3/envs/reinvent.v3.0")
output_dir = os.path.expanduser("~/Desktop/REINVENT_transfer_learning_demo")
# --------- do not change
# get the notebook's root path
try: ipynb_path
except NameError: ipynb_path = os.getcwd()
# if required, generate a folder to store the results
try:
os.mkdir(output_dir)
except FileExistsError:
pass
###Output
_____no_output_____
###Markdown
Setting up the configuration `REINVENT` has an entry point that loads a specified `JSON` file on startup. `JSON` is a low-level data format that allows specifying a fairly large number of parameters in a cascading fashion very quickly. The parameters are structured into *blocks* which can in turn contain blocks or simple values, such as *True* or *False*, strings and numbers. In this tutorial, we will go through the different blocks step-by-step, explaining their purpose and potential values for given parameters. Note that while we will write out the configuration as a `JSON` file in the end, in `python` we handle the same information as a simple `dict`.
###Code
# initialize the dictionary
configuration = {
"version": 3, # we are going to use REINVENT's newest release
"run_type": "transfer_learning" # other run types: "scoring", "validation",
# "transfer_learning",
# "reinforcement_learning" and
# "create_model"
}
# add block to specify whether to run locally or not and
# where to store the results and logging
configuration["logging"] = {
"sender": "http://127.0.0.1", # only relevant if "recipient" is set to "remote"
"recipient": "local", # either to local logging or use a remote REST-interface
"logging_path": os.path.join(output_dir, "progress.log"), # where the run's output is stored
"job_name": "Transfer Learning demo", # set an arbitrary job name for identification
"job_id": "demo" # only relevant if "recipient" is set to "remote"
}
###Output
_____no_output_____
###Markdown
We will need to specify a path to an agent (parameter `input_model_path`), which can be a prior or trained agent. For the purpose of this notebook, we will use a prior shipped with the `REINVENT 3.0` repository. The code block below defines the settings for the `adaptive_lr_config` property of the configuration. These parameters define the behavior of the learning rate. Note that the mode is set to `"constant"`. We recommend the default values, as they don't play a significant role for the purpose of focusing the agent.
###Code
adaptive_lr_config = {
"mode": "constant", # other modes: "exponential", "adaptive", "constant"
"gamma": 0.8,
"step": 1,
"start": 5E-4,
"min": 1E-5,
"threshold": 1E-4,
"average_steps": 4,
"patience": 8,
"restart_value": 1E-5,
"sample_size": 10000,
"restart_times": 0
}
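# For reference, a rough sketch of how the "exponential" mode would decay the
# learning rate over epochs (an illustrative assumption, not REINVENT's code):
#   lr(epoch) ~ max(start * gamma ** (epoch // step), min)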
output_model_path = os.path.join(output_dir, "focused.agent")
# The final focused agent will be named "focused.agent"
# The intermediate steps will be named "focused.agent.1", "focused.agent.2", "focused.agent.3" and etc.
# add the "parameters" block
configuration["parameters"] = {
"input_model_path": os.path.join(ipynb_path, # path to prior or trained agent
"models",
"random.prior.new"),
"output_model_path": output_model_path, # location to store the focused agent
"input_smiles_path": os.path.join(ipynb_path, # path to input smiles
"data", # this is a dummy dataset consisting only of celecoxib
"smiles.smi"),
"save_every_n_epochs": 1, # how often to save the focused Agent. Here it's stored after each epoch
"batch_size": 128, # batch size the input data
"num_epochs": 10, # number of epochs to focus the agent for
"standardize": True, # the input may contain SMILES strings that are invalid according to the agent
                          # this attempts to clean up the input dataset
"randomize": True, # this triggers data augmentation which is quite important for small datasets
"adaptive_lr_config": adaptive_lr_config # setting the learning rate behavior
}
# write the configuration file to the disc
configuration_JSON_path = os.path.join(output_dir, "transfer_learning_config.json")
with open(configuration_JSON_path, 'w') as f:
json.dump(configuration, f, indent=4, sort_keys=True)
###Output
_____no_output_____
###Markdown
Run `REINVENT` Now it is time to execute `REINVENT` locally. The command-line execution looks like this:
```
# activate environment
conda activate reinvent.v3.0

# execute REINVENT
python <your_path>/input.py <your_configuration>.json
```
###Code
%%capture captured_err_stream --no-stderr
# execute REINVENT from the command-line
!{reinvent_env}/bin/python {reinvent_dir}/input.py {configuration_JSON_path}
# print the output to a file, just to have it for documentation
with open(os.path.join(output_dir, "run.err"), 'w') as file:
file.write(captured_err_stream.stdout)
###Output
_____no_output_____
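###Markdown
 After the run completes, the checkpoints described above should appear in the output folder. A minimal check (an illustrative addition):
```python
import os
print(sorted(os.listdir(output_dir)))  # expect focused.agent, focused.agent.1, ..., plus progress.log and run.err
```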
###Markdown
> **How to run this notebook (command-line)?**
> 1. Install the `ReinventCommunity` environment: `conda env create -f environment.yml`
> 2. Activate the environment: `conda activate ReinventCommunity`
> 3. Execute `jupyter`: `jupyter notebook`
> 4. Copy the link to a browser

`REINVENT 3.0`: transfer learning mode demo

The *transfer learning* mode can be used for either:
1. Initial training of the Agent - where a newly built agent is trained from scratch while iterating through sufficiently large datasets over many epochs
2. Focusing of a pre-trained Agent - where an already pre-trained agent is introduced to a small dataset for a small number of epochs.

In this notebook we are going to illustrate the second scenario. The small dataset can consist of a few hundred molecules that normally share the same features/scaffolds. The purpose of `Focusing` is to "learn" the common patterns/scaffolds in the structures. The `Focused` Agent will start producing molecules with the common scaffolds with higher probability. The `Focused` Agent can be used directly for *reinforcement learning*, thus having as a starting point the small chemical space it has been focused on.
###Code
# load dependencies
import os
import re
import json
import tempfile
# --------- change these path variables as required
reinvent_dir = os.path.expanduser("~/Desktop/Reinvent")
reinvent_env = os.path.expanduser("~/miniconda3/envs/reinvent.v3.0")
output_dir = os.path.expanduser("~/Desktop/REINVENT_transfer_learning_demo")
# --------- do not change
# get the notebook's root path
try: ipynb_path
except NameError: ipynb_path = os.getcwd()
# if required, generate a folder to store the results
try:
os.mkdir(output_dir)
except FileExistsError:
pass
###Output
_____no_output_____
###Markdown
Setting up the configuration `REINVENT` has an entry point that loads a specified `JSON` file on startup. `JSON` is a low-level data format that allows specifying a fairly large number of parameters in a cascading fashion very quickly. The parameters are structured into *blocks* which can in turn contain blocks or simple values, such as *True* or *False*, strings and numbers. In this tutorial, we will go through the different blocks step-by-step, explaining their purpose and potential values for given parameters. Note that while we will write out the configuration as a `JSON` file in the end, in `python` we handle the same information as a simple `dict`.
###Code
# initialize the dictionary
configuration = {
"version": 3, # we are going to use REINVENT's newest release
"run_type": "transfer_learning" # other run types: "scoring", "validation",
# "transfer_learning",
# "reinforcement_learning" and
# "create_model"
}
# add block to specify whether to run locally or not and
# where to store the results and logging
configuration["logging"] = {
"sender": "http://127.0.0.1", # only relevant if "recipient" is set to "remote"
"recipient": "local", # either to local logging or use a remote REST-interface
"logging_path": os.path.join(output_dir, "progress.log"), # where the run's output is stored
"job_name": "Transfer Learning demo", # set an arbitrary job name for identification
"job_id": "demo" # only relevant if "recipient" is set to "remote"
}
###Output
_____no_output_____
###Markdown
We will need to specify a path to an agent (parameter `input_model_path`), which can be a prior or trained agent. For the purpose of this notebook, we will use a prior shipped with the `REINVENT 3.0` repository. The code block below defines the settings for the `adaptive_lr_config` property of the configuration. These parameters define the behavior of the learning rate. Note that the mode is set to `"constant"`. We recommend the default values, as they don't play a significant role for the purpose of focusing the agent.
###Code
adaptive_lr_config = {
"mode": "constant", # other modes: "exponential", "adaptive", "constant"
"gamma": 0.8,
"step": 1,
"start": 5E-4,
"min": 1E-5,
"threshold": 1E-4,
"average_steps": 4,
"patience": 8,
"restart_value": 1E-5,
"sample_size": 10000,
"restart_times": 0
}
output_model_path = os.path.join(output_dir, "focused.agent")
# The final focused agent will be named "focused.agent"
# The intermediate steps will be named "focused.agent.1", "focused.agent.2", "focused.agent.3" and etc.
# add the "parameters" block
configuration["parameters"] = {
"input_model_path": os.path.join(ipynb_path, # path to prior or trained agent
"models",
"random.prior.new"),
"output_model_path": output_model_path, # location to store the focused agent
"input_smiles_path": os.path.join(ipynb_path, # path to input smiles
"data", # this is a dummy dataset consisting only of celecoxib
"smiles.smi"),
"save_every_n_epochs": 1, # how often to save the focused Agent. Here it's stored after each epoch
"batch_size": 128, # batch size the input data
"num_epochs": 10, # number of epochs to focus the agent for
"standardize": True, # the input may contain SMILES strings that are invalid according to the agent
                          # this attempts to clean up the input dataset
"randomize": True, # this triggers data augmentation which is quite important for small datasets
"adaptive_lr_config": adaptive_lr_config # setting the learning rate behavior
}
# write the configuration file to the disc
configuration_JSON_path = os.path.join(output_dir, "transfer_learning_config.json")
with open(configuration_JSON_path, 'w') as f:
json.dump(configuration, f, indent=4, sort_keys=True)
###Output
_____no_output_____
###Markdown
Run `REINVENT` Now it is time to execute `REINVENT` locally. The command-line execution looks like this:
```
# activate environment
conda activate reinvent.v3.0

# execute REINVENT
python <your_path>/input.py <your_configuration>.json
```
###Code
%%capture captured_err_stream --no-stderr
# execute REINVENT from the command-line
!{reinvent_env}/bin/python {reinvent_dir}/input.py {configuration_JSON_path}
# print the output to a file, just to have it for documentation
with open(os.path.join(output_dir, "run.err"), 'w') as file:
file.write(captured_err_stream.stdout)
###Output
_____no_output_____
###Markdown
> **How to run this notebook (command-line)?**
> 1. Install the `reinvent_shared.v2.1` environment: `conda env create -f reinvent_shared.yml`
> 2. Activate the environment: `conda activate reinvent_shared.v2.1`
> 3. Execute `jupyter`: `jupyter notebook`
> 4. Copy the link to a browser

`REINVENT 2.0`: transfer learning mode demo

The *transfer learning* mode can be used for either:
1. Initial training of the Agent - where a newly built agent is trained from scratch while iterating through sufficiently large datasets over many epochs
2. Focusing of a pre-trained Agent - where an already pre-trained agent is introduced to a small dataset for a small number of epochs.

In this notebook we are going to illustrate the second scenario. The small dataset can consist of a few hundred molecules that normally share the same features/scaffolds. The purpose of `Focusing` is to "learn" the common patterns/scaffolds in the structures. The `Focused` Agent will start producing molecules with the common scaffolds with higher probability. The `Focused` Agent can be used directly for *reinforcement learning*, thus having as a starting point the small chemical space it has been focused on.
###Code
# load dependencies
import os
import re
import json
import tempfile
# --------- change these path variables as required
reinvent_dir = os.path.expanduser("~/Desktop/Projects/Publications/2020/2020-04_REINVENT_2.0/Reinvent")
reinvent_env = os.path.expanduser("~/miniconda3/envs/reinvent_shared.v2.1")
output_dir = os.path.expanduser("~/Desktop/REINVENT_transfer_learning_demo")
# --------- do not change
# get the notebook's root path
try: ipynb_path
except NameError: ipynb_path = os.getcwd()
# if required, generate a folder to store the results
try:
os.mkdir(output_dir)
except FileExistsError:
pass
###Output
_____no_output_____
###Markdown
Setting up the configuration `REINVENT` has an entry point that loads a specified `JSON` file on startup. `JSON` is a low-level data format that allows specifying a fairly large number of parameters in a cascading fashion very quickly. The parameters are structured into *blocks* which can in turn contain blocks or simple values, such as *True* or *False*, strings and numbers. In this tutorial, we will go through the different blocks step-by-step, explaining their purpose and potential values for given parameters. Note that while we will write out the configuration as a `JSON` file in the end, in `python` we handle the same information as a simple `dict`.
###Code
# initialize the dictionary
configuration = {
"version": 2, # we are going to use REINVENT's newest release
"run_type": "transfer_learning" # other run types: "scoring", "validation",
# "transfer_learning",
# "reinforcement_learning" and
# "create_model"
}
# add block to specify whether to run locally or not and
# where to store the results and logging
configuration["logging"] = {
"sender": "http://127.0.0.1", # only relevant if "recipient" is set to "remote"
"recipient": "local", # either to local logging or use a remote REST-interface
"logging_path": os.path.join(output_dir, "progress.log"), # where the run's output is stored
"job_name": "Transfer Learning demo", # set an arbitrary job name for identification
"job_id": "demo" # only relevant if "recipient" is set to "remote"
}
###Output
_____no_output_____
###Markdown
We will need to specify a path to an agent (parameter `input_model_path`), which can be a prior or trained agent. For the purpose of this notebook, we will use a prior shipped with the `REINVENT 2.0` repository. The code block below defines the settings for the `adaptive_lr_config` property of the configuration. These parameters define the behavior of the learning rate. Note that the mode is set to `"constant"`. We recommend the default values, as they don't play a significant role for the purpose of focusing the agent.
###Code
adaptive_lr_config = {
"mode": "constant", # other modes: "exponential", "adaptive", "constant"
"gamma": 0.8,
"step": 1,
"start": 5E-4,
"min": 1E-5,
"threshold": 1E-4,
"average_steps": 4,
"patience": 8,
"restart_value": 1E-5,
"sample_size": 10000,
"restart_times": 0
}
output_model_path = os.path.join(output_dir, "focused.agent")
# The final focused agent will be named "focused.agent"
# The intermediate steps will be named "focused.agent.1", "focused.agent.2", "focused.agent.3" and etc.
# add the "parameters" block
configuration["parameters"] = {
"input_model_path": os.path.join(reinvent_dir, # path to prior or trained agent
"data",
"augmented.prior"),
"output_model_path": output_model_path, # location to store the focused agent
"input_smiles_path": os.path.join(reinvent_dir, # path to input smiles
"data", # this is a dummy dataset consisting only of celecoxib
"smiles.smi"),
"save_every_n_epochs": 1, # how often to save the focused Agent. Here its stored after each epoch
"batch_size": 128, # batch size the input data
"num_epochs": 10, # number of epochs to focus the agent for
"standardize": True, # the input may contain SMILES strings that are invalid according to the agent
                          # this attempts to clean up the input dataset
"randomize": True, # this triggers data augmentation which is quite important for small datasets
"adaptive_lr_config": adaptive_lr_config # setting the learning rate behavior
}
# write the configuration file to the disc
configuration_JSON_path = os.path.join(output_dir, "transfer_learning_config.json")
with open(configuration_JSON_path, 'w') as f:
json.dump(configuration, f, indent=4, sort_keys=True)
###Output
_____no_output_____
###Markdown
Run `REINVENT` Now it is time to execute `REINVENT` locally. The command-line execution looks like this:
```
# activate environment
conda activate reinvent_shared.v2.1

# execute REINVENT
python <your_path>/input.py <your_configuration>.json
```
###Code
%%capture captured_err_stream --no-stderr
# execute REINVENT from the command-line
!python {reinvent_dir}/input.py {configuration_JSON_path}
# print the output to a file, just to have it for documentation
with open(os.path.join(output_dir, "run.err"), 'w') as file:
file.write(captured_err_stream.stdout)
###Output
_____no_output_____
###Markdown
> **How to run this notebook (command-line)?**
> 1. Install the `ReinventCommunity` environment: `conda env create -f environment.yml`
> 2. Activate the environment: `conda activate ReinventCommunity`
> 3. Execute `jupyter`: `jupyter notebook`
> 4. Copy the link to a browser

`REINVENT 2.0`: transfer learning mode demo

The *transfer learning* mode can be used for either:
1. Initial training of the Agent - where a newly built agent is trained from scratch while iterating through sufficiently large datasets over many epochs
2. Focusing of a pre-trained Agent - where an already pre-trained agent is introduced to a small dataset for a small number of epochs.

In this notebook we are going to illustrate the second scenario. The small dataset can consist of a few hundred molecules that normally share the same features/scaffolds. The purpose of `Focusing` is to "learn" the common patterns/scaffolds in the structures. The `Focused` Agent will start producing molecules with the common scaffolds with higher probability. The `Focused` Agent can be used directly for *reinforcement learning*, thus having as a starting point the small chemical space it has been focused on.
###Code
# load dependencies
import os
import re
import json
import tempfile
# --------- change these path variables as required
reinvent_dir = os.path.expanduser("~/Desktop/Projects/Publications/2020/2020-04_REINVENT_2.0/Reinvent")
reinvent_env = os.path.expanduser("~/miniconda3/envs/reinvent_shared.v2.1")
output_dir = os.path.expanduser("~/Desktop/REINVENT_transfer_learning_demo")
# --------- do not change
# get the notebook's root path
try: ipynb_path
except NameError: ipynb_path = os.getcwd()
# if required, generate a folder to store the results
try:
os.mkdir(output_dir)
except FileExistsError:
pass
###Output
_____no_output_____
###Markdown
Setting up the configuration `REINVENT` has an entry point that loads a specified `JSON` file on startup. `JSON` is a low-level data format that allows specifying a fairly large number of parameters in a cascading fashion very quickly. The parameters are structured into *blocks* which can in turn contain blocks or simple values, such as *True* or *False*, strings and numbers. In this tutorial, we will go through the different blocks step-by-step, explaining their purpose and potential values for given parameters. Note that while we will write out the configuration as a `JSON` file in the end, in `python` we handle the same information as a simple `dict`.
###Code
# initialize the dictionary
configuration = {
"version": 3, # we are going to use REINVENT's newest release
"run_type": "transfer_learning" # other run types: "scoring", "validation",
# "transfer_learning",
# "reinforcement_learning" and
# "create_model"
}
# add block to specify whether to run locally or not and
# where to store the results and logging
configuration["logging"] = {
"sender": "http://127.0.0.1", # only relevant if "recipient" is set to "remote"
"recipient": "local", # either to local logging or use a remote REST-interface
"logging_path": os.path.join(output_dir, "progress.log"), # where the run's output is stored
"job_name": "Transfer Learning demo", # set an arbitrary job name for identification
"job_id": "demo" # only relevant if "recipient" is set to "remote"
}
###Output
_____no_output_____
###Markdown
We will need to specify a path to an agent (parameter `input_model_path`), which can be a prior or trained agent. For the purpose of this notebook, we will use a prior shipped with the `REINVENT 3.0` repository. The code block below defines the settings for the `adaptive_lr_config` property of the configuration. These parameters define the behavior of the learning rate. Note that the mode is set to `"constant"`. We recommend the default values, as they don't play a significant role for the purpose of focusing the agent.
###Code
adaptive_lr_config = {
"mode": "constant", # other modes: "exponential", "adaptive", "constant"
"gamma": 0.8,
"step": 1,
"start": 5E-4,
"min": 1E-5,
"threshold": 1E-4,
"average_steps": 4,
"patience": 8,
"restart_value": 1E-5,
"sample_size": 10000,
"restart_times": 0
}
output_model_path = os.path.join(output_dir, "focused.agent")
# The final focused agent will be named "focused.agent"
# The intermediate steps will be named "focused.agent.1", "focused.agent.2", "focused.agent.3" and etc.
# add the "parameters" block
configuration["parameters"] = {
"input_model_path": os.path.join(ipynb_path, # path to prior or trained agent
"models",
"augmented.prior"),
"output_model_path": output_model_path, # location to store the focused agent
"input_smiles_path": os.path.join(ipynb_path, # path to input smiles
"data", # this is a dummy dataset consisting only of celecoxib
"smiles.smi"),
"save_every_n_epochs": 1, # how often to save the focused Agent. Here its stored after each epoch
"batch_size": 128, # batch size the input data
"num_epochs": 10, # number of epochs to focus the agent for
"standardize": True, # the input may contain SMILES strings that are invalid according to the agent
                          # this attempts to clean up the input dataset
"randomize": True, # this triggers data augmentation which is quite important for small datasets
"adaptive_lr_config": adaptive_lr_config # setting the learning rate behavior
}
# write the configuration file to the disc
configuration_JSON_path = os.path.join(output_dir, "transfer_learning_config.json")
with open(configuration_JSON_path, 'w') as f:
json.dump(configuration, f, indent=4, sort_keys=True)
###Output
_____no_output_____
###Markdown
Run `REINVENT` Now it is time to execute `REINVENT` locally. The command-line execution looks like this:
```
# activate environment
conda activate reinvent.v3.0

# execute REINVENT
python <your_path>/input.py <your_configuration>.json
```
###Code
%%capture captured_err_stream --no-stderr
# execute REINVENT from the command-line
!python {reinvent_dir}/input.py {configuration_JSON_path}
# print the output to a file, just to have it for documentation
with open(os.path.join(output_dir, "run.err"), 'w') as file:
file.write(captured_err_stream.stdout)
###Output
_____no_output_____
###Markdown
> **How to run this notebook (command-line)?**
> 1. Install the `ReinventCommunity` environment: `conda env create -f environment.yml`
> 2. Activate the environment: `conda activate ReinventCommunity`
> 3. Execute `jupyter`: `jupyter notebook`
> 4. Copy the link to a browser

`REINVENT 2.0`: transfer learning mode demo

The *transfer learning* mode can be used for either:
1. Initial training of the Agent - where a newly built agent is trained from scratch while iterating through sufficiently large datasets over many epochs
2. Focusing of a pre-trained Agent - where an already pre-trained agent is introduced to a small dataset for a small number of epochs.

In this notebook we are going to illustrate the second scenario. The small dataset can consist of a few hundred molecules that normally share the same features/scaffolds. The purpose of `Focusing` is to "learn" the common patterns/scaffolds in the structures. The `Focused` Agent will start producing molecules with the common scaffolds with higher probability. The `Focused` Agent can be used directly for *reinforcement learning*, thus having as a starting point the small chemical space it has been focused on.
###Code
# load dependencies
import os
import re
import json
import tempfile
# --------- change these path variables as required
reinvent_dir = os.path.expanduser("~/Desktop/Projects/Publications/2020/2020-04_REINVENT_2.0/Reinvent")
reinvent_env = os.path.expanduser("~/miniconda3/envs/reinvent_shared.v2.1")
output_dir = os.path.expanduser("~/Desktop/REINVENT_transfer_learning_demo")
# --------- do not change
# get the notebook's root path
try: ipynb_path
except NameError: ipynb_path = os.getcwd()
# if required, generate a folder to store the results
try:
os.mkdir(output_dir)
except FileExistsError:
pass
###Output
_____no_output_____
###Markdown
Setting up the configuration `REINVENT` has an entry point that loads a specified `JSON` file on startup. `JSON` is a low-level data format that allows specifying a fairly large number of parameters in a cascading fashion very quickly. The parameters are structured into *blocks* which can in turn contain blocks or simple values, such as *True* or *False*, strings and numbers. In this tutorial, we will go through the different blocks step-by-step, explaining their purpose and potential values for given parameters. Note that while we will write out the configuration as a `JSON` file in the end, in `python` we handle the same information as a simple `dict`.
###Code
# initialize the dictionary
configuration = {
"version": 2, # we are going to use REINVENT's newest release
"run_type": "transfer_learning" # other run types: "scoring", "validation",
# "transfer_learning",
# "reinforcement_learning" and
# "create_model"
}
# add block to specify whether to run locally or not and
# where to store the results and logging
configuration["logging"] = {
"sender": "http://127.0.0.1", # only relevant if "recipient" is set to "remote"
"recipient": "local", # either to local logging or use a remote REST-interface
"logging_path": os.path.join(output_dir, "progress.log"), # where the run's output is stored
"job_name": "Transfer Learning demo", # set an arbitrary job name for identification
"job_id": "demo" # only relevant if "recipient" is set to "remote"
}
###Output
_____no_output_____
###Markdown
We will need to specify a path to an agent (parameter `input_model_path`), which can be a prior or trained agent. For the purpose of this notebook, we will use a prior shipped with the `REINVENT 2.0` repository. The code block below defines the settings for the `adaptive_lr_config` property of the configuration. These parameters define the behavior of the learning rate. Note that the mode is set to `"constant"`. We recommend the default values, as they don't play a significant role for the purpose of focusing the agent.
###Code
adaptive_lr_config = {
"mode": "constant", # other modes: "exponential", "adaptive", "constant"
"gamma": 0.8,
"step": 1,
"start": 5E-4,
"min": 1E-5,
"threshold": 1E-4,
"average_steps": 4,
"patience": 8,
"restart_value": 1E-5,
"sample_size": 10000,
"restart_times": 0
}
output_model_path = os.path.join(output_dir, "focused.agent")
# The final focused agent will be named "focused.agent"
# The intermediate steps will be named "focused.agent.1", "focused.agent.2", "focused.agent.3" and etc.
# add the "parameters" block
configuration["parameters"] = {
"input_model_path": os.path.join(reinvent_dir, # path to prior or trained agent
"data",
"augmented.prior"),
"output_model_path": output_model_path, # location to store the focused agent
"input_smiles_path": os.path.join(reinvent_dir, # path to input smiles
"data", # this is a dummy dataset consisting only of celecoxib
"smiles.smi"),
"save_every_n_epochs": 1, # how often to save the focused Agent. Here its stored after each epoch
"batch_size": 128, # batch size the input data
"num_epochs": 10, # number of epochs to focus the agent for
"standardize": True, # the input may contain SMILES strings that are invalid according to the agent
                          # this attempts to clean up the input dataset
"randomize": True, # this triggers data augmentation which is quite important for small datasets
"adaptive_lr_config": adaptive_lr_config # setting the learning rate behavior
}
# write the configuration file to the disc
configuration_JSON_path = os.path.join(output_dir, "transfer_learning_config.json")
with open(configuration_JSON_path, 'w') as f:
json.dump(configuration, f, indent=4, sort_keys=True)
###Output
_____no_output_____
###Markdown
Run `REINVENT` Now it is time to execute `REINVENT` locally. The command-line execution looks like this:
```
# activate environment
conda activate reinvent_shared.v2.1

# execute REINVENT
python <your_path>/input.py <your_configuration>.json
```
###Code
%%capture captured_err_stream --no-stderr
# execute REINVENT from the command-line
!python {reinvent_dir}/input.py {configuration_JSON_path}
# print the output to a file, just to have it for documentation
with open(os.path.join(output_dir, "run.err"), 'w') as file:
file.write(captured_err_stream.stdout)
###Output
_____no_output_____ |
Assignment1/1_notmnist.ipynb | ###Markdown
Deep Learning
=============

Assignment 1
------------

The objective of this assignment is to learn about simple data curation practices, and familiarize you with some of the data we'll be reusing later. This notebook uses the [notMNIST](http://yaroslavvb.blogspot.com/2011/09/notmnist-dataset.html) dataset, which will be used with Python experiments. This dataset is designed to look like the classic [MNIST](http://yann.lecun.com/exdb/mnist/) dataset, while looking a little more like real data: it's a harder task, and the data is a lot less 'clean' than MNIST.
###Code
# These are all the modules we'll be using later. Make sure you can import them
# before proceeding further.
from __future__ import print_function
import imageio
import matplotlib.pyplot as plt
import numpy as np
import os
import sys
import tarfile
from IPython.display import display, Image
from sklearn.linear_model import LogisticRegression
from six.moves.urllib.request import urlretrieve
from six.moves import cPickle as pickle
# Config the matplotlib backend as plotting inline in IPython
%matplotlib inline
###Output
_____no_output_____
###Markdown
First, we'll download the dataset to our local machine. The data consists of characters rendered in a variety of fonts on a 28x28 image. The labels are limited to 'A' through 'J' (10 classes). The training set has about 500k labeled examples and the test set about 19000. Given these sizes, it should be possible to train models quickly on any machine.
###Code
url = 'https://commondatastorage.googleapis.com/books1000/'
last_percent_reported = None
data_root = '.' # Change me to store data elsewhere
def download_progress_hook(count, blockSize, totalSize):
"""A hook to report the progress of a download. This is mostly intended for users with
slow internet connections. Reports every 5% change in download progress.
"""
global last_percent_reported
percent = int(count * blockSize * 100 / totalSize)
if last_percent_reported != percent:
if percent % 5 == 0:
sys.stdout.write("%s%%" % percent)
sys.stdout.flush()
else:
sys.stdout.write(".")
sys.stdout.flush()
last_percent_reported = percent
def maybe_download(filename, expected_bytes, force=False):
"""Download a file if not present, and make sure it's the right size."""
dest_filename = os.path.join(data_root, filename)
if force or not os.path.exists(dest_filename):
print('Attempting to download:', filename)
filename, _ = urlretrieve(url + filename, dest_filename, reporthook=download_progress_hook)
print('\nDownload Complete!')
statinfo = os.stat(dest_filename)
if statinfo.st_size == expected_bytes:
print('Found and verified', dest_filename)
else:
raise Exception(
'Failed to verify ' + dest_filename + '. Can you get to it with a browser?')
return dest_filename
train_filename = maybe_download('notMNIST_large.tar.gz', 247336696)
test_filename = maybe_download('notMNIST_small.tar.gz', 8458043)
###Output
Found and verified ./notMNIST_large.tar.gz
Found and verified ./notMNIST_small.tar.gz
###Markdown
Extract the dataset from the compressed .tar.gz file.This should give you a set of directories, labeled A through J.
###Code
num_classes = 10
np.random.seed(133)
def maybe_extract(filename, force=False):
root = os.path.splitext(os.path.splitext(filename)[0])[0] # remove .tar.gz
if os.path.isdir(root) and not force:
# You may override by setting force=True.
print('%s already present - Skipping extraction of %s.' % (root, filename))
else:
print('Extracting data for %s. This may take a while. Please wait.' % root)
tar = tarfile.open(filename)
sys.stdout.flush()
tar.extractall(data_root)
tar.close()
data_folders = [
os.path.join(root, d) for d in sorted(os.listdir(root))
if os.path.isdir(os.path.join(root, d))]
if len(data_folders) != num_classes:
raise Exception(
'Expected %d folders, one per class. Found %d instead.' % (
num_classes, len(data_folders)))
print(data_folders)
return data_folders
train_folders = maybe_extract(train_filename)
test_folders = maybe_extract(test_filename)
###Output
./notMNIST_large already present - Skipping extraction of ./notMNIST_large.tar.gz.
['./notMNIST_large/A', './notMNIST_large/B', './notMNIST_large/C', './notMNIST_large/D', './notMNIST_large/E', './notMNIST_large/F', './notMNIST_large/G', './notMNIST_large/H', './notMNIST_large/I', './notMNIST_large/J']
./notMNIST_small already present - Skipping extraction of ./notMNIST_small.tar.gz.
['./notMNIST_small/A', './notMNIST_small/B', './notMNIST_small/C', './notMNIST_small/D', './notMNIST_small/E', './notMNIST_small/F', './notMNIST_small/G', './notMNIST_small/H', './notMNIST_small/I', './notMNIST_small/J']
###Markdown
---Problem 1---------Let's take a peek at some of the data to make sure it looks sensible. Each exemplar should be an image of a character A through J rendered in a different font. Display a sample of the images that we just downloaded. Hint: you can use the package IPython.display.---
###Code
import random
#Image(filename='test.png')
#Taken from https://stackoverflow.com/questions/36006136/how-to-display-images-in-a-row-with-ipython-display/38556650
from matplotlib.pyplot import figure, imshow, axis
from matplotlib.image import imread
def showImagesHorizontally(list_of_files):
fig = figure()
number_of_files = len(list_of_files)
for i in range(number_of_files):
a=fig.add_subplot(1,number_of_files,i+1) # nrows, ncols, index. So, add a subplot at that position.
image = imread(list_of_files[i]) # apparently you can read a list of files as one image?
imshow(image,cmap='Greys_r') # show image as grey
# imshow(image) # yup, looks odd if you don't use the cmap
axis('off') # turn off axis lines and labels.
#END COPIED CODE FROM https://stackoverflow.com/questions/36006136/how-to-display-images-in-a-row-with-ipython-display/38556650
exemplars_per_folder = 10 #note that the images are scaled to
exemplars_by_folder = []
for folder in test_folders:
filenames = os.listdir(folder)
folder_exemplars = []
for i in range(0, exemplars_per_folder):
file_choice = random.choice(filenames)
path_to_chosen_file = os.path.join(folder, file_choice)
folder_exemplars.append(path_to_chosen_file)
exemplars_by_folder.append(folder_exemplars)
for folder in exemplars_by_folder:
# using code from https://stackoverflow.com/questions/36006136/how-to-display-images-in-a-row-with-ipython-display/38556650
showImagesHorizontally(folder)
# the way I did it at first:
# for exemplar in folder:
# display(Image(exemplar))
###Output
_____no_output_____
###Markdown
Now let's load the data in a more manageable format. Since, depending on your computer setup you might not be able to fit it all in memory, we'll load each class into a separate dataset, store them on disk and curate them independently. Later we'll merge them into a single dataset of manageable size.We'll convert the entire dataset into a 3D array (image index, x, y) of floating point values, normalized to have approximately zero mean and standard deviation ~0.5 to make training easier down the road. A few images might not be readable, we'll just skip them.
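As a quick numeric check of the normalization used below (an illustrative addition): a pixel value $p$ is mapped to $(p - 255/2)/255$, so the extremes land at $\pm 0.5$:
```python
pixel_depth = 255.0
print((0 - pixel_depth / 2) / pixel_depth)    # -0.5 for the darkest pixel
print((255 - pixel_depth / 2) / pixel_depth)  #  0.5 for the brightest pixel
```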
###Code
image_size = 28 # Pixel width and height.
pixel_depth = 255.0 # Number of levels per pixel.
def load_letter(folder, min_num_images):
"""Load the data for a single letter label."""
image_files = os.listdir(folder)
dataset = np.ndarray(shape=(len(image_files), image_size, image_size),
dtype=np.float32)
print(folder)
num_images = 0
for image in image_files:
image_file = os.path.join(folder, image)
try:
image_data = (imageio.imread(image_file).astype(float) -
pixel_depth / 2) / pixel_depth
if image_data.shape != (image_size, image_size):
raise Exception('Unexpected image shape: %s' % str(image_data.shape))
dataset[num_images, :, :] = image_data
num_images = num_images + 1
except (IOError, ValueError) as e:
print('Could not read:', image_file, ':', e, '- it\'s ok, skipping.')
dataset = dataset[0:num_images, :, :]
if num_images < min_num_images:
raise Exception('Many fewer images than expected: %d < %d' %
(num_images, min_num_images))
print('Full dataset tensor:', dataset.shape)
print('Mean:', np.mean(dataset))
print('Standard deviation:', np.std(dataset))
return dataset
def maybe_pickle(data_folders, min_num_images_per_class, force=False):
dataset_names = []
for folder in data_folders:
set_filename = folder + '.pickle'
dataset_names.append(set_filename)
if os.path.exists(set_filename) and not force:
# You may override by setting force=True.
print('%s already present - Skipping pickling.' % set_filename)
else:
print('Pickling %s.' % set_filename)
dataset = load_letter(folder, min_num_images_per_class)
try:
with open(set_filename, 'wb') as f:
pickle.dump(dataset, f, pickle.HIGHEST_PROTOCOL)
except Exception as e:
print('Unable to save data to', set_filename, ':', e)
return dataset_names
train_datasets = maybe_pickle(train_folders, 45000)
test_datasets = maybe_pickle(test_folders, 1800)
###Output
./notMNIST_large/A.pickle already present - Skipping pickling.
./notMNIST_large/B.pickle already present - Skipping pickling.
./notMNIST_large/C.pickle already present - Skipping pickling.
./notMNIST_large/D.pickle already present - Skipping pickling.
./notMNIST_large/E.pickle already present - Skipping pickling.
./notMNIST_large/F.pickle already present - Skipping pickling.
./notMNIST_large/G.pickle already present - Skipping pickling.
./notMNIST_large/H.pickle already present - Skipping pickling.
./notMNIST_large/I.pickle already present - Skipping pickling.
./notMNIST_large/J.pickle already present - Skipping pickling.
./notMNIST_small/A.pickle already present - Skipping pickling.
./notMNIST_small/B.pickle already present - Skipping pickling.
./notMNIST_small/C.pickle already present - Skipping pickling.
./notMNIST_small/D.pickle already present - Skipping pickling.
./notMNIST_small/E.pickle already present - Skipping pickling.
./notMNIST_small/F.pickle already present - Skipping pickling.
./notMNIST_small/G.pickle already present - Skipping pickling.
./notMNIST_small/H.pickle already present - Skipping pickling.
./notMNIST_small/I.pickle already present - Skipping pickling.
./notMNIST_small/J.pickle already present - Skipping pickling.
###Markdown
---Problem 2---------Let's verify that the data still looks good. Displaying a sample of the labels and images from the ndarray. Hint: you can use matplotlib.pyplot.---
###Code
#Colin Leong's solution to Problem 2.
# unpickling takes forever so I'm breaking it out into its own cell
def load_pickled_dataset(name_of_pickle_file):
    with open(name_of_pickle_file, 'rb') as pkl_file:  # closes the file handle when done
        return pickle.load(pkl_file)
def unpickle_datasets(datasets):
unpickled_datasets = []
for dataset in datasets:
print("unpickling dataset {}...".format(dataset))
data = load_pickled_dataset(dataset)
unpickled_datasets.append(data)
return unpickled_datasets
unpickled_train = unpickle_datasets(train_datasets)
unpickled_test = unpickle_datasets(test_datasets)
#Problem 2 solution, part 2: pick at random from unpickled data, and display.
def show_images(list_of_images):
fig = figure()
number_of_images = len(list_of_images)
for i in range(number_of_images):
a=fig.add_subplot(1,number_of_images,i+1) # nrows, ncols, index. So, add a subplot at that position.
image = list_of_images[i]
imshow(image,cmap='Greys_r') # show image as grey
axis('off') # turn off axis lines and labels.
plt.show()
def get_one_sample_from_each_dataset(datasets):
samples = []
for dataset in datasets:
pic = random.choice(dataset)
samples.append(pic)
return samples
train_samples = get_one_sample_from_each_dataset(unpickled_train)
test_samples = get_one_sample_from_each_dataset(unpickled_train)
# print(train_samples)
show_images(train_samples)
show_images(test_samples)
###Output
_____no_output_____
###Markdown
---Problem 3---------Another check: we expect the data to be balanced across classes. Verify that.---
###Code
# how the heck do I do that.
def get_some_exemplars(unpickled_class_dataset, number=5):
exemplars = []
for i in range(0,number):
exemplars.append(random.choice(unpickled_class_dataset))
return exemplars
def get_dataset_stats(unpickled_datasets, name):
for index, class_dataset in enumerate(unpickled_datasets):
class_exemplars = get_some_exemplars(class_dataset)
print("Getting examples for {0}, class #{1}".format(name, index))
print("Some examples of this class:")
show_images(class_exemplars)
print("for this class we have {0} data items\n\n".format(len(class_dataset)))
get_dataset_stats(unpickled_train, "train data")
get_dataset_stats(unpickled_test, "test data")
###Output
Getting examples for train data, class #0
Some examples of this class:
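###Markdown
 A more compact numeric check of balance (a small illustrative addition, reusing `unpickled_train` from above):
```python
sizes = [len(d) for d in unpickled_train]
print(sizes)
print('max/min class-size ratio: %.3f' % (max(sizes) / min(sizes)))  # close to 1.0 means balanced
```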
###Markdown
Merge and prune the training data as needed. Depending on your computer setup, you might not be able to fit it all in memory, and you can tune `train_size` as needed. The labels will be stored into a separate array of integers 0 through 9.Also create a validation dataset for hyperparameter tuning.
###Code
def make_arrays(nb_rows, img_size):
if nb_rows:
dataset = np.ndarray((nb_rows, img_size, img_size), dtype=np.float32)
labels = np.ndarray(nb_rows, dtype=np.int32)
else:
dataset, labels = None, None
return dataset, labels
def merge_datasets(pickle_files, train_size, valid_size=0):
num_classes = len(pickle_files)
valid_dataset, valid_labels = make_arrays(valid_size, image_size)
train_dataset, train_labels = make_arrays(train_size, image_size)
vsize_per_class = valid_size // num_classes
tsize_per_class = train_size // num_classes
start_v, start_t = 0, 0
end_v, end_t = vsize_per_class, tsize_per_class
end_l = vsize_per_class+tsize_per_class
for label, pickle_file in enumerate(pickle_files):
try:
with open(pickle_file, 'rb') as f:
letter_set = pickle.load(f)
# let's shuffle the letters to have random validation and training set
np.random.shuffle(letter_set)
if valid_dataset is not None:
valid_letter = letter_set[:vsize_per_class, :, :]
valid_dataset[start_v:end_v, :, :] = valid_letter
valid_labels[start_v:end_v] = label
start_v += vsize_per_class
end_v += vsize_per_class
train_letter = letter_set[vsize_per_class:end_l, :, :]
train_dataset[start_t:end_t, :, :] = train_letter
train_labels[start_t:end_t] = label
start_t += tsize_per_class
end_t += tsize_per_class
except Exception as e:
print('Unable to process data from', pickle_file, ':', e)
raise
return valid_dataset, valid_labels, train_dataset, train_labels
train_size = 200000
valid_size = 10000
test_size = 10000
valid_dataset, valid_labels, train_dataset, train_labels = merge_datasets(
train_datasets, train_size, valid_size)
_, _, test_dataset, test_labels = merge_datasets(test_datasets, test_size)
print('Training:', train_dataset.shape, train_labels.shape)
print('Validation:', valid_dataset.shape, valid_labels.shape)
print('Testing:', test_dataset.shape, test_labels.shape)
###Output
Training: (200000, 28, 28) (200000,)
Validation: (10000, 28, 28) (10000,)
Testing: (10000, 28, 28) (10000,)
###Markdown
Next, we'll randomize the data. It's important to have the labels well shuffled for the training and test distributions to match.
###Code
def randomize(dataset, labels):
permutation = np.random.permutation(labels.shape[0])
shuffled_dataset = dataset[permutation,:,:]
shuffled_labels = labels[permutation]
return shuffled_dataset, shuffled_labels
train_dataset, train_labels = randomize(train_dataset, train_labels)
test_dataset, test_labels = randomize(test_dataset, test_labels)
valid_dataset, valid_labels = randomize(valid_dataset, valid_labels)
###Output
_____no_output_____
###Markdown
---Problem 4---------Convince yourself that the data is still good after shuffling!---
###Code
#problem 4 solution:
label_to_char = {0:'a',1:'b',2:'c',3:'d',4:'e',5:'f',6:'g',7:'h',8:'i',9:'j'}
def labels_to_chars(labels):
label_chars =[]
for label in labels:
label_char = label_to_char[label]
label_chars.append(label_char)
return label_chars
def get_matching_items_randomly_from_two_lists(first, second, num_tuples=5):
    # assuming both lists have the same length
#solution adapted from https://stackoverflow.com/questions/19485641/python-random-sample-of-two-arrays-but-matching-indices
idx = np.random.choice(np.arange(len(first)), num_tuples, replace=False)
# print("picking {0} random items using idx {1}".format(num_tuples, idx))
# print(type(idx))
first_samples = first[idx]
second_samples = second[idx]
return first_samples, second_samples
def check_after_shuffle(dataset_name, dataset_to_check, labels_to_check, num_samples=15):
imgs, labels = get_matching_items_randomly_from_two_lists(dataset_to_check, labels_to_check,num_samples)
label_chars = labels_to_chars(labels)
print("sample {0} labels:{1}".format(dataset_name,label_chars))
show_images(imgs)
num_samples = 12
check_after_shuffle("train", train_dataset, train_labels, num_samples)
check_after_shuffle("val", valid_dataset, valid_labels, num_samples)
check_after_shuffle("test", test_dataset, test_labels, num_samples)
###Output
sample train labels:['i', 'd', 'i', 'd', 'i', 'i', 'f', 'd', 'b', 'g', 'c', 'i']
###Markdown
Finally, let's save the data for later reuse:
###Code
pickle_file = os.path.join(data_root, 'notMNIST.pickle')
try:
f = open(pickle_file, 'wb')
save = {
'train_dataset': train_dataset,
'train_labels': train_labels,
'valid_dataset': valid_dataset,
'valid_labels': valid_labels,
'test_dataset': test_dataset,
'test_labels': test_labels,
}
pickle.dump(save, f, pickle.HIGHEST_PROTOCOL)
f.close()
except Exception as e:
print('Unable to save data to', pickle_file, ':', e)
raise
statinfo = os.stat(pickle_file)
print('Compressed pickle size:', statinfo.st_size)
###Output
Compressed pickle size: 690800441
###Markdown
---Problem 5---------By construction, this dataset might contain a lot of overlapping samples, including training data that's also contained in the validation and test set! Overlap between training and test can skew the results if you expect to use your model in an environment where there is never an overlap, but are actually ok if you expect to see training samples recur when you use it.Measure how much overlap there is between training, validation and test samples.Optional questions:- What about near duplicates between datasets? (images that are almost identical)- Create a sanitized validation and test set, and compare your accuracy on those in subsequent assignments.---
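For the optional near-duplicate question, one cheap heuristic is to hash a coarsely quantized copy of each image so that small pixel-level differences collide to the same hash. Below is a minimal sketch; the helper name and the quantization level are illustrative assumptions, not part of the original solution.
###Code
import numpy as np

def quantized_hashes(dataset, levels=4):
    # Round each normalized pixel value (roughly in [-0.5, 0.5]) to a coarse
    # grid of `levels` steps; near-identical images then hash identically.
    quantized = np.round(dataset * levels) / levels
    return set(hash(img.tobytes()) for img in quantized)

# Example usage, assuming train_dataset and valid_dataset are the merged arrays:
# near_overlap = quantized_hashes(train_dataset) & quantized_hashes(valid_dataset)
# print("approximate near-duplicate overlap:", len(near_overlap))
###Output
_____no_output_____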
###Code
# train_dataset = test_dataset
# valid_dataset = test_dataset
# Checking overlap: hash each image's bytes and compare the hash sets across datasets.
def generate_hashes(list_of_numpy_arrays):
    # Hash the raw bytes of each image; tobytes() is portable across Python 2/3,
    # whereas hashing the .data buffer directly fails for float arrays on Python 3.
    hashes = [hash(item.tobytes()) for item in list_of_numpy_arrays]
    return hashes
def check_within_dataset(hashes, hashes_set, name):
hashes_len = len(hashes)
set_len = len(hashes_set)
diff = hashes_len-set_len
print("within {0} dataset, there are {1} items, but only {2} unique items, which works out to {3} items that are repeats".format(name, hashes_len, set_len, diff))
return diff
def check_intersections(set1, set2, name1, name2):
intersections=set1.intersection(set2)
print("between {0} and {1} there are {2} unique items that are in both at least once".format(name1,name2,len(intersections)))
return intersections
train_hashes = generate_hashes(train_dataset)
valid_hashes = generate_hashes(valid_dataset)
test_hashes = generate_hashes(test_dataset)
# train_hashes = [1, 2, 3]
# valid_hashes = [1, 2, 3]
# test_hashes = [1, 2, 3, 3]
train_hashes_set=set(train_hashes)
train_repeats = check_within_dataset(train_hashes,train_hashes_set, "train")
valid_hashes_set = set(valid_hashes)
valid_repeats = check_within_dataset(valid_hashes,valid_hashes_set, "valid")
test_hashes_set=set(test_hashes)
test_repeats = check_within_dataset(test_hashes,test_hashes_set, "test")
repeats_within_datasets = train_repeats + valid_repeats + test_repeats
print("Total repeats within datasets: {}".format(repeats_within_datasets))
train_val_intersections = check_intersections(train_hashes_set, valid_hashes_set, "train", "valid")
train_test_intersections = check_intersections(train_hashes_set, test_hashes_set, "train", "test")
val_test_intersections = check_intersections(valid_hashes_set, test_hashes_set, "valid", "test")
intersected_train_items = train_val_intersections.union(train_test_intersections)
intersected_valid_items = train_val_intersections.union(val_test_intersections)
intersected_test_items = train_test_intersections.union(val_test_intersections)
print("Number of unique items in {} that can be found in other sets: {}".format("train",len(intersected_train_items)))
print("Number of unique items in {} that can be found in other sets: {}".format("valid",len(intersected_valid_items)))
print("Number of unique items in {} that can be found in other sets: {}".format("test",len(intersected_test_items)))
all_hashes = train_hashes+valid_hashes+test_hashes
all_hashes_set = set(all_hashes)
all_repeats = check_within_dataset(all_hashes, all_hashes_set, "all")
repeats_percent = float(all_repeats)/float(len(all_hashes)) * 100
items_only_in_train = train_hashes_set - valid_hashes_set - test_hashes_set
items_only_in_valid = valid_hashes_set - train_hashes_set - test_hashes_set
items_only_in_test = test_hashes_set - train_hashes_set - valid_hashes_set
print("There are {} items that only exist in train".format(len(items_only_in_train)))
print("There are {} items that only exist in valid".format(len(items_only_in_valid)))
print("There are {} items that only exist in test".format(len(items_only_in_test)))
set([1, 2, 3]) - set([2]) - set([3])
print("Total percentage of repeat items in all datasets is about {:.2f} %".format(repeats_percent))
print("Total number of repeat items in all datasets is {}".format(all_repeats))
repeats_due_to_overlap = all_repeats - repeats_within_datasets
overlap_percent = float(repeats_due_to_overlap)/float(len(all_hashes)) *100
print("We previously found that repeats within datasets totaled to {}".format(repeats_within_datasets))
print("Repeats due to overlap is therefore {0}. \nOut of {1} total items, that gives an overlap percentage of {2:.2f}%".format(repeats_due_to_overlap, len(all_hashes), overlap_percent))
###Output
within train dataset, there are 200000 items, but only 187350 unique items, which works out to 12650 items that are repeats
within valid dataset, there are 10000 items, but only 9863 unique items, which works out to 137 items that are repeats
within test dataset, there are 10000 items, but only 9802 unique items, which works out to 198 items that are repeats
Total repeats within datasets: 12985
between train and valid there are 1003 unique items that are in both at least once
between train and test there are 1174 unique items that are in both at least once
between valid and test there are 72 unique items that are in both at least once
Number of unique items in train that can be found in other sets: 2153
Number of unique items in valid that can be found in other sets: 1051
Number of unique items in test that can be found in other sets: 1222
within all dataset, there are 220000 items, but only 204790 unique items, which works out to 15210 items that are repeats
There are 185197 items that only exist in train
There are 8812 items that only exist in valid
There are 8580 items that only exist in test
Total percentage of repeat items in all datasets is about 6.91 %
Total number of repeat items in all datasets is 15210
We previously found that repeats within datasets totaled to 12985
Repeats due to overlap is therefore 2225.
Out of 220000 total items, that gives an overlap percentage of 1.01%
###Markdown
---Problem 6---------Let's get an idea of what an off-the-shelf classifier can give you on this data. It's always good to check that there is something to learn, and that it's a problem that is not so trivial that a canned solution solves it.Train a simple model on this data using 50, 100, 1000 and 5000 training samples. Hint: you can use the LogisticRegression model from sklearn.linear_model.Optional question: train an off-the-shelf model on all the data!---
###Code
#Training time.
def reshape_to_sklearn_format(dataset):
num_items, nx, ny = dataset.shape
return dataset.reshape(num_items, nx*ny)
default_settings_classifier = LogisticRegression()
num_to_train_with = 500
#num_to_train_with = len(train_dataset)
#sample_data_for_training, sample_labels_for_training = train_dataset[:num_to_train_with], train_labels[:num_to_train_with]
sample_data_for_training, sample_labels_for_training = get_matching_items_randomly_from_two_lists(train_dataset, train_labels, num_to_train_with)
num_to_test_with = 1010
sample_data_for_testing, sample_labels_for_testing = get_matching_items_randomly_from_two_lists(train_dataset, train_labels, num_to_test_with)
# Gotta reshape, according to https://stackoverflow.com/questions/34972142/sklearn-logistic-regression-valueerror-found-array-with-dim-3-estimator-expec
# Basically we want the 28x28 images flattened out.
sample_data_for_training = reshape_to_sklearn_format(sample_data_for_training)
sample_data_for_testing = reshape_to_sklearn_format(sample_data_for_testing)
default_settings_classifier.fit(sample_data_for_training, sample_labels_for_training)
df_score = default_settings_classifier.score(sample_data_for_testing, sample_labels_for_testing)
# Settings lifted from
# http://scikit-learn.org/stable/auto_examples/linear_model/plot_sparse_logistic_regression_mnist.html
# without understanding
fancy_settings_classifier = LogisticRegression(C=50. / num_to_train_with,
multi_class='multinomial',
penalty='l1', solver='saga', tol=0.1)
fancy_settings_classifier.fit(sample_data_for_training, sample_labels_for_training)
fs_score = fancy_settings_classifier.score(sample_data_for_testing, sample_labels_for_testing)
fancy_settings_classifier_l2 = LogisticRegression(C=50. / num_to_train_with,
multi_class='multinomial',
penalty='l2', solver='saga', tol=0.1)
fancy_settings_classifier_l2.fit(sample_data_for_training, sample_labels_for_training)
fs_l2_score = fancy_settings_classifier_l2.score(sample_data_for_testing, sample_labels_for_testing)
print("Score for classifier with default settings: {}".format(df_score))
print("Score for classifier with fancy settings and l1: {}".format(fs_score))
print("Score for classifier with fancy settings and l2: {}".format(fs_l2_score))
solver_list = ['newton-cg', 'lbfgs', 'sag', 'saga']
for choice in solver_list:
    print("trying solver {}".format(choice))
    classifier = LogisticRegression(solver=choice, penalty='l2')
    classifier.fit(sample_data_for_training, sample_labels_for_training)
    solver_score = classifier.score(sample_data_for_testing, sample_labels_for_testing)
    print("Score for classifier using solver {0}: {1}".format(choice, solver_score))
###Output
Score for classifier with default settings: 0.774257425743
Score for classifier with fancy settings: 0.70396039604
Score for classifier with fancy settings and l2: 0.805940594059
Score for classifier using solver newton-cg: 0.774257425743
Score for classifier using solver lbfgs: 0.774257425743
Score for classifier using solver sag: 0.774257425743
Score for classifier using solver saga: 0.774257425743
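###Markdown
The problem statement asks for 50, 100, 1000 and 5000 training samples, while the run above used a single sample size. Below is a minimal sketch of that sweep, reusing the helpers defined above; note that a fresh classifier is created for each size so the fits do not overwrite each other.
###Code
# Sweep over the requested training-set sizes and score on a fixed test sample.
flat_train = reshape_to_sklearn_format(train_dataset[:5000])
flat_labels = train_labels[:5000]
for n in [50, 100, 1000, 5000]:
    clf = LogisticRegression()  # fresh model per size
    clf.fit(flat_train[:n], flat_labels[:n])
    score = clf.score(sample_data_for_testing, sample_labels_for_testing)
    print("{0} training samples -> test score {1:.3f}".format(n, score))
###Output
_____no_output_____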
###Markdown
Deep Learning=============Assignment 1------------The objective of this assignment is to learn about simple data curation practices, and familiarize you with some of the data we'll be reusing later.This notebook uses the [notMNIST](http://yaroslavvb.blogspot.com/2011/09/notmnist-dataset.html) dataset to be used with python experiments. This dataset is designed to look like the classic [MNIST](http://yann.lecun.com/exdb/mnist/) dataset, while looking a little more like real data: it's a harder task, and the data is a lot less 'clean' than MNIST.
###Code
# These are all the modules we'll be using later. Make sure you can import them
# before proceeding further.
from __future__ import print_function
import matplotlib.pyplot as plt
import numpy as np
import os
import sys
import tarfile
from IPython.display import display, Image
from scipy import ndimage
from sklearn.linear_model import LogisticRegression
from six.moves.urllib.request import urlretrieve
from six.moves import cPickle as pickle
# Config the matlotlib backend as plotting inline in IPython
%matplotlib inline
###Output
_____no_output_____
###Markdown
First, we'll download the dataset to our local machine. The data consists of characters rendered in a variety of fonts on a 28x28 image. The labels are limited to 'A' through 'J' (10 classes). The training set has about 500k labelled examples, and the test set about 19,000. Given these sizes, it should be possible to train models quickly on any machine.
###Code
url = 'http://commondatastorage.googleapis.com/books1000/'
last_percent_reported = None
def download_progress_hook(count, blockSize, totalSize):
"""A hook to report the progress of a download. This is mostly intended for users with
slow internet connections. Reports every 1% change in download progress.
"""
global last_percent_reported
percent = int(count * blockSize * 100 / totalSize)
if last_percent_reported != percent:
if percent % 5 == 0:
sys.stdout.write("%s%%" % percent)
sys.stdout.flush()
else:
sys.stdout.write(".")
sys.stdout.flush()
last_percent_reported = percent
def maybe_download(filename, expected_bytes, force=False):
"""Download a file if not present, and make sure it's the right size."""
if force or not os.path.exists(filename):
print('Attempting to download:', filename)
filename, _ = urlretrieve(url + filename, filename, reporthook=download_progress_hook)
print('\nDownload Complete!')
statinfo = os.stat(filename)
if statinfo.st_size == expected_bytes:
print('Found and verified', filename)
else:
raise Exception(
'Failed to verify ' + filename + '. Can you get to it with a browser?')
return filename
train_filename = maybe_download('notMNIST_large.tar.gz', 247336696)
test_filename = maybe_download('notMNIST_small.tar.gz', 8458043)
###Output
Attempting to download: notMNIST_large.tar.gz
0%....5%....10%....15%....20%....25%....30%....35%....40%....45%....50%....55%....60%....65%....70%....75%....80%....85%....90%....95%....100%
Download Complete!
Found and verified notMNIST_large.tar.gz
Attempting to download: notMNIST_small.tar.gz
0%....5%....10%....15%....20%....25%....30%....35%....40%....45%....50%....55%....60%....65%....70%....75%....80%....85%....90%....95%....100%
Download Complete!
Found and verified notMNIST_small.tar.gz
###Markdown
Extract the dataset from the compressed .tar.gz file.This should give you a set of directories, labelled A through J.
###Code
num_classes = 10
np.random.seed(133)
def maybe_extract(filename, force=False):
root = os.path.splitext(os.path.splitext(filename)[0])[0] # remove .tar.gz
if os.path.isdir(root) and not force:
# You may override by setting force=True.
print('%s already present - Skipping extraction of %s.' % (root, filename))
else:
print('Extracting data for %s. This may take a while. Please wait.' % root)
tar = tarfile.open(filename)
sys.stdout.flush()
tar.extractall()
tar.close()
data_folders = [
os.path.join(root, d) for d in sorted(os.listdir(root))
if os.path.isdir(os.path.join(root, d))]
if len(data_folders) != num_classes:
raise Exception(
'Expected %d folders, one per class. Found %d instead.' % (
num_classes, len(data_folders)))
print(data_folders)
return data_folders
train_folders = maybe_extract(train_filename)
test_folders = maybe_extract(test_filename)
###Output
Extracting data for notMNIST_large. This may take a while. Please wait.
['notMNIST_large/A', 'notMNIST_large/B', 'notMNIST_large/C', 'notMNIST_large/D', 'notMNIST_large/E', 'notMNIST_large/F', 'notMNIST_large/G', 'notMNIST_large/H', 'notMNIST_large/I', 'notMNIST_large/J']
Extracting data for notMNIST_small. This may take a while. Please wait.
['notMNIST_small/A', 'notMNIST_small/B', 'notMNIST_small/C', 'notMNIST_small/D', 'notMNIST_small/E', 'notMNIST_small/F', 'notMNIST_small/G', 'notMNIST_small/H', 'notMNIST_small/I', 'notMNIST_small/J']
###Markdown
---Problem 1---------Let's take a peek at some of the data to make sure it looks sensible. Each exemplar should be an image of a character A through J rendered in a different font. Display a sample of the images that we just downloaded. Hint: you can use the package IPython.display.---
###Code
from IPython.display import Image
DIR = 'notMNIST_large/A/'
files = os.listdir(DIR)
ct = 10
for fil in files:
filname = DIR + fil
display(Image(filename=filname))
ct -= 1
if(ct == 0):
break
###Output
_____no_output_____
###Markdown
Now let's load the data in a more manageable format. Since, depending on your computer setup, you might not be able to fit it all in memory, we'll load each class into a separate dataset, store them on disk and curate them independently. Later we'll merge them into a single dataset of manageable size.We'll convert the entire dataset into a 3D array (image index, x, y) of floating point values, normalized to have approximately zero mean and standard deviation ~0.5 to make training easier down the road. A few images might not be readable; we'll just skip them.
###Code
image_size = 28 # Pixel width and height.
pixel_depth = 255.0 # Number of levels per pixel.
def load_letter(folder, min_num_images):
"""Load the data for a single letter label."""
image_files = os.listdir(folder)
dataset = np.ndarray(shape=(len(image_files), image_size, image_size),
dtype=np.float32)
print(folder)
num_images = 0
for image in image_files:
image_file = os.path.join(folder, image)
try:
image_data = (ndimage.imread(image_file).astype(float) -
pixel_depth / 2) / pixel_depth
if image_data.shape != (image_size, image_size):
raise Exception('Unexpected image shape: %s' % str(image_data.shape))
dataset[num_images, :, :] = image_data
num_images = num_images + 1
except IOError as e:
print('Could not read:', image_file, ':', e, '- it\'s ok, skipping.')
dataset = dataset[0:num_images, :, :]
if num_images < min_num_images:
raise Exception('Many fewer images than expected: %d < %d' %
(num_images, min_num_images))
print('Full dataset tensor:', dataset.shape)
print('Mean:', np.mean(dataset))
print('Standard deviation:', np.std(dataset))
return dataset
def maybe_pickle(data_folders, min_num_images_per_class, force=False):
dataset_names = []
for folder in data_folders:
set_filename = folder + '.pickle'
dataset_names.append(set_filename)
if os.path.exists(set_filename) and not force:
# You may override by setting force=True.
print('%s already present - Skipping pickling.' % set_filename)
else:
print('Pickling %s.' % set_filename)
dataset = load_letter(folder, min_num_images_per_class)
try:
with open(set_filename, 'wb') as f:
pickle.dump(dataset, f, pickle.HIGHEST_PROTOCOL)
except Exception as e:
print('Unable to save data to', set_filename, ':', e)
return dataset_names
train_datasets = maybe_pickle(train_folders, 45000)
test_datasets = maybe_pickle(test_folders, 1800)
###Output
Pickling notMNIST_large/A.pickle.
notMNIST_large/A
Could not read: notMNIST_large/A/Um9tYW5hIEJvbGQucGZi.png : cannot identify image file 'notMNIST_large/A/Um9tYW5hIEJvbGQucGZi.png' - it's ok, skipping.
Could not read: notMNIST_large/A/RnJlaWdodERpc3BCb29rSXRhbGljLnR0Zg==.png : cannot identify image file 'notMNIST_large/A/RnJlaWdodERpc3BCb29rSXRhbGljLnR0Zg==.png' - it's ok, skipping.
Could not read: notMNIST_large/A/SG90IE11c3RhcmQgQlROIFBvc3Rlci50dGY=.png : cannot identify image file 'notMNIST_large/A/SG90IE11c3RhcmQgQlROIFBvc3Rlci50dGY=.png' - it's ok, skipping.
Full dataset tensor: (52909, 28, 28)
Mean: -0.12825
Standard deviation: 0.443121
Pickling notMNIST_large/B.pickle.
notMNIST_large/B
Could not read: notMNIST_large/B/TmlraXNFRi1TZW1pQm9sZEl0YWxpYy5vdGY=.png : cannot identify image file 'notMNIST_large/B/TmlraXNFRi1TZW1pQm9sZEl0YWxpYy5vdGY=.png' - it's ok, skipping.
Full dataset tensor: (52911, 28, 28)
Mean: -0.00756303
Standard deviation: 0.454491
Pickling notMNIST_large/C.pickle.
notMNIST_large/C
Full dataset tensor: (52912, 28, 28)
Mean: -0.142258
Standard deviation: 0.439807
Pickling notMNIST_large/D.pickle.
notMNIST_large/D
Could not read: notMNIST_large/D/VHJhbnNpdCBCb2xkLnR0Zg==.png : cannot identify image file 'notMNIST_large/D/VHJhbnNpdCBCb2xkLnR0Zg==.png' - it's ok, skipping.
Full dataset tensor: (52911, 28, 28)
Mean: -0.0573677
Standard deviation: 0.455647
Pickling notMNIST_large/E.pickle.
notMNIST_large/E
Full dataset tensor: (52912, 28, 28)
Mean: -0.069899
Standard deviation: 0.452941
Pickling notMNIST_large/F.pickle.
notMNIST_large/F
Full dataset tensor: (52912, 28, 28)
Mean: -0.125583
Standard deviation: 0.447089
Pickling notMNIST_large/G.pickle.
notMNIST_large/G
Full dataset tensor: (52912, 28, 28)
Mean: -0.0945813
Standard deviation: 0.44624
Pickling notMNIST_large/H.pickle.
notMNIST_large/H
Full dataset tensor: (52912, 28, 28)
Mean: -0.0685222
Standard deviation: 0.454232
Pickling notMNIST_large/I.pickle.
notMNIST_large/I
Full dataset tensor: (52912, 28, 28)
Mean: 0.0307862
Standard deviation: 0.468899
Pickling notMNIST_large/J.pickle.
notMNIST_large/J
Full dataset tensor: (52911, 28, 28)
Mean: -0.153359
Standard deviation: 0.443656
Pickling notMNIST_small/A.pickle.
notMNIST_small/A
Could not read: notMNIST_small/A/RGVtb2NyYXRpY2FCb2xkT2xkc3R5bGUgQm9sZC50dGY=.png : cannot identify image file 'notMNIST_small/A/RGVtb2NyYXRpY2FCb2xkT2xkc3R5bGUgQm9sZC50dGY=.png' - it's ok, skipping.
Full dataset tensor: (1872, 28, 28)
Mean: -0.132626
Standard deviation: 0.445128
Pickling notMNIST_small/B.pickle.
notMNIST_small/B
Full dataset tensor: (1873, 28, 28)
Mean: 0.00535608
Standard deviation: 0.457115
Pickling notMNIST_small/C.pickle.
notMNIST_small/C
Full dataset tensor: (1873, 28, 28)
Mean: -0.141521
Standard deviation: 0.44269
Pickling notMNIST_small/D.pickle.
notMNIST_small/D
Full dataset tensor: (1873, 28, 28)
Mean: -0.0492167
Standard deviation: 0.459759
Pickling notMNIST_small/E.pickle.
notMNIST_small/E
Full dataset tensor: (1873, 28, 28)
Mean: -0.0599148
Standard deviation: 0.45735
Pickling notMNIST_small/F.pickle.
notMNIST_small/F
Could not read: notMNIST_small/F/Q3Jvc3NvdmVyIEJvbGRPYmxpcXVlLnR0Zg==.png : cannot identify image file 'notMNIST_small/F/Q3Jvc3NvdmVyIEJvbGRPYmxpcXVlLnR0Zg==.png' - it's ok, skipping.
Full dataset tensor: (1872, 28, 28)
Mean: -0.118185
Standard deviation: 0.452279
Pickling notMNIST_small/G.pickle.
notMNIST_small/G
Full dataset tensor: (1872, 28, 28)
Mean: -0.0925503
Standard deviation: 0.449006
Pickling notMNIST_small/H.pickle.
notMNIST_small/H
Full dataset tensor: (1872, 28, 28)
Mean: -0.0586892
Standard deviation: 0.458759
Pickling notMNIST_small/I.pickle.
notMNIST_small/I
Full dataset tensor: (1872, 28, 28)
Mean: 0.0526451
Standard deviation: 0.471894
Pickling notMNIST_small/J.pickle.
notMNIST_small/J
Full dataset tensor: (1872, 28, 28)
Mean: -0.151689
Standard deviation: 0.448014
###Markdown
---Problem 2---------Let's verify that the data still looks good. Displaying a sample of the labels and images from the ndarray. Hint: you can use matplotlib.pyplot.---
###Code
with open(train_datasets[0], "rb") as f:
    train_dataset = pickle.load(f)
idx=np.random.randint(0, len(train_dataset))
%matplotlib inline
plt.imshow(train_dataset[idx])
plt.title('A')
###Output
_____no_output_____
###Markdown
---Problem 3---------Another check: we expect the data to be balanced across classes. Verify that.---
###Code
for ch in range(ord('a'), ord('j') + 1):
with open(train_datasets[ch - ord('a')], "rb") as fil:
data = pickle.load(fil)
print("Size for class " + chr(ch) + " is " + str(len(data)))
###Output
Size for class a is 52909
Size for class b is 52911
Size for class c is 52912
Size for class d is 52911
Size for class e is 52912
Size for class f is 52912
Size for class g is 52912
Size for class h is 52912
Size for class i is 52912
Size for class j is 52911
###Markdown
Merge and prune the training data as needed. Depending on your computer setup, you might not be able to fit it all in memory, and you can tune `train_size` as needed. The labels will be stored into a separate array of integers 0 through 9.Also create a validation dataset for hyperparameter tuning.
###Code
def make_arrays(nb_rows, img_size):
if nb_rows:
dataset = np.ndarray((nb_rows, img_size, img_size), dtype=np.float32)
labels = np.ndarray(nb_rows, dtype=np.int32)
else:
dataset, labels = None, None
return dataset, labels
def merge_datasets(pickle_files, train_size, valid_size=0):
num_classes = len(pickle_files)
valid_dataset, valid_labels = make_arrays(valid_size, image_size)
train_dataset, train_labels = make_arrays(train_size, image_size)
vsize_per_class = valid_size // num_classes
tsize_per_class = train_size // num_classes
start_v, start_t = 0, 0
end_v, end_t = vsize_per_class, tsize_per_class
end_l = vsize_per_class+tsize_per_class
for label, pickle_file in enumerate(pickle_files):
try:
with open(pickle_file, 'rb') as f:
letter_set = pickle.load(f)
# let's shuffle the letters to have random validation and training set
np.random.shuffle(letter_set)
if valid_dataset is not None:
valid_letter = letter_set[:vsize_per_class, :, :]
valid_dataset[start_v:end_v, :, :] = valid_letter
valid_labels[start_v:end_v] = label
start_v += vsize_per_class
end_v += vsize_per_class
train_letter = letter_set[vsize_per_class:end_l, :, :]
train_dataset[start_t:end_t, :, :] = train_letter
train_labels[start_t:end_t] = label
start_t += tsize_per_class
end_t += tsize_per_class
except Exception as e:
print('Unable to process data from', pickle_file, ':', e)
raise
return valid_dataset, valid_labels, train_dataset, train_labels
train_size = 200000
valid_size = 10000
test_size = 10000
valid_dataset, valid_labels, train_dataset, train_labels = merge_datasets(
train_datasets, train_size, valid_size)
_, _, test_dataset, test_labels = merge_datasets(test_datasets, test_size)
print('Training:', train_dataset.shape, train_labels.shape)
print('Validation:', valid_dataset.shape, valid_labels.shape)
print('Testing:', test_dataset.shape, test_labels.shape)
###Output
Training: (200000, 28, 28) (200000,)
Validation: (10000, 28, 28) (10000,)
Testing: (10000, 28, 28) (10000,)
###Markdown
Next, we'll randomize the data. It's important to have the labels well shuffled for the training and test distributions to match.
###Code
def randomize(dataset, labels):
permutation = np.random.permutation(labels.shape[0])
shuffled_dataset = dataset[permutation,:,:]
shuffled_labels = labels[permutation]
return shuffled_dataset, shuffled_labels
train_dataset, train_labels = randomize(train_dataset, train_labels)
test_dataset, test_labels = randomize(test_dataset, test_labels)
valid_dataset, valid_labels = randomize(valid_dataset, valid_labels)
###Output
_____no_output_____
###Markdown
---Problem 4---------Convince yourself that the data is still good after shuffling!---
###Code
%matplotlib inline
idx=np.random.randint(0, len(train_dataset))
plt.imshow(train_dataset[idx])
plt.title(chr(train_labels[idx]+ord('A')))
###Output
_____no_output_____
###Markdown
Finally, let's save the data for later reuse:
###Code
pickle_file = 'notMNIST.pickle'
try:
f = open(pickle_file, 'wb')
save = {
'train_dataset': train_dataset,
'train_labels': train_labels,
'valid_dataset': valid_dataset,
'valid_labels': valid_labels,
'test_dataset': test_dataset,
'test_labels': test_labels,
}
pickle.dump(save, f, pickle.HIGHEST_PROTOCOL)
f.close()
except Exception as e:
print('Unable to save data to', pickle_file, ':', e)
raise
statinfo = os.stat(pickle_file)
print('Compressed pickle size:', statinfo.st_size)
###Output
Compressed pickle size: 690800441
###Markdown
---Problem 5---------By construction, this dataset might contain a lot of overlapping samples, including training data that's also contained in the validation and test set! Overlap between training and test can skew the results if you expect to use your model in an environment where there is never an overlap, but are actually ok if you expect to see training samples recur when you use it.Measure how much overlap there is between training, validation and test samples.Optional questions:- What about near duplicates between datasets? (images that are almost identical)- Create a sanitized validation and test set, and compare your accuracy on those in subsequent assignments.---
###Code
from hashlib import md5
%time train = set([md5(x).hexdigest() for x in train_dataset])
%time test = set([md5(x).hexdigest() for x in test_dataset])
%time valid = set([md5(x).hexdigest() for x in valid_dataset])
print("Overlap TRAIN TEST = ", len(train.intersection(test)))
print("Overlap TRAIN VALID = ", len(train.intersection(valid)))
print("Overlap VALID TEST = ", len(valid.intersection(test)))
total_dataset = np.concatenate((train_dataset, test_dataset, valid_dataset))
total_labels = np.concatenate((train_labels, test_labels, valid_labels))
dataset = []
labels = []
hashes=set()
for i in range(len(total_dataset)):  # range (not xrange) keeps this Python 3 compatible
cur = md5(total_dataset[i]).hexdigest()
if not cur in hashes:
hashes.add(cur)
dataset.append(total_dataset[i])
labels.append(total_labels[i])
def randomize(dataset, labels):
permutation = np.random.permutation(labels.shape[0])
shuffled_dataset = dataset[permutation,:,:]
shuffled_labels = labels[permutation]
return shuffled_dataset, shuffled_labels
dataset, labels = randomize(np.array(dataset), np.array(labels))
valid_dataset = dataset[:10000]
valid_labels = labels[:10000]
test_dataset = dataset[10000:20000]
test_labels = labels[10000:20000]
train_dataset = dataset[20000:]
train_labels = labels[20000:]
sanitized_save = {
'train_dataset': train_dataset,
'train_labels': train_labels,
'valid_dataset': valid_dataset,
'valid_labels': valid_labels,
'test_dataset': test_dataset,
'test_labels': test_labels,
}
pickle_file = 'sanitized_notMNIST.pickle'
try:
f = open(pickle_file, 'wb')
pickle.dump(sanitized_save, f, pickle.HIGHEST_PROTOCOL)
f.close()
except Exception as e:
print('Unable to save data to', pickle_file, ':', e)
raise
###Output
_____no_output_____
###Markdown
---Problem 6---------Let's get an idea of what an off-the-shelf classifier can give you on this data. It's always good to check that there is something to learn, and that it's a problem that is not so trivial that a canned solution solves it.Train a simple model on this data using 50, 100, 1000 and 5000 training samples. Hint: you can use the LogisticRegression model from sklearn.linear_model.Optional question: train an off-the-shelf model on all the data!---
###Code
from sklearn import linear_model
from six.moves import cPickle as pickle
def randomize(dataset, labels):
permutation = np.random.permutation(labels.shape[0])
shuffled_dataset = dataset[permutation,:]
shuffled_labels = labels[permutation]
return shuffled_dataset, shuffled_labels
model = linear_model.LogisticRegression()
with open("notMNIST.pickle", "rb") as f:
overlapped_dataset = pickle.load(f)
train_dataset = [x.flatten() for x in overlapped_dataset['train_dataset']]
train_labels = overlapped_dataset['train_labels']
test_dataset = [x.flatten() for x in overlapped_dataset['test_dataset']]
test_labels = overlapped_dataset['test_labels']
train_dataset, train_labels = randomize(np.array(train_dataset), np.array(train_labels))
test_dataset, test_labels = randomize(np.array(test_dataset), np.array(test_labels))
# fit() returns the estimator itself, so each sample size needs its own
# LogisticRegression instance; reusing `model` would leave all four names
# pointing at the final 5000-sample fit.
model_50 = linear_model.LogisticRegression().fit(train_dataset[:50], train_labels[:50])
model_100 = linear_model.LogisticRegression().fit(train_dataset[:100], train_labels[:100])
model_1000 = linear_model.LogisticRegression().fit(train_dataset[:1000], train_labels[:1000])
model_5000 = linear_model.LogisticRegression().fit(train_dataset[:5000], train_labels[:5000])
print("score 50 data samples " + str(model_50.score(test_dataset, test_labels)))
print("score 100 data samples " + str(model_100.score(test_dataset, test_labels)))
print("score 1000 data samples " + str(model_1000.score(test_dataset, test_labels)))
print("score 5000 data samples " + str(model_5000.score(test_dataset, test_labels)))
print("Done.")
with open("sanitized_notMNIST.pickle", "rb") as f:
sanitized_dataset = pickle.load(f)
train_dataset = [x.flatten() for x in sanitized_dataset['train_dataset']]
train_labels = sanitized_dataset['train_labels']
test_dataset = [x.flatten() for x in sanitized_dataset['test_dataset']]
test_labels = sanitized_dataset['test_labels']
train_dataset, train_labels = randomize(np.array(train_dataset), np.array(train_labels))
test_dataset, test_labels = randomize(np.array(test_dataset), np.array(test_labels))
# Again use a fresh estimator per sample size so the fits are independent.
model_50 = linear_model.LogisticRegression().fit(train_dataset[:50], train_labels[:50])
model_100 = linear_model.LogisticRegression().fit(train_dataset[:100], train_labels[:100])
model_1000 = linear_model.LogisticRegression().fit(train_dataset[:1000], train_labels[:1000])
model_5000 = linear_model.LogisticRegression().fit(train_dataset[:5000], train_labels[:5000])
print("sanitized score 50 data samples " + str(model_50.score(test_dataset, test_labels)))
print("sanitized score 100 data samples " + str(model_100.score(test_dataset, test_labels)))
print("sanitized score 1000 data samples " + str(model_1000.score(test_dataset, test_labels)))
print("sanitized score 5000 data samples " + str(model_5000.score(test_dataset, test_labels)))
print("Done.")
###Output
_____no_output_____ |
08_Average Brightness Feature Extraction/Average Brightness.ipynb | ###Markdown
Day and Night Image Classifier---The day/night image dataset consists of 200 RGB color images in two categories: day and night. There are equal numbers of each example: 100 day images and 100 night images.We'd like to build a classifier that can accurately label these images as day or night, and that relies on finding distinguishing features between the two types of images!*Note: All images come from the [AMOS dataset](http://cs.uky.edu/~jacobs/datasets/amos/) (Archive of Many Outdoor Scenes).* Import resourcesBefore you get started on the project code, import the libraries and resources that you'll need.
###Code
import cv2 # computer vision library
import helpers
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.image as mpimg
%matplotlib inline
###Output
_____no_output_____
###Markdown
Training and Testing DataThe 200 day/night images are separated into training and testing datasets. * 60% of these images are training images, for you to use as you create a classifier.* 40% are test images, which will be used to test the accuracy of your classifier.First, we set some variables to keep track of where our images are stored: image_dir_training: the directory where our training image data is stored image_dir_test: the directory where our test image data is stored
###Code
# Image data directories
image_dir_training = "../00_Data/day_night_images/training/"
image_dir_test = "../00_Data/day_night_images/test/"
###Output
_____no_output_____
###Markdown
Load the datasetsThese first few lines of code will load the training day/night images and store all of them in a variable, `IMAGE_LIST`. This list contains the images and their associated label ("day" or "night"). For example, the first image-label pair in `IMAGE_LIST` can be accessed by index: ``` IMAGE_LIST[0][:]```.
###Code
# Using the load_dataset function in helpers.py
# Load training data
IMAGE_LIST = helpers.load_dataset(image_dir_training)
###Output
_____no_output_____
###Markdown
Construct a `STANDARDIZED_LIST` of input images and output labels.This function takes in a list of image-label pairs and outputs a **standardized** list of resized images and numerical labels.
###Code
# Standardize all training images
STANDARDIZED_LIST = helpers.standardize(IMAGE_LIST)
###Output
_____no_output_____
###Markdown
Visualize the standardized dataDisplay a standardized image from STANDARDIZED_LIST.
###Code
# Display a standardized image and its label
# Select an image by index
image_num = 0
selected_image = STANDARDIZED_LIST[image_num][0]
selected_label = STANDARDIZED_LIST[image_num][1]
# Display image and data about it
plt.imshow(selected_image)
print("Shape: "+str(selected_image.shape))
print("Label [1 = day, 0 = night]: " + str(selected_label))
###Output
Shape: (600, 1100, 3)
Label [1 = day, 0 = night]: 1
###Markdown
Feature ExtractionCreate a feature that represents the brightness in an image. We'll be extracting the **average brightness** using HSV colorspace. Specifically, we'll use the V channel (a measure of brightness), add up the pixel values in the V channel, then divide that sum by the area of the image to get the average Value of the image. RGB to HSV conversionBelow, a test image is converted from RGB to HSV colorspace and each component is displayed in an image.
###Code
# Convert and image to HSV colorspace
# Visualize the individual color channels
image_num = 0
test_im = STANDARDIZED_LIST[image_num][0]
test_label = STANDARDIZED_LIST[image_num][1]
# Convert to HSV
hsv = cv2.cvtColor(test_im, cv2.COLOR_RGB2HSV)
# Print image label
print('Label: ' + str(test_label))
# HSV channels
h = hsv[:,:,0]
s = hsv[:,:,1]
v = hsv[:,:,2]
# Plot the original image and the three channels
f, (ax1, ax2, ax3, ax4) = plt.subplots(1, 4, figsize=(20,10))
ax1.set_title('Standardized image')
ax1.imshow(test_im)
ax2.set_title('H channel')
ax2.imshow(h, cmap='gray')
ax3.set_title('S channel')
ax3.imshow(s, cmap='gray')
ax4.set_title('V channel')
ax4.imshow(v, cmap='gray')
###Output
Label: 1
###Markdown
--- Find the average brightness using the V channelThis function takes in a **standardized** RGB image and returns a feature (a single value) that represent the average level of brightness in the image. We'll use this value to classify the image as day or night.
###Code
# Find the average Value or brightness of an image
def avg_brightness(rgb_image):
# Convert image to HSV
hsv = cv2.cvtColor(rgb_image, cv2.COLOR_RGB2HSV)
# Add up all the pixel values in the V channel
sum_brightness = np.sum(hsv[:,:,2])
    # Average brightness: divide the summed V-channel values by the image area
    # (number of pixels = height * width).
    avg = sum_brightness / float(rgb_image.shape[0] * rgb_image.shape[1])
return avg
# Testing average brightness levels
# Look at a number of different day and night images and think about
# what average brightness value separates the two types of images
# As an example, a "night" image is loaded in and its avg brightness is displayed
image_num = 190
test_im = STANDARDIZED_LIST[image_num][0]
avg = avg_brightness(test_im)
print('Avg brightness: ' + str(avg))
plt.imshow(test_im)
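# A possible next step (not part of the original notebook): classify day vs.
# night by thresholding the average brightness. The threshold of 100 below is
# an illustrative assumption that would need tuning on the training images.
def estimate_label(rgb_image, threshold=100):
    # Predict 1 (day) when the average brightness exceeds the threshold,
    # otherwise predict 0 (night).
    return 1 if avg_brightness(rgb_image) > threshold else 0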
###Output
_____no_output_____ |
RECOMMENDATION SYSTEMS LOKESH DUVVURU PROJECT (1).ipynb | ###Markdown
RECOMMENDATION SYSTEMS PROJECT BY DUVVURU LOKESH DATASET: ELECTRONICS DATASET FROM Amazon reviews data
###Code
# importing necessary libraries
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
from surprise import KNNWithMeans
from surprise.model_selection import train_test_split
from surprise import Reader
from surprise import Dataset
from surprise import accuracy
import os
from collections import defaultdict
from surprise import SVD
from sklearn.decomposition import TruncatedSVD
###Output
_____no_output_____
###Markdown
1-- READ AND EXPLORE THE DATASET
###Code
#reading the data and re-naming the columns
data = pd.read_csv("ratings_Electronics.csv" ,names=['userId', 'productId','Rating','timestamp'])
#head of the data
data.head()
# shape of the data
data.shape
###Output
_____no_output_____
###Markdown
There are 7,824,482 rows and 4 columns.
###Code
# 5 point summary
data.describe()
# No of unique users
len(np.unique(data.userId))
###Output
_____no_output_____
###Markdown
There are 4,201,696 unique users who rated at least one product.
###Code
# No of unique products
len(np.unique(data.productId))
###Output
_____no_output_____
###Markdown
There are 476,002 unique products in the dataset.
###Code
# dropping the time stamp column
df=data.drop(['timestamp'], axis = 1)
###Output
_____no_output_____
###Markdown
The timestamp column is dropped since it is not needed for this analysis.
###Code
# head of the new dataset
df.head()
#Info of new dataset
df.info()
df.isna().apply(pd.value_counts) #checking the presence of missing values
###Output
_____no_output_____
###Markdown
No missing values
###Code
# countplot
sns.countplot(data=df , x='Rating')
plt.show()
###Output
_____no_output_____
###Markdown
The count plot shows that a rating of 5 is by far the most common. 2-- TAKE SUBSET OF DATA
###Code
# NO of ratings given by each user
no_of_rated_products_per_user = df.groupby(by='userId')['Rating'].count().sort_values(ascending=False)
no_of_rated_products_per_user.head()
# NO of users who rated 75 or more products
sum(no_of_rated_products_per_user >= 75)
###Output
_____no_output_____
###Markdown
582 users rated at least 75 products
###Code
# creating subset of of original data
e_df=df.groupby("productId").filter(lambda x:x['Rating'].count() >=75)
###Output
_____no_output_____
###Markdown
Here I am creating a subset containing only products that received at least 75 ratings (note the filter is on productId, not userId).
###Code
#Head of subset data
e_df.head()
e_df.shape
###Output
_____no_output_____
###Markdown
4-- POPULARITY RECOMMENDER MODEL
###Code
e_df.groupby('productId')['Rating'].mean().head()
###Output
_____no_output_____
###Markdown
The mean rating for each product is obtained by grouping on productId and averaging the Rating column.
###Code
e_df.groupby('productId')['Rating'].mean().sort_values(ascending=False).head()
###Output
_____no_output_____
###Markdown
Mean product ratings sorted in descending order; the most highly rated products appear at the top.
###Code
e_df.groupby('productId')['Rating'].count().sort_values(ascending=False).head()
###Output
_____no_output_____
###Markdown
Sorting by rating count puts the most-rated products at the top.
###Code
ratings_mean_count = pd.DataFrame(e_df.groupby('productId')['Rating'].mean())
###Output
_____no_output_____
###Markdown
A new dataframe, ratings_mean_count, is created to hold the mean rating of each product.
###Code
ratings_mean_count['Rating_counts'] = pd.DataFrame(e_df.groupby('productId')['Rating'].count())
###Output
_____no_output_____
###Markdown
A Rating_counts column is added to the ratings_mean_count dataframe.
###Code
ratings_mean_count.head()
###Output
_____no_output_____
###Markdown
Head of the ratings_mean_count dataframe is shown above
###Code
ratings_mean_count.sort_values(by='Rating_counts', ascending=False).head(10)
###Output
_____no_output_____
###Markdown
Sorting ratings_mean_count by Rating_counts in descending order surfaces the most popular products.
###Code
popular_products = pd.DataFrame(e_df.groupby('productId')['Rating'].count())
most_popular = popular_products.sort_values('Rating', ascending=False)
most_popular.head(15).plot(kind = "bar")
###Output
_____no_output_____
###Markdown
The bar chart above shows the top 15 most-rated products, which is what the popularity recommender suggests to every user. 3-- SPLITTING THE DATA 70/30 RATIO
###Code
#Reading the dataset
reader = Reader(rating_scale=(1, 5))
ee_df = Dataset.load_from_df(e_df,reader)
trainset, testset = train_test_split(ee_df, test_size=0.3,random_state=10)
###Output
_____no_output_____
###Markdown
5 & 7-- COLLABORATIVE FILTERING MODEL USING K=5
###Code
# Fitting the data with k = 5
e_algo = KNNWithMeans(k=5, sim_options={'name': 'pearson_baseline', 'user_based': False})
#Training the dataset
e_algo.fit(trainset)
#Test set
test_pred = e_algo.test(testset)
#Test predictions
test_pred
###Output
_____no_output_____
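###Markdown
To turn these raw predictions into per-user top-N recommendation lists, the standard recipe from the surprise documentation can be used; below is a minimal sketch, where n=5 is an assumed choice.
###Code
from collections import defaultdict

def get_top_n(predictions, n=5):
    # Group estimated ratings by user, then keep the n highest per user.
    top_n = defaultdict(list)
    for uid, iid, true_r, est, _ in predictions:
        top_n[uid].append((iid, est))
    for uid, user_ratings in top_n.items():
        user_ratings.sort(key=lambda x: x[1], reverse=True)
        top_n[uid] = user_ratings[:n]
    return top_n

# top_5 = get_top_n(test_pred, n=5)
###Output
_____no_output_____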
###Markdown
Predicted ratings for the held-out test set are shown above; with k=5, each estimate is built from the five most similar items. 6-- RMSE VALUE
###Code
print("Item-Item collobarative model : Test Set")
accuracy.rmse(test_pred, verbose=True)
###Output
Item-Item collaborative model : Test Set
RMSE: 1.3313
###Markdown
The RMSE is about 1.33, which is acceptable for a first item-based model. MODEL-BASED COLLABORATIVE FILTERING TO GIVE TOP 5 PRODUCT RECOMMENDATIONS
###Code
# pivot table built from the first 50,000 ratings (kept small for memory)
et_df=e_df.head(50000)
ratings_matrix = et_df.pivot_table(values='Rating', index='userId', columns='productId', fill_value=0)
ratings_matrix.head()
#Shape of ratings_matrix
ratings_matrix.shape
# Transpose of ratings_matrix
X = ratings_matrix.T
X.head()
#shape of X matrix
X.shape
#Using truncated SVD to find decomposed matrix
SVD = TruncatedSVD(n_components=10)
decomposed_matrix = SVD.fit_transform(X)
decomposed_matrix.shape
decomposed_matrix
correlation_matrix = np.corrcoef(decomposed_matrix)
correlation_matrix.shape
correlation_matrix
X.index[119]
# index of the product purchased by customer
i = "B00004UE2R"
product_names = list(X.index)
product_ID = product_names.index(i)
product_ID
# Correlation matrix
correlation_product_ID = correlation_matrix[product_ID]
correlation_product_ID.shape
Recommend = list(X.index[correlation_product_ID > 0.45])
# Removes the item already bought by the customer
Recommend.remove(i)
Recommend[0:5]
###Output
_____no_output_____ |
9-Coordinates-Projections-and-Grids.ipynb | ###Markdown
Coordinates, Projections, and Grids Synopsis- Review of coordinate systems and projections of the sphere onto the 2d plane- Discussion about lengths and areas in finite volume grids used by ESMs
###Code
%run -i figure-scripts/init.py
###Output
_____no_output_____
###Markdown
Coordinate systemsA coordinate system allows us to (uniquely) specify points in space. You should be familiar with the Cartesian 2d coordinate system where, by convention, (x,y) are the normal distances, measured in the same units, from two perpendicular lines through the origin.We live in a 3d world (referring to spatial dimensions) and so require 3 numerical values to label a point in space. In Cartesian coordinates they might be (x,y,z) referenced to the center of the Earth, with z being height above the equatorial plane (positive in the direction of the North pole), y being the distance from a plane through the poles and a reference meridian, and x the distance from the plane perpendicular to both other planes. The equations of motion used by many models are often derived starting in this Cartesian coordinate system. However, these Cartesian coordinates are inconvenient to use in practice because we live on the surface of a sphere and "up", as defined by gravity, is sometimes increasing z (at the North Pole), and sometimes changing x or y (at the Equator). Spherical coordinatesIn ESMs we typically use spherical coordinates, $\lambda$, $\phi$ and $r$, where $\lambda$ is "longitude", a rotation angle eastward around the poles starting at a reference meridian; $\phi$ is "latitude", an elevation angle from the Equatorial plane (positive in the Northern hemisphere); and $r$ is the radial distance from the center of the Earth. $\lambda,\phi,r$ are related to Cartesian $x,y,z$ by some simple relations:$$\begin{pmatrix}x \\ y \\ z\end{pmatrix} =\begin{pmatrix}r \cos \phi \cos \lambda \\ r \cos \phi \sin \lambda \\ r \sin \phi\end{pmatrix}$$Note that $r, x, y, z$ are all in the same units (e.g. kilometers or meters) and $\lambda,\phi$ are angles usually given in degrees or radians.__Coordinate singularities:__ At the North and South poles of the coordinate system, $\phi = \pm \pi/2 = \pm 90^\circ$, all values of longitude refer to the same point. There is no "east" when you are positioned at the pole. This has many consequences, but one of the more fundamental is that spherical coordinates are not a good coordinate system with which to design a discretization of the spherical domain.__Periodic coordinates:__ While a tuple of longitude, latitude and radius unambiguously defines a point in space, given a point in space there are multiple valid longitudes that refer to the same point. Longitude is cyclic ($\pm360^\circ$ is equivalent to $0^\circ$). This can cause problems in practice, particularly for plotting spherical data, for which effort is sometimes needed to handle the periodicity. Geographic coordinatesWe live on the surface of the Earth, and to precisely refer to points near the Earth's surface requires a properly defined geographic coordinate system. A common choice of coordinates is latitude, longitude and altitude, where altitude is height above a particular surface. Unfortunately the Earth is not spherical, and that reference surface is better approximated as an ellipsoid.In order to be unambiguous about the definition of coordinates, map-makers choose a reference ellipsoid with an agreed-upon scale and orientation. They then choose the most appropriate mapping of the spherical coordinate system onto that ellipsoid, called a _geodetic datum_. A widely used global datum is the [World Geodetic System](https://en.wikipedia.org/wiki/World_Geodetic_System) (WGS 84), the default datum used for the Global Positioning System.
When you are given a latitude-longitude pair of values, strictly speaking without the geodetic datum there is some ambiguity about the actual physical point being referred to. For ESMs, the datum is rarely provided, and this is because ESMs almost universally approximate the Earth as a sphere and use spherical coordinates for referencing locations. This means some approximation is required when comparing real-world positions and model positions.The latitude and longitude using these horizontal datums are the spherical coordinates of the point on an ellipsoid. If you draw a straight line from the point on the ellipsoid to the center, it passes through the points with the same latitude and longitude on all co-centered spheres.Different datums have different reference points and scales, and so longitude and latitude can differ between geodetic datums. ProjectionsTo view data covering the surface of a sphere, or the Earth, we have to project that 3d surface into 2d. Imagine peeling the rind off an orange in one piece and then trying to flatten it onto a table top; the curvature in the peel requires you to distort the rind or make cuts in order to flatten it fully. This is the function of a map projection, and distortion is inevitable. Some projections preserve properties such as relative angles between lines, or relative area, but there is no projection of the surface of the sphere that can avoid distortion of some form.A projection maps the longitude and latitude of spherical coordinates into a new coordinate system. Very confusingly, sometimes the projection coordinates will be called longitude and latitude too! The projection coordinates are meaningless unless you know what the projection is, so you will often find a reference to the projection in the meta-data of coordinates; it means the longitude and latitude are not spherical coordinates but projection coordinates.
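As a concrete illustration of the spherical-to-Cartesian relations above, here is a minimal sketch; the function name, the degree inputs, and the default Earth radius are assumptions made for the example.
###Code
import numpy as np

def spherical_to_cartesian(lon_deg, lat_deg, radius=6.371e6):
    # lon_deg, lat_deg are spherical longitude and latitude in degrees;
    # radius defaults to an assumed mean Earth radius in meters.
    lam, phi = np.radians(lon_deg), np.radians(lat_deg)
    x = radius * np.cos(phi) * np.cos(lam)
    y = radius * np.cos(phi) * np.sin(lam)
    z = radius * np.sin(phi)
    return x, y, z

# The North pole maps (up to rounding) to (0, 0, R) for any longitude,
# illustrating the coordinate singularity discussed above:
# print(spherical_to_cartesian(42.0, 90.0))
###Output
_____no_output_____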
###Code
%run -i figure-scripts/some-projections.py
###Output
_____no_output_____
###Markdown
Figure 1: The colored circles in these plots are circles in the tangent plane to the sphere and projected onto the surface. The various projections can distort the circles. The circles are separated zonally or meridionally by 60$^\circ$. In 1a, a perspective image of the sphere, the circles appear non-circular because of the viewing angle. The blue circle appears circular because we are viewing it from directly overhead. The projection in 1b is the easy-to-use Plate-Carrée projection, a "lat-lon" plot, in which circles are stretched zonally with distance from the equator. 1d shows the Mercator projection in which circles remain circles but are expanded in size away from the equator. 1c shows the Robinson projection which compromises between the two. The purple dashed line is a straight line in latitude-longitude coordinates, and the yellow dashed line is a straight line in the Mercator coordinates. The cyan dashed line is a great arc, and is straight in the perspective view because we are viewing it from directly overhead.The two most useful projections are the equirectangular and Mercator projections. Equirectangular projectionThis is the simplest projection, sometimes thought of as a non-projection, which is incorrect. In general it takes the form$$\begin{align}x & = R \left( \lambda - \lambda_0 \right) \cos \phi_0 \\y & = R \left( \phi - \phi_0 \right)\end{align}$$The origin of the plot, $(x,y)=(0,0)$, corresponds to $(\lambda,\phi)=(\lambda_0,\phi_0)$. The $\cos \phi_0$ term is a constant, and the most common choice of $\phi_0=0$ gives the plate carrée projection, which means "flat square" in French. In this case, the projection is simply$$\begin{align}x & = R \left( \lambda - \lambda_0 \right) \\y & = R \phi\end{align}$$Distances in the y-direction are proportional to distances in the meridional direction on the sphere, but distances in the x-direction are stretched zonally, more so further from the equator. This is apparent in the orange and green circles in figure 1b, where the heights of the loops are the same as the circles on the equator but the widths are markedly increased.In the cartopy package, this projection is called "Plate-Carrée". Other names for this projection are the equidistant cylindrical projection and the geographic projection. See https://en.wikipedia.org/wiki/Equirectangular_projection. Mercator projectionThe Mercator projection has the same stretching in the x-direction as the equirectangular projection but, in order to preserve shape, it also stretches the y-direction so that infinitesimal elements are stretched isotropically (the y-stretching is equal to the x-stretching).$$\begin{align}x & = R \left( \lambda - \lambda_0 \right) \\y & = R \tanh^{-1} \left( \sin \phi \right)\end{align}$$At the polar singularities the x-stretching is infinite, so y becomes infinite and the Mercator projection can never show the poles. See https://en.wikipedia.org/wiki/Mercator_projection. LinesThe length of a line between two points is a function of the path taken. On the surface of a sphere, the shortest path between two given points is a great arc. A great arc does not appear straight in many projections. Unfortunately, many grid calculations use a great arc for the length of a line between nodes on a model grid, which can be inconsistent with the constraints or assumptions about the grid.The dashed curves in figure 1 are "straight" lines between two points in various projections. The cyan dashed curve is a great arc.
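The two projections can be coded directly from the formulas above; the following is a minimal sketch, where the function names and the degree inputs are assumptions for the example.
###Code
import numpy as np

def plate_carree(lon_deg, lat_deg, lon0_deg=0.0, radius=1.0):
    # Equirectangular projection with phi_0 = 0 (plate carree):
    # x is proportional to longitude and y to latitude.
    return (radius * np.radians(lon_deg - lon0_deg),
            radius * np.radians(lat_deg))

def mercator(lon_deg, lat_deg, lon0_deg=0.0, radius=1.0):
    # Same x as plate carree, but y is stretched so that shapes are
    # preserved; y diverges as latitude approaches +/- 90 degrees.
    phi = np.radians(lat_deg)
    return (radius * np.radians(lon_deg - lon0_deg),
            radius * np.arctanh(np.sin(phi)))
###Output
_____no_output_____
###Markdown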
The purple dashed curve is a straight line in the Plate-Carrée projection (latitude-longitude space) and the yellow dashed curve is a straight line in the Mercator projection. All are curved in most other projections. To describe a straight line in some projection, _the projection must be known_, irrespective of the coordinate system defining the end points. That is, we can define the end points of the line in latitude-longitude coordinates but say a line is straight in the Mercator projection, and by so doing unambiguously define that line.In the Mercator projection, the length of a straight line is $\frac{R}{\cos \alpha} \Delta \phi$, where $\tan \alpha = \frac{\Delta x}{\Delta y}$ is the tangent of the line's constant bearing. ESM gridsMany ESMs use quadrilateral grids to discretize the surface of the sphere. The following discussion also applies to fully unstructured grids built from polygons, but here we use quadrilateral grids for simplicity. There are also grids that have cuts and joins, but here we'll stick to space-filling grids that are logically rectangular, meaning they can be stored in rectangular arrays in computer memory and referenced with a pair of indices ($i,j$ by convention).A quadrilateral grid is a rectangular mesh of adjacent quadrilateral cells that share edges and vertices. Although the mesh and the cells are logically rectangular, they might be physically curvilinear. From the grid we require positions of nodes, distances along edges, and areas of cells.If we choose a coordinate system with which to record the locations of mesh nodes, say spherical latitude-longitude with appropriate definitions, then we can unambiguously define those node locations. We could describe the exact same grid using a different coordinate system, say 3D Cartesian coordinates. The physical positions of the nodes are part of what defines the grid, but the choice of coordinates with which we describe those positions does not change the grid.Each edge of a cell is a curve between two adjacent nodes, but the particular path of the curve has to be defined. Different paths will have different lengths. Similarly, the particular paths of the cell edges will determine the cell area. Thus the path of the cell edges is a fundamental component of a model grid, needed for calculating the lengths and areas on a grid. Simple spherical coordinate gridBefore we discuss the best choice for defining a curve between points, let's briefly define a simple spherical-coordinate grid.The mesh is formed of lines of constant longitude and lines of constant latitude.Let $i \in 0,1,\ldots, n_i$ and $j \in 0,1,\ldots, n_j$; then node $i,j$ is at longitude $\lambda_i$ and latitude $\phi_j$, where $\lambda_i=\lambda_0 + i \Delta \lambda$ and $\phi_j=\phi_0 + j \Delta \phi$.Here, $\Delta \lambda$ and $\Delta \phi$ are grid spacings. In practice, these can be smooth functions of $i$ and $j$ respectively, but here we treat them as constant.An example simple spherical grid is shown below. The red dots are the nodes of the mesh with positions $\lambda_i,\phi_j$. The dashed lines are the cell edges that form a regular net. Notice that in the Plate-Carrée projection the grid is regular because the grid spacing is constant in longitude-latitude coordinates.The lengths and areas of the grid are measured on the surface of the sphere. We defined the edges to be either lines of constant longitude or latitude. Using spherical geometry, the length of a meridionally-oriented (constant longitude) cell edge is $R \Delta \phi$.
For a zonally-oriented edge at constant latitude $\phi_j$, the length is $R \Delta \lambda \cos \phi_j$. The area of a cell labelled $i+\frac{1}{2},j+\frac{1}{2}$ bounded by four edges is $R^2 \Delta \lambda \left( \sin \phi_{j+1} - \sin \phi_j \right)$.The metric factors for this grid are the same as for a Plate-Carrée projection because we are defining the paths of the cell edges to be straight in the Plate-Carrée projection. The use of the Plate-Carrée coordinates for position, namely longitude and latitude, is a happy coincidence which means that everything, positions and metrics alike, is defined by one projection.
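A minimal numerical sketch of these metric formulas (the radius and grid spacings below are assumed illustrative values, not taken from the figure script):
###Code
import numpy as np

R = 6.371e6                                    # sphere radius in metres (assumed)
dlam = np.radians(3.0)                         # zonal grid spacing, Delta lambda
dphi = np.radians(3.0)                         # meridional grid spacing, Delta phi
phi = np.radians(np.arange(-90.0, 90.1, 3.0))  # node latitudes phi_j

# Meridional (constant-longitude) edge length: the same at every latitude
meridional_edge = R * dphi

# Zonal (constant-latitude) edge lengths shrink with cos(phi_j)
zonal_edges = R * dlam * np.cos(phi)

# Spherical cell areas between consecutive latitude rows
cell_areas = R**2 * dlam * (np.sin(phi[1:]) - np.sin(phi[:-1]))
###Output
_____no_output_____
###Markdown
The next cell draws the example grid described above.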
###Code
%run -i figure-scripts/simple-spherical-grid.py
###Output
_____no_output_____ |
doc/source/quickstart/6)_Volume_Rendering.ipynb | ###Markdown
If we want to apply a clipping, we can specify the `sigma_clip`. This will clip the upper bounds to this value times the standard deviation of the values in the image array.
###Code
sc.show(sigma_clip=4)
###Output
_____no_output_____
###Markdown
There are several other options we can specify. Note that here we have turned on the use of ghost zones, shortened the data interval for the transfer function, and widened our Gaussian layers.
###Code
sc = yt.create_scene(ds)
sc.camera.set_width(ds.quan(20, 'kpc'))
source = sc.sources['source_00']
source.set_fields('density', no_ghost=False)
tf = yt.ColorTransferFunction((-28, -25))
tf.add_layers(4, w=0.03)
source.set_transfer_function(tf)
sc.show(sigma_clip=4.0)
###Output
_____no_output_____
###Markdown
A Brief Demo of Volume RenderingThis shows a small amount of volume rendering. Really, just enough to get your feet wet!
###Code
import yt
ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030")
###Output
_____no_output_____
###Markdown
To create a volume rendering, we need a camera and a transfer function. We'll use the `ColorTransferFunction`, which accepts (in log space) the minimum and maximum bounds of our transfer function. This means behavior for data outside these values is undefined.We then add on "layers" like an onion. This function can accept a width (here specified) in data units, and also a color map. Here we add on four layers.Finally, we create a camera. The focal point is `[0.5, 0.5, 0.5]`, the width is 20 kpc (including front-to-back integration) and we specify a transfer function. Once we've done that, we call `show` to actually cast our rays and display them inline.
###Code
sc = yt.create_scene(ds)
sc.camera.set_width(ds.quan(20, 'kpc'))
source = sc.sources['source_00']
tf = yt.ColorTransferFunction((-28, -24))
tf.add_layers(4, w=0.01)
source.set_transfer_function(tf)
sc.show()
###Output
_____no_output_____
###Markdown
A Brief Demo of Volume RenderingThis shows a small amount of volume rendering. Really, just enough to get your feet wet!
###Code
import yt
ds = yt.load_sample("IsolatedGalaxy")
###Output
_____no_output_____
###Markdown
To create a volume rendering, we need a camera and a transfer function. We'll use the `ColorTransferFunction`, which accepts (in log space) the minimum and maximum bounds of our transfer function. This means behavior for data outside these values is undefined.We then add on "layers" like an onion. This function can accept a width (here specified) in data units, and also a color map. Here we add on four layers.Finally, we create a camera. The focal point is `[0.5, 0.5, 0.5]`, the width is 20 kpc (including front-to-back integration) and we specify a transfer function. Once we've done that, we call `show` to actually cast our rays and display them inline.
###Code
sc = yt.create_scene(ds)
sc.camera.set_width(ds.quan(20, "kpc"))
source = sc.sources["source_00"]
tf = yt.ColorTransferFunction((-28, -24))
tf.add_layers(4, w=0.01)
source.set_transfer_function(tf)
sc.show()
###Output
_____no_output_____
###Markdown
If we want to apply a clipping, we can specify the `sigma_clip`. This will clip the upper bounds to this value times the standard deviation of the values in the image array.
###Code
sc.show(sigma_clip=4)
###Output
_____no_output_____
###Markdown
There are several other options we can specify. Note that here we have turned on the use of ghost zones, shortened the data interval for the transfer function, and widened our Gaussian layers.
###Code
sc = yt.create_scene(ds)
sc.camera.set_width(ds.quan(20, "kpc"))
source = sc.sources["source_00"]
source.field = "density"
tf = yt.ColorTransferFunction((-28, -25))
tf.add_layers(4, w=0.03)
source.transfer_function = tf
sc.show(sigma_clip=4.0)
###Output
_____no_output_____
###Markdown
A Brief Demo of Volume RenderingThis shows a small amount of volume rendering. Really, just enough to get your feet wet!
###Code
import yt
ds = yt.load_sample("IsolatedGalaxy")
###Output
_____no_output_____
###Markdown
To create a volume rendering, we need a camera and a transfer function. We'll use the `ColorTransferFunction`, which accepts (in log space) the minimum and maximum bounds of our transfer function. This means behavior for data outside these values is undefined.We then add on "layers" like an onion. This function can accept a width (here specified) in data units, and also a color map. Here we add on four layers.Finally, we create a camera. The focal point is `[0.5, 0.5, 0.5]`, the width is 20 kpc (including front-to-back integration) and we specify a transfer function. Once we've done that, we call `show` to actually cast our rays and display them inline.
###Code
sc = yt.create_scene(ds)
sc.camera.set_width(ds.quan(20, 'kpc'))
source = sc.sources['source_00']
tf = yt.ColorTransferFunction((-28, -24))
tf.add_layers(4, w=0.01)
source.set_transfer_function(tf)
sc.show()
###Output
_____no_output_____
###Markdown
If we want to apply a clipping, we can specify the `sigma_clip`. This will clip the upper bounds to this value times the standard deviation of the values in the image array.
###Code
sc.show(sigma_clip=4)
###Output
_____no_output_____
###Markdown
There are several other options we can specify. Note that here we have turned on the use of ghost zones, shortened the data interval for the transfer function, and widened our Gaussian layers.
###Code
sc = yt.create_scene(ds)
sc.camera.set_width(ds.quan(20, 'kpc'))
source = sc.sources['source_00']
source.field = 'density'
tf = yt.ColorTransferFunction((-28, -25))
tf.add_layers(4, w=0.03)
source.transfer_function = tf
sc.show(sigma_clip=4.0)
###Output
_____no_output_____
###Markdown
A Brief Demo of Volume RenderingThis shows a small amount of volume rendering. Really, just enough to get your feet wet!
###Code
import yt
ds = yt.load("IsolatedGalaxy/galaxy0030/galaxy0030")
###Output
_____no_output_____
###Markdown
To create a volume rendering, we need a camera and a transfer function. We'll use the `ColorTransferFunction`, which accepts (in log space) the minimum and maximum bounds of our transfer function. This means behavior for data outside these values is undefined.We then add on "layers" like an onion. This function can accept a width (here specified) in data units, and also a color map. Here we add on four layers.Finally, we create a camera. The focal point is `[0.5, 0.5, 0.5]`, the width is 20 kpc (including front-to-back integration) and we specify a transfer function. Once we've done that, we call `show` to actually cast our rays and display them inline.
###Code
sc = yt.create_scene(ds)
sc.camera.set_width(ds.quan(20, 'kpc'))
source = sc.sources['source_00']
tf = yt.ColorTransferFunction((-28, -24))
tf.add_layers(4, w=0.01)
source.set_transfer_function(tf)
sc.show()
###Output
_____no_output_____
###Markdown
If we want to apply a clipping, we can specify the `sigma_clip`. This will clip the upper bounds to this value times the standard deviation of the values in the image array.
###Code
sc.show(sigma_clip=4)
###Output
_____no_output_____
###Markdown
There are several other options we can specify. Note that here we have turned on the use of ghost zones, shortened the data interval for the transfer function, and widened our Gaussian layers.
###Code
sc = yt.create_scene(ds)
sc.camera.set_width(ds.quan(20, 'kpc'))
source = sc.sources['source_00']
source.field = 'density'
tf = yt.ColorTransferFunction((-28, -25))
tf.add_layers(4, w=0.03)
source.transfer_function = tf
sc.show(sigma_clip=4.0)
###Output
_____no_output_____ |
notebooks/single_window/.ipynb_checkpoints/1_make_noh_avg_fit_dcd-checkpoint.ipynb | ###Markdown
Step 1: Initialize
###Code
host = 'a_tract_21mer'
type_na = 'bdna+bdna'
n_bp = 21
begin_frame = 1
frame_num = 50000
agent = avg_dcd_noh.AvgcrddcdAgent(host, type_na, rootfolder)
###Output
/home/yizaochen/codes/dna_rna/all_systems/a_tract_21mer exists
/home/yizaochen/codes/dna_rna/all_systems/a_tract_21mer/bdna+bdna exists
/home/yizaochen/codes/dna_rna/all_systems/a_tract_21mer/bdna+bdna/input exists
/home/yizaochen/codes/dna_rna/all_systems/a_tract_21mer/bdna+bdna/input/allatoms exists
/home/yizaochen/codes/dna_rna/all_systems/a_tract_21mer/bdna+bdna/input/heavyatoms exists
/home/yizaochen/codes/dna_rna/all_systems/a_tract_21mer/bdna+bdna/charmm_inp exists
/home/yizaochen/codes/dna_rna/all_systems/a_tract_21mer/bdna+bdna/charmm_dat exists
/home/yizaochen/codes/dna_rna/all_systems/a_tract_21mer/bdna+bdna/make_crd exists
###Markdown
Copy central.xtc from simulation folder
###Code
xtc_0us_5us = path.join(simu_folder, host, type_na, 'data', 'roughtrj', '1000', f'{type_na}.nopbc.fit.1to50.1000.xtc')
central_xtc = path.join(agent.aa_folder, f'{type_na}.central.xtc')
copyfile(xtc_0us_5us, central_xtc)
print(f'cp {xtc_0us_5us} {central_xtc}')
###Output
cp /home/ytcdata/simulation/tat_21mer/bdna+bdna/data/roughtrj/1000/bdna+bdna.nopbc.fit.1to50.1000.xtc /home/yizaochen/codes/dna_rna/all_systems/tat_21mer/bdna+bdna/input/allatoms/bdna+bdna.central.xtc
###Markdown
Step 2: Prepare dcd and pdb
###Code
inish = path.join(na_mechfolder, 'shell_scripts', 'initialize_input.sh')
cmd = f'bash {inish} {rootfolder} {host} {type_na}'
print(cmd)
cmd = f'r 1-{n_bp}'
print(cmd)
# Manual Delete
aafolder = agent.aa_folder
single_na = d_single_na[type_na]
temp_pdb = path.join(aafolder, f'{single_na}1.central.pdb')
cmd = f'vim {temp_pdb}'
print(cmd)
temp_pdb = path.join(aafolder, f'{single_na}2.central.pdb')
cmd = f'vim {temp_pdb}'
print(cmd)
pdbcharmmsh = path.join(na_mechfolder, 'shell_scripts', 'pdb_gro2charmm.sh')
cmd = f'bash {pdbcharmmsh} {rootfolder} {host} {type_na} 1'
print(cmd)
cmd = f'bash {pdbcharmmsh} {rootfolder} {host} {type_na} 2'
print(cmd)
temp_pdb = path.join(agent.aa_folder, f'{type_na}.central.pdb')
temp_xtc = path.join(agent.aa_folder, f'{type_na}.central.xtc')
temp_dcd = path.join(agent.aa_folder, f'{type_na}.central.dcd')
cmd = f'{vmd} {temp_pdb} {temp_xtc}'
print(cmd)
cmd = f'animate write dcd {temp_dcd} beg 1 end 50001 waitfor all'
print(cmd)
cmd = f'{vmd} {temp_pdb} {temp_dcd}'
print(cmd)
###Output
/usr/local/bin/vmd /home/yizaochen/codes/dna_rna/all_systems/tat_21mer/bdna+bdna/input/allatoms/bdna+bdna.central.pdb /home/yizaochen/codes/dna_rna/all_systems/tat_21mer/bdna+bdna/input/allatoms/bdna+bdna.central.xtc
animate write dcd /home/yizaochen/codes/dna_rna/all_systems/tat_21mer/bdna+bdna/input/allatoms/bdna+bdna.central.dcd beg 1 end 50001 waitfor all
/usr/local/bin/vmd /home/yizaochen/codes/dna_rna/all_systems/tat_21mer/bdna+bdna/input/allatoms/bdna+bdna.central.pdb /home/yizaochen/codes/dna_rna/all_systems/tat_21mer/bdna+bdna/input/allatoms/bdna+bdna.central.dcd
###Markdown
Step 3: Make CRD (split two strands, then combine)
###Code
agent.make_crd_input(amber=True, firstter='amber_5ter', lastter='amber_3ter')
agent.make_crd()
# Reset resid for bdna2.1.pdb, if needed
execute = False
if execute:
offset = -21
agent.reset_na2_pdb_resid(offset)
###Output
/home/yizaochen/codes/dna_rna/all_systems/tat_21mer/bdna+bdna/make_crd/bdna2.1.pdb /home/yizaochen/codes/dna_rna/all_systems/tat_21mer/bdna+bdna/make_crd/bdna2.1.backup.pdb
Write PDB: /home/yizaochen/codes/dna_rna/all_systems/tat_21mer/bdna+bdna/make_crd/bdna2.1.pdb
Reset /home/yizaochen/codes/dna_rna/all_systems/tat_21mer/bdna+bdna/make_crd/bdna2.1.pdb resid by offset -21!
Check by...
vim /home/yizaochen/codes/dna_rna/all_systems/tat_21mer/bdna+bdna/make_crd/bdna2.1.pdb
###Markdown
Step 4: Make CRD and DCD without hydrogen atoms
###Code
agent.make_no_h_crd_input(amber=True, firstter='amber_5ter', lastter='amber_3ter')
agent.make_no_h_crd()
agent.make_no_h_dcd_input(amber=True, begin=begin_frame, frame_num=frame_num, firstter='amber_5ter', lastter='amber_3ter')
agent.make_no_h_dcd()
###Output
charmm< /home/yizaochen/codes/dna_rna/all_systems/tat_21mer/bdna+bdna/charmm_inp/write_no_h_dcd.inp > /home/yizaochen/codes/dna_rna/all_systems/tat_21mer/bdna+bdna/charmm_dat/write_no_h_dcd.dat
###Markdown
Step 5: Make average CRD and fit the no-H DCD to the average CRD
###Code
agent.make_avg_crd_input(amber=True, firstter='amber_5ter', lastter='amber_3ter')
agent.make_avg_crd()
agent.fit_dcd_to_avg_input(amber=True, begin=begin_frame, frame_num=frame_num, firstter='amber_5ter', lastter='amber_3ter')
agent.fit_dcd_to_avg()
###Output
charmm< /home/yizaochen/codes/dna_rna/all_systems/tat_21mer/bdna+bdna/charmm_inp/fit_dcd_to_avg.inp > /home/yizaochen/codes/dna_rna/all_systems/tat_21mer/bdna+bdna/charmm_dat/fit_dcd_to_avg.dat
###Markdown
Step 6: Check By VMD
###Code
crd = path.join(agent.heavy_folder, f'{type_na}.nohydrogen.avg.crd')
dcd = path.join(agent.heavy_folder, f'{type_na}.nohydrogen.fitavg.dcd')
print(f'vmd -cor {crd} {dcd}')
###Output
vmd -cor /home/yizaochen/codes/dna_rna/all_systems/tat_21mer/bdna+bdna/input/heavyatoms/bdna+bdna.nohydrogen.avg.crd /home/yizaochen/codes/dna_rna/all_systems/tat_21mer/bdna+bdna/input/heavyatoms/bdna+bdna.nohydrogen.fitavg.dcd
###Markdown
Additional Part: Copy required files to the all_systems folder
###Code
# Copy From simulation folder
simu_folder = '/home/yizaochen/simulation'
simu_datafolder = path.join(simu_folder, host, type_na, 'data')
inputfolder = path.join(rootfolder, host, type_na, 'input', 'allatoms')
old_f = path.join(simu_datafolder, 'gro', f'{type_na}.npt4.fit.gro')
new_f = path.join(inputfolder, f'{type_na}.npt4.all.gro')
copyfile(old_f, new_f)
print(f'cp {old_f} {new_f}')
old_f = path.join(simu_datafolder, 'roughtrj', '1000', f'{type_na}.nopbc.fit.1to10.1000.xtc')
new_f = path.join(inputfolder, f'{type_na}.all.xtc')
copyfile(old_f, new_f)
print(f'cp {old_f} {new_f}')
old_f = path.join(simu_folder, host, type_na, f'{type_na}.gro')
new_f = path.join(inputfolder, f'{type_na}.perfect.gro')
copyfile(old_f, new_f)
print(f'cp {old_f} {new_f}')
###Output
cp /home/yizaochen/simulation/gcgc_21mer/bdna+bdna/data/gro/bdna+bdna.npt4.fit.gro /home/yizaochen/codes/dna_rna/all_systems/gcgc_21mer/bdna+bdna/input/allatoms/bdna+bdna.npt4.all.gro
cp /home/yizaochen/simulation/gcgc_21mer/bdna+bdna/data/roughtrj/1000/bdna+bdna.nopbc.fit.1to10.1000.xtc /home/yizaochen/codes/dna_rna/all_systems/gcgc_21mer/bdna+bdna/input/allatoms/bdna+bdna.all.xtc
cp /home/yizaochen/simulation/gcgc_21mer/bdna+bdna/bdna+bdna.gro /home/yizaochen/codes/dna_rna/all_systems/gcgc_21mer/bdna+bdna/input/allatoms/bdna+bdna.perfect.gro
###Markdown
Reload Function
###Code
from imp import reload
reload(avg_dcd_noh)
###Output
_____no_output_____ |
model_pipeline/07_process_model_software_engineer_over_time.ipynb | ###Markdown
Model Parameters
###Code
parameters = {
"min_salary_records":100, # Filter out all jobs with less than specified salary records
"min_job_summaries":1000, # Filter out all jobs with less than specified job summaries
"min_ngram":2, # For TD-IDF vectorizer
"max_ngram":4, # For TD-IDF vectorizer
"min_df":0, # For TD-IDF vectorizer, ignore features in less than this number of documents
"train_test_split":0.05, # For train-test split
"random_state":1, # For train-test split
"alpha":0.1, # For Naive Bayes model
"num_skills":50, # Number of skill to show per job
}
###Output
_____no_output_____
###Markdown
Load Job Summaries
###Code
# Load resume data
data = pd.read_csv(directory+'02_resumes_work.csv')
data = data[data.cleaned_job_title == 'software engineer']
# Remove duplicate data
data = data[['cleaned_job_title','descript','from_year']].drop_duplicates()
data['range'] = 'none'
data.loc[data.from_year >= 2013, 'range'] = '2013-2018'
data.loc[(data.from_year >= 2008) & (data.from_year < 2013), 'range'] = '2008-2013'
data.loc[(data.from_year >= 2003) & (data.from_year < 2008), 'range'] = '2003-2008'
data.loc[(data.from_year >= 1998) & (data.from_year < 2003), 'range'] = '1998-2003'
data.loc[data.from_year < 1998, 'range'] = '1900-1998'
data.groupby('range').count()
###Output
_____no_output_____
###Markdown
Data Preprocessing: Unbalanced Classes
###Code
# Down sample the first model
# SMOTE up sample the second model
# Run the model on different periods of time (just for software engineers)
x_data = preprocess_list(data.descript)
y_labels = data.range
# Split the data into test and train datasets
X_train, X_test, y_train, y_test = train_test_split(x_data,
y_labels,
test_size=parameters['train_test_split'],
random_state=parameters['random_state'])
print("X_train: ",len(X_train))
print("X_test: ",len(X_test))
print("Start:", datetime.datetime.now())
# Train TF-IDF vectorizer model
vect = TfidfVectorizer(min_df=parameters['min_df'],
ngram_range=(parameters['min_ngram'], parameters['max_ngram'])
).fit(X_train)
X_train_vectorized = vect.transform(X_train)
print("End:", datetime.datetime.now())
print('Vocabulary len:', len(vect.get_feature_names()))
sm = SMOTE(kind='regular')
X_res, y_res = sm.fit_sample(X_train_vectorized, y_train)
temp_display = pd.DataFrame(y_res)
temp_display.columns = ['range']
temp_display['counter'] = 1
temp_display.groupby('range').count().reset_index()
###Output
_____no_output_____
###Markdown
Train Model
###Code
# Train Multinomial Naive Bayes model
model = MultinomialNB(alpha=parameters['alpha'])
model.fit(X_res, y_res)
y_pred = model.predict(vect.transform(X_test))
print('Accuracy: %.2f%%' % (accuracy_score(y_test, y_pred) * 100))
# predictions = pd.DataFrame(list(zip(y_test, y_pred)))
# predictions.columns=['actual','prediction']
# predictions['count']=1
# predictions.groupby(['actual','prediction']).count().reset_index().to_csv('most_confusion.csv')
print('f1_score: ', f1_score(y_test, y_pred, average="macro"))
print('precision_score: ', precision_score(y_test, y_pred, average="macro"))
print('recall_score: ', recall_score(y_test, y_pred, average="macro"))
precision, recall, fscore, support = score(y_test, y_pred)
'{:.1%}'.format(1/3.0)
metrics = pd.DataFrame(list(zip(model.classes_, precision, recall, fscore, support)))
metrics.columns = ['class','precision', 'recall', 'fscore', 'support']
metrics_samples = metrics.sort_values(by='fscore',ascending=False).head(5)
metrics_samples.precision = metrics_samples.precision.map(lambda x: '{:.2%}'.format(x))
metrics_samples.recall = metrics_samples.recall.map(lambda x: '{:.2%}'.format(x))
metrics_samples.fscore = metrics_samples.fscore.map(lambda x: '{:.2%}'.format(x))
metrics_samples.sort_values(by='fscore',ascending=True).to_csv('temp.csv')
metrics_samples
###Output
_____no_output_____
###Markdown
List Most Relevant Skills
###Code
# This code finds the top parameters['num_skills'] features to show the user. It filters out any
# ngram where the same n-1 version of the ngram is shown. This cuts down on repetition.
label_id = 4
print(model.classes_[label_id])
print('-------')
features_list = []
topn_class1 = sorted(zip(model.coef_[label_id], vect.get_feature_names()))[-parameters['num_skills']:]
for coef, feat in topn_class1:
features_list.append(feat)
# Seed the accepted list with the class label so near-duplicates of it are also filtered
accepted_skill_list = [model.classes_[label_id]]
# Walk the candidate ngrams from longest to shortest
for potential_skill in sorted(features_list, key=lambda x: -len(x.split())):
    highest_match = len(potential_skill.split())
    # Find the fewest "new" words this candidate adds relative to any accepted skill
    for accepted_skill in accepted_skill_list:
        leftovers = list(set(potential_skill.split()) - set(accepted_skill.split()))
        if len(leftovers) < highest_match:
            highest_match = len(leftovers)
    # Keep the candidate only if it contributes at least two new words
    if highest_match > 1:
        accepted_skill_list.append(potential_skill)
# Drop the seeded class label, leaving only actual skills
accepted_skill_list = accepted_skill_list[1:]
shuffle(accepted_skill_list)
for skill in accepted_skill_list:
print(skill)
###Output
2013-2018
-------
version control
unit testing
web application using
visual studio
full stack
develop maintain
store procedure
technology use
new feature
code review
agile scrum
rest api
design implement
using asp net
entity framework
management system
development team
software engineer
sql server
ruby rail
front end
test case
continuous integration
html5 css3
html cs javascript
user interface
###Markdown
Save New Model
###Code
# This code saves the model to the models folder
save_time = re.sub('[^A-Za-z0-9]+', '', str(datetime.datetime.now()))
print(save_time)
write_param = open(directory+"models/" + save_time + '_parameters.txt','w')
for key in parameters:
write_param.write(key + "=" + str(parameters[key]) + '\n')
write_param.close()
# Save preprocessed x data
pickling_on = open(directory+"models/"+save_time+"_x_data.pkl","wb")
pickle.dump(x_data, pickling_on)
pickling_on.close()
# Save preprocessed y labels
pickling_on = open(directory+"models/"+save_time+"_y_labels.pkl","wb")
pickle.dump(y_labels, pickling_on)
pickling_on.close()
# Save preprocessed x SMOTE data
pickling_on = open(directory+"models/"+save_time+"_x_SMOTE_data.pkl","wb")
pickle.dump(X_res, pickling_on)
pickling_on.close()
# Save preprocessed y SMOTE labels
pickling_on = open(directory+"models/"+save_time+"_y_SMOTE_labels.pkl","wb")
pickle.dump(y_res, pickling_on)
pickling_on.close()
# Save TF-IDF vectorizer
pickling_on = open(directory+"models/"+save_time+"_tdidf_vect.pkl","wb")
pickle.dump(vect, pickling_on)
pickling_on.close()
# Save vectorized x_train
pickling_on = open(directory+"models/"+save_time+"_x_trained_tdidf_vect.pkl","wb")
pickle.dump(X_train_vectorized, pickling_on)
pickling_on.close()
# Save NB model
pickling_on = open(directory+"models/"+save_time+"_nb_model.pkl","wb")
pickle.dump(model, pickling_on)
pickling_on.close()
###Output
20180718171406220007
###Markdown
Load Model
###Code
# This code loads an old model
save_time = '20180718171406220007' # for software_engineer
pickling_on = open(directory+"models/"+save_time+"_x_data.pkl","rb")
x_data = pickle.load(pickling_on)
pickling_on.close()
# Load preprocessed y labels
pickling_on = open(directory+"models/"+save_time+"_y_labels.pkl","rb")
y_labels = pickle.load(pickling_on)
pickling_on.close()
# Load TF-IDF vectorizer
pickling_on = open(directory+"models/"+save_time+"_tdidf_vect.pkl","rb")
vect = pickle.load(pickling_on)
pickling_on.close()
# Load vectorized x_train
pickling_on = open(directory+"models/"+save_time+"_x_trained_tdidf_vect.pkl","rb")
X_train_vectorized = pickle.load(pickling_on)
pickling_on.close()
# Load NB model
pickling_on = open(directory+"models/"+save_time+"_nb_model.pkl","rb")
model = pickle.load(pickling_on)
pickling_on.close()
###Output
/Users/kwheatley/anaconda/envs/python36/lib/python3.6/site-packages/sklearn/base.py:312: UserWarning: Trying to unpickle estimator TfidfTransformer from version 0.18.1 when using version 0.19.0. This might lead to breaking code or invalid results. Use at your own risk.
UserWarning)
|
notebooks-spanish/04-entrenando_y_generalizando.ipynb | ###Markdown
Training and testing===========To evaluate how well our supervised models generalize, we can split the data into a training set and a test set:
###Code
from sklearn.datasets import load_iris
from sklearn.neighbors import KNeighborsClassifier
iris = load_iris()
X, y = iris.data, iris.target
classifier = KNeighborsClassifier()
###Output
_____no_output_____
###Markdown
If we think about how machine learning is normally applied, the idea of a train/test split makes sense. Real-world systems are trained using the data that is available and, as other data arrives (from new customers, from other sensors, or from other sources), the previously trained model must predict *new* data. We can simulate this during learning with a train/test split -- the test data will be a simulation of "future data" that will come into the system in the production stage.Specifically for iris, the 150 labels are ordered, which means that if we split the data directly and proportionally, we will alter the class distribution. For example, if we performed a fairly common split of 2/3 for training and 1/3 for test, our training data would only contain flowers of classes 0 and 1 (Setosa and Versicolor), and our test data would only contain flowers of class 2 (Virginica).Under the assumption that all examples are independent of each other (which cannot be made with time-series data), it would be necessary to **randomly shuffle** the dataset before splitting it. Now we have to make the split. Fortunately, this is quite common in machine learning and scikit-learn has a ready-made function for splitting into training and test sets. We will use 50% of the data for training and the remaining 50% for test. 80% and 20% is another fairly common option, although it really depends a lot on the problem at hand. The most important thing for a fair evaluation is that **the evaluation is performed using data that was not used for training**.
###Code
y
from sklearn.model_selection import train_test_split
train_X, test_X, train_y, test_y = train_test_split(X, y,
train_size=0.5,
test_size=0.5,
random_state=123)
print("Etiquetas para los datos de entrenamiento y test")
print(train_y)
print(test_y)
###Output
_____no_output_____
###Markdown
**Tip: stratified splitting**Especially when dealing with relatively small datasets, it is better to stratify the split. Stratification means that we keep the proportion of data per class that was originally present in the generated subsets. For example, after randomly splitting the dataset as we did in the previous example, we can check that we have the following per-class proportions:
###Code
print('All:', np.bincount(y) / float(len(y)) * 100.0)
print('Training:', np.bincount(train_y) / float(len(train_y)) * 100.0)
print('Test:', np.bincount(test_y) / float(len(test_y)) * 100.0)
###Output
_____no_output_____
###Markdown
To obtain a stratified split, we have to include the label array when we call the `train_test_split` function:
###Code
train_X, test_X, train_y, test_y = train_test_split(X, y,
train_size=0.5,
test_size=0.5,
random_state=123,
stratify=y)
print('All:', np.bincount(y) / float(len(y)) * 100.0)
print('Training:', np.bincount(train_y) / float(len(train_y)) * 100.0)
print('Test:', np.bincount(test_y) / float(len(test_y)) * 100.0)
###Output
_____no_output_____
###Markdown
--- If we evaluate the performance of our classifier on data that has been used for training, we could arrive at overly optimistic results. In the worst case, the model may simply memorize the training data, yet fail spectacularly when it has to classify similar new data - we would never want to put such a system into production.Instead of using the same dataset for training and test (which is known as "resubstitution evaluation"), it is much better to use a train/test split so we can estimate how well the trained model performs on new data.
###Code
classifier.fit(train_X, train_y)
pred_y = classifier.predict(test_X)
print("CCR [Accuracy]:")
print(np.mean(pred_y == test_y))
###Output
_____no_output_____
###Markdown
We can visualize the correct and incorrect classifications:
###Code
print('Correctly classified examples:')
correct_idx = np.where(pred_y == test_y)[0]
print(correct_idx)
print('\nIncorrectly classified examples:')
incorrect_idx = np.where(pred_y != test_y)[0]
print(incorrect_idx)
# Plot in 2D
colors = ["darkblue", "darkgreen", "gray"]
for n, color in enumerate(colors):
    idx = np.where(test_y == n)[0]
    plt.scatter(test_X[idx, 1], test_X[idx, 2], color=color, label="Class %s" % str(n))
plt.scatter(test_X[incorrect_idx, 1], test_X[incorrect_idx, 2], color="darkred")
plt.xlabel('sepal width [cm]')
plt.ylabel('petal length [cm]')
plt.legend(loc=3)
plt.title("Classification results on iris with KNN")
plt.show()
###Output
_____no_output_____ |
Python/Exercise_3_DrugInteractions.ipynb | ###Markdown
FHIR for Research Workshop - Exercise 3 Learning Objectives and Key ConceptsIn this exercise, you will: - Apply Knowledge from Exercises 0, 1, and 2- Attempt to complete each activity on your own individually- Query active Prescriptions in our Patient cohort- Understand the (non-FHIR) Drug-on-Drug Interaction API and learn how to query it- Combine the FHIR data with the non-FHIR API to determine Drug-on-Drug Interactions. Drug on Drug InteractionsFor this exercise we will explore potential drug on drug interactions in a sizable patient cohort stored in FHIR combined with drug interaction data from the NIH's Drug RxNAV database. Motivation/PurposeFrom a research perspective we can envision leveraging these sorts of analyses to do post-market surveillance of drugs to determine both the rate of known adverse events among patients, as well as to potentially flag additional risks not yet identified. From a clinical perspective, this exercise demonstrates how third-party data (in this case drug-on-drug interaction data) can be pulled in, paired with FHIR-formatted clinical data, and then leveraged to better inform patient care in the form of Clinical Decision Support tools. Icons in this Guide 📘 A link to a useful external reference related to the section the icon appears in 🖐 A hands-on section where you will code something or interact with the server Step 1: Query all active prescriptions in our patient cohortFor this exercise we will call on the `MedicationRequest` resource, which represents a medication prescription in FHIR.📘[Read more about the MedicationRequest Resource](https://www.hl7.org/fhir/medicationrequest.html) Each `MedicationRequest` represents a single prescription, such that you may have a many-to-one relationship between MedicationRequests and patients, as it is often the case that patients will have multiple prescriptions.(This fact will be critical for our exercise, as determining a potential drug on drug interaction will require effectively grouping `MedicationRequest` resources by patient, to determine if the patient is on multiple concurrent prescriptions. We will therefore want to make sure we can include the relevant patient information to ensure we can map multiple prescriptions to individual patients.)
###Code
# Python standard library API for dealing with json
import json
# For making HTTP requests to the FHIR server
import requests
# Python data analysis library
import pandas as pd
FHIR_SERVER = 'https://api.logicahealth.org/researchonfhir/open'
# Configure requests session with standard headers
s = requests.Session()
s.headers.update({'Accept':'application/fhir+json', 'Content-Type': 'application/fhir+json'})
# Optional: Turn off SSL verification. Useful when dealing with a corporate proxy with self-signed certificates.
s.verify = False
requests.packages.urllib3.disable_warnings()
###Output
_____no_output_____
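###Markdown
Before composing the query, it helps to picture the shape of the data we are after. The fragment below is a hand-written sketch of the fields this exercise relies on (all values are hypothetical placeholders, not actual server data):
###Code
# Hypothetical MedicationRequest fragment -- illustrative only, not real data
example_medication_request = {
    "resourceType": "MedicationRequest",
    "status": "active",
    "medicationCodeableConcept": {
        "coding": [
            {
                "system": "http://www.nlm.nih.gov/research/umls/rxnorm",
                "code": "123456",  # placeholder RxNorm (RXCUI) code
                "display": "Example medication"
            }
        ]
    },
    "subject": {"reference": "Patient/example-id"}
}
###Output
_____no_output_____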
###Markdown
Compose the FHIR queryFirst compose a query to pull the `MedicationRequest` resource from the FHIR server. Then convert it to JSON format. Optionally, you could output the resulting JSON file to confirm that you've successfully queried the database.🖐 Fill in the URL for retrieving `MedicationRequest` Resources
###Code
r = s.get(f"{FHIR_SERVER}/FILLMEIN")
bundle = r.json()
###Output
_____no_output_____
###Markdown
We can now leverage the methods we deployed previously in Exercises 1 and 2 to create a Python list which contains only the Bundle.entry.resource elements from the bundle returned in the previous step. As a first step let's leverage the list mapping lambda function we deployed in Exercise 2 Section 1.1 ([Link to Exercise 2 solution here](https://github.com/mitre/fhir-exercises/blob/main/Solutions/Exercise_2_KidsFirst-SOLUTION.ipynb) for reference) to map out our JSON file (entering the entire bundle, and mapping by resource). As a sanity test let's return the first resource item (index 0 or [0]) so we can get a better look at what information we have to work with.🖐 Create a list of just the resources from the entries in the bundle
###Code
prescriptions = # To be completed...
prescriptions[0]
###Output
_____no_output_____
###Markdown
Convert Data into a Pandas DataFrame Now that we've confirmed that we've extracted the information we need from our FHIR server, we will take the FHIR-formatted data and convert it into a pandas dataframe for subsequent analysis.Based on our previous exercises we know we can use the `json_normalize` function to parse the JSON into a pandas dataframe. 📘[Read more about `pandas.json_normalize`](https://pandas.pydata.org/docs/reference/api/pandas.json_normalize.html)You may want to use the `max_level` argument with `json_normalize` to cap the number of levels for the function to parse (in previous exercises we set the number = 10).Let's do that now and then output the resulting dataframe to confirm we've successfully converted it.🖐 Convert your JSON file into a pandas dataframe
###Code
pd.set_option('display.max_columns', None)
df_prescriptions = # 🖐 Fill in this code...
df_prescriptions.head()
###Output
_____no_output_____
###Markdown
Depending on how you've parsed it, certain fields are immediately usable in their current form. For others, we're going to need to do further work to parse out the precise information we want to work with. For now though, we'll pause any additional feature engineering until we have a better sense of precisely what we'll want to use. So we now have a basic dataframe with drug and patient information. Let's examine the drug interaction API to see what data we'll need to extract from our dataframe. Step 2: Understanding the Drug API and using that API with FHIR data📘[Review the NIH's RXNav API documentation](https://lhncbc.nlm.nih.gov/RxNav/APIs/index.html)We see one clear option: we can use the six-digit RxNorm identifier code to query for drug interactions.📘[Review RXNav API findInteractionsFromList API documentation](https://lhncbc.nlm.nih.gov/RxNav/APIs/api-Interaction.findInteractionsFromList.html)This correlates with our Patient data column: `resource.medicationCodeableConcept.coding.codes` (quite a mouthful! But we'll deal with that shortly).Let's pull two sample interactions using the following general notation:`https://rxnav.nlm.nih.gov/REST/interaction/list.json?rxcuis=[code 1]+[code 2]`Two combinations we can try are: - 207106 and 656659 - 762675 and 859258 🖐 For each drug combination call the API and display the JSON response
###Code
url = '' # Solve for 207106 and 656659
response = s.get(url, headers={'accept': 'application/json'})
response.json()
url = '' # Solve for 762675 and 859258
response = s.get(url, headers={'accept': 'application/json'})
response.json()
###Output
_____no_output_____
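###Markdown
As a sketch (using the field names visible in the responses above), one way to pull the human-readable interaction details out of a response we have already fetched:
###Code
# Walk the nested structure of the interaction response; assumes `response`
# still holds the JSON from the query above
interaction_json = response.json()
for group in interaction_json.get('fullInteractionTypeGroup', []):
    for interaction_type in group.get('fullInteractionType', []):
        for pair in interaction_type.get('interactionPair', []):
            print(pair.get('severity'), '-', pair.get('description'))
###Output
_____no_output_____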
###Markdown
Feel free to experiment with additional drug combinations, including 3 or more drugs to see how the information varies.Reviewing the returned output we can begin to analyze the information provided, and assess our approach. Why are the RxNorm codes in the interactionPair different than what we sent? Do we need to care about severity? What other elements could be present and where are the descriptions indicating what each element represents (hint: should we be looking back at the API docs to interpret?)Taking stock, we have successfully accessed the Drug API, and hopefully now have an understanding of what the API returns when there is a drug interaction versus when there isn't.We now have important information informing our next steps. First, we have a structured target to work toward for submitting our patient data to the Drug API. For each patient, we will need to compile a list of RxNorm codes of the prescriptions they are on, and then append them to our API query with a `+` or `%20` between each code. For our next step we'll go about constructing that!Second, we have an understanding of how the Drug API returns a known interaction, versus what it returns when there isn't one. We can begin to consider how the format of this data can be used to indicate - in bulk - the presence or absence of a reaction. Step 3: Construct a composite list of all drugs per-patient (so we can determine a potential Drug on Drug interaction)So now we know that in order to engage the RxNorm server we need to extract and submit our patients' six-digit RxNorm codes. Let's go back to our original mapped JSON data and try a list comprehension to extract just the RxNorm-specific codes first.
###Code
pd.DataFrame([codings['code'] for MedicationRequest in prescriptions for codings in MedicationRequest['medicationCodeableConcept']['coding'] if codings['system'] == 'http://www.nlm.nih.gov/research/umls/rxnorm']).head()
###Output
_____no_output_____
###Markdown
If we look back at our `df_prescriptions` DataFrame we can see that we already have a column for `medicationCodeableConcept.coding`. Let's take a similar approach to what we did in Exercise 2 to extend what we just did and write a function to extract RxNorm codes based on just the coding. From there we can apply it to the DataFrame to generate a new column with just the RxNorm code.
###Code
def get_rx_norm_code(medication_codeable_concept_coding):
# Bonus points for error checking for medicationReference!
return next(coding['code'] for coding in medication_codeable_concept_coding if coding['system'] == 'http://www.nlm.nih.gov/research/umls/rxnorm')
get_rx_norm_code(prescriptions[0]['medicationCodeableConcept']['coding'])
rxcodes = pd.Series([codings['code'] for MedicationRequest in prescriptions for codings in MedicationRequest['medicationCodeableConcept']['coding']], name='rxcode')
dfcode = rxcodes.to_frame()
dfcode.head()
###Output
_____no_output_____
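###Markdown
As an aside, a more defensive variant of the helper is sketched below (an illustration of the "bonus points" note above; it returns `None` rather than raising when no RxNorm coding is present):
###Code
def get_rx_norm_code_safe(medication_codeable_concept_coding):
    # Guard against MedicationRequests that carry no codings at all
    if not medication_codeable_concept_coding:
        return None
    # next() with a default avoids StopIteration when no RxNorm entry exists
    return next((coding['code'] for coding in medication_codeable_concept_coding
                 if coding['system'] == 'http://www.nlm.nih.gov/research/umls/rxnorm'), None)
###Output
_____no_output_____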
###Markdown
Let's now consolidate our dataframe to retain the information we need. Specifically, we'll need information identifying the patient, an indication of whether or not the prescription is active (as only active prescriptions could cause a drug interaction), and finally the RXCUI code we previously extracted. Construct your final dataframe and then output the result to confirm you've retained the desired information.
###Code
# `.apply` Approach
df_prescriptions['rxcode'] = df_prescriptions['medicationCodeableConcept.coding'].apply(get_rx_norm_code)
# `.map` Approach
#df_prescriptions['rxcode'] = df_prescriptions['medicationCodeableConcept.coding'].map(lambda c: get_rx_norm_code(c))
rx_df = df_prescriptions[['subject.reference', 'status', 'rxcode']]
rx_df.head()
###Output
_____no_output_____
###Markdown
Filter data to only include active prescriptionsWe want to ensure that we're only querying active prescriptions. If a patient is no longer taking a drug, the risk of a Drug-on-Drug interaction is no longer applicable. If any inactive prescriptions are present, then filter your dataframe to ensure that only active prescriptions are included. We can do this by calling the Pandas `value_counts()` method on the column to determine what statuses are present and in what numbers, or even simply calling the pandas `unique()` method to determine the presence of any inactive prescriptions in our dataframe. 📘[Review the Pandas value_counts() documentation](https://pandas.pydata.org/docs/reference/api/pandas.Series.value_counts.html)📘[Review Pandas unique() documentation](https://pandas.pydata.org/docs/reference/api/pandas.Series.unique.html) 🖐 Using either of these methods, confirm that there are only active prescriptions in your data frame. It is worth noting that a more complex analysis might also take `MedicationRequest.dispenseRequest.initialFill`, `MedicationRequest.dispenseRequest.dispenseInterval`, and `MedicationRequest.dispenseRequest.validityPeriod` into account since an active medication may also define a `MedicationRequest` for the future which has yet to be prescribed and some medications may not be taken concurrently. Merge our prescriptions into a list by patientWe now need to create a list of drug codes for each patient, in order to feed that list into the RXNav API. Our desired output will look something like this, where we have a tuple-like structure of patient ID and a list of codes:![Screen Shot 2022-02-18 at 3.53.35 PM.png](attachment:5dcc492d-0e29-4a6e-95a6-44c5ed7480c9.png)Hint: to accomplish this try modifying the groupby function to merge our drugs by patient, and then apply a lambda function to append the code values to a list.📘[Review Pandas groupby() documentation](https://pandas.pydata.org/docs/reference/api/pandas.DataFrame.groupby.html) 🖐 Fill in the appropriate columns to create a list of RxNorm Codes for each patient.
###Code
# HINT: What column do we want to group by? and which column values do we want as a list?
groups_by_patient = rx_df.groupby('FILLMEIN', sort=False)['FILLMEIN'].apply(list)
groups_by_patient = pd.DataFrame({'patient':groups_by_patient.index, 'rxcode_list':groups_by_patient.values})
groups_by_patient.head()
###Output
_____no_output_____
###Markdown
Now that we've generated a list of active prescriptions for each patient, we can append this list to the RXNav query and determine whether each of these patients is at risk of a drug interaction. Step 4: Loop through our entire cohort and determine each patient's drug interactionsTo recap: we now have a list of patients with associated drug codes in list form, and we know how to query the RXNav API to determine if a drug interaction exists. As a last step, create a series of functions to iterate through our patient list and for each patient return whether or not a Drug on Drug interaction could occur.It might help to compose a helper function that takes a string of RxNorm codes (e.g., `123456+654321`), submits it to the API, and returns the result as formatted JSON.
###Code
# Function for calling the NIH RxNav interaction API
def has_drug_interaction(drug_list):
    # Join the RxNorm codes with '+' as the API expects
    drugs = "+".join(drug_list)
    url = 'https://rxnav.nlm.nih.gov/REST/interaction/list.json?rxcuis=' + drugs
    response = s.get(url, headers={'accept': 'application/json'})
    response_json = response.json()
    # The response only contains this key when at least one interaction was found
    return 'fullInteractionTypeGroup' in response_json
###Output
_____no_output_____
###Markdown
🖐 Test our original two drug combinations to ensure that it is outputting the expected responses.
###Code
# HINT: Note that this function takes as its input a list of strings
# so it will have to be formatted that way
###Output
_____no_output_____ |
notebooks/lesson_1_predict_boston_housing_prices.ipynb | ###Markdown
Boston House Prices dataset=========================== Data Set Characteristics: Number of Instances: 506 :Number of Attributes: 13 numeric/categorical predictive :Median Value (attribute 14) is usually the target :Attribute Information (in order): - CRIM per capita crime rate by town - ZN proportion of residential land zoned for lots over 25,000 sq.ft. - INDUS proportion of non-retail business acres per town - CHAS Charles River dummy variable (= 1 if tract bounds river; 0 otherwise) - NOX nitric oxides concentration (parts per 10 million) - RM average number of rooms per dwelling - AGE proportion of owner-occupied units built prior to 1940 - DIS weighted distances to five Boston employment centres - RAD index of accessibility to radial highways - TAX full-value property-tax rate per 10,000 dollars - PTRATIO pupil-teacher ratio by town - B 1000(Bk - 0.63)^2 where Bk is the proportion of blacks by town - LSTAT % lower status of the population - MEDV Median value of owner-occupied homes in 1000's :Missing Attribute Values: None :Creator: Harrison, D. and Rubinfeld, D.L.This is a copy of UCI ML housing dataset.http://archive.ics.uci.edu/ml/datasets/Housing
###Code
# compute pairwise pearson correlation of each feature and prices
for column_name in column_names[:-1]:
correlation = boston_df[column_name].corr(boston_df['target'])
if abs(correlation) >= 0.5:
print(f'Correlation between {column_name} and Target')
print(correlation)
trace = go.Scatter(
x = boston_df['LSTAT'],
y = boston_df['target'],
mode = 'markers'
)
data = [trace]
layout = go.Layout(
title='Low income family rate vs House Prices',
xaxis=dict(
title='Low income family rate',
titlefont=dict(
family='Courier New, monospace',
size=18,
color='#7f7f7f'
)
),
yaxis=dict(
title='House Prices',
titlefont=dict(
family='Courier New, monospace',
size=18,
color='#7f7f7f'
)
)
)
config={'showLink': False}
fig = go.Figure(data=data, layout=layout)
iplot(fig, filename='basic-scatter', config=config)
###Output
_____no_output_____
###Markdown
Linear Regression with One Variable Pseudocode from Andrew Ng's Machine Learning Course
###Code
Image(filename="../imgs/lesson_1_linear_regression.png", width=700, height=400)
X = np.array(boston_df['LSTAT']).reshape(-1, 1)
ones = np.ones((len(X),1))
X = np.hstack((ones, X))
y = np.array(boston_df['target'])
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.20, random_state=42)
m = X_train.shape[0]
def cost_function(X, y, theta, deriv=False):
hypothesis = X.dot(theta)
error = hypothesis - y
if deriv:
gradient = (1/m) * X.T.dot(error)
return gradient, error
else:
J = 1/(2*m) * error.dot(error)
return J
def gradient_descent(X, y, alpha, epochs, batch_size, theta):
    # Note: batch_size is accepted for API symmetry, but every update below
    # uses the full training set (batch gradient descent)
    theta_list = []
    cost_list = []
    for epoch_num in range(epochs):
        # Record the cost and parameters before each update
        cost = cost_function(X, y, theta)
        cost_list.append(cost)
        theta_list.append(theta)
        # Step down the gradient
        gradient, error = cost_function(X, y, theta, deriv=True)
        theta = theta - alpha * gradient
        if epoch_num % 1000 == 0:
            print(f"cost: {cost}")
    return cost_list, theta_list
# randomly initialize theta
theta = np.array([1, -0.5])
alpha = 0.001
epochs = 10000
batch_size = X_train.shape[0] # batch gradient descent
cost_list, theta_list = gradient_descent(X_train, y_train,
alpha, epochs, batch_size, theta)
final_theta = theta_list[-1]
print(final_theta)
## TODO Create animation for updated line and contour plots
###Output
_____no_output_____
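###Markdown
As a quick sanity check (a sketch reusing the `final_theta` fitted above), the held-out split can be scored before moving on:
###Code
# Evaluate the single-variable fit on the test split
test_predictions = X_test.dot(final_theta)
test_mse = np.mean((test_predictions - y_test) ** 2)
print(f"Test MSE: {test_mse:.2f}")
###Output
_____no_output_____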
###Markdown
Linear Regression with Multiple Variables
###Code
# combine top three correlated features
# complete algorithm
lstat = np.array(boston_df['LSTAT']).reshape(-1, 1)
full_set_size = lstat.shape[0]
ones = np.ones((full_set_size,1))
pupil_teacher_ratio = np.array(boston_df['PTRATIO']).reshape(-1, 1)
rooms = np.array(boston_df['RM']).reshape(-1, 1)
X_multi = np.hstack((ones, lstat, pupil_teacher_ratio, rooms))
y_multi = np.array(boston_df['target'])
X_train_multi, X_test_multi, y_train_multi, y_test_multi = train_test_split(X_multi, y_multi, test_size=0.20, random_state=42)
m = X_train_multi.shape[0]
###Output
_____no_output_____
###Markdown
Gradient Descent Approach
###Code
Image(filename="../imgs/lesson_1_multi_variable_linear_regression.png", width=700, height=400)
# randomly initialize theta
theta_multi = np.array([1, -0.7, -0.5, 0.7])
alpha_multi = 0.001
epochs_multi = 10000
batch_size_multi = m # batch gradient descent
cost_multi_list, theta_multi_list = gradient_descent(X_train_multi, y_train_multi,
alpha_multi, epochs_multi, batch_size_multi, theta_multi)
final_theta_multi = theta_multi_list[-1]
print(final_theta_multi)
###Output
[ 2.04801055 -0.50381761 -0.65300886 6.17587267]
###Markdown
Feature Scaling
###Code
from sklearn.preprocessing import StandardScaler
lstat_scaler = StandardScaler()
ptratio_scaler = StandardScaler()
rm_scaler = StandardScaler()
print(lstat_scaler)
lstat_scaled = lstat_scaler.fit_transform(lstat)
full_set_size = lstat.shape[0]
ones = np.ones((full_set_size,1))
pupil_teacher_ratio_scaled = ptratio_scaler.fit_transform(pupil_teacher_ratio)
rooms_scaled = rm_scaler.fit_transform(rooms)
X_multi_normalized = np.hstack((ones, lstat_scaled, pupil_teacher_ratio_scaled, rooms_scaled))
X_train_multi, X_test_multi, y_train_multi, y_test_multi = train_test_split(X_multi_normalized, y_multi, test_size=0.20, random_state=42)
m = X_train_multi.shape[0]
y_train_multi.shape
# randomly initialize theta
theta_multi = np.array([1, -0.7, -0.5, 0.7])
alpha_multi = 0.001
epochs_multi = 10000
batch_size_multi = m # batch gradient descent
cost_multi_list, theta_multi_list = gradient_descent(X_train_multi, y_train_multi,
alpha_multi, epochs_multi, batch_size_multi, theta_multi)
final_theta_multi = theta_multi_list[-1]
print(final_theta_multi)
###Output
[22.48285741 -4.02720936 -2.07026507 3.34308824]
###Markdown
Normal EquationYou can also use the normal equation instead of gradient descent to avoid the iterations and get the final theta list in one go.This works best when there are fewer n features than m training examples because the computation cost can get expensive as n increases. O(n^3)
###Code
Image(filename="../imgs/lesson_1_gd_vs_ne.png", width=1000, height=700)
final_theta_list = np.linalg.inv(X_train_multi.T.dot(X_train_multi)).dot(X_train_multi.T).dot(y_train_multi)
final_theta_list
###Output
_____no_output_____ |
docs/gallery/plot_skewsurge.ipynb | ###Markdown
Skew surge examples
###Code
import pandas as pd
import toto
import matplotlib.pyplot as plt
from toto.inputs.nc import NCfile
import os
# read the file
filename='https://raw.githubusercontent.com/calypso-science/Toto/master/_tests/nc_file/elevation.nc'
os.system('wget %s '% filename)
df=NCfile('elevation.nc')._toDataFrame()
# Processing
df_new=df[0].TideAnalysis.skew_surge(mag='elev40',args={'latitude':-36})
# Plot the results
fig, ax = plt.subplots(1)
ax.plot(df[0].index,df[0]['elev40'],label='Elevation')
ax.plot(df_new.index,df_new['skew_surge'],label='Skew surge')
ax.legend()
fig.autofmt_xdate()
plt.show()
###Output
_____no_output_____ |
docs/content/04_auto_naive.ipynb | ###Markdown
Automatically selecting a naive model to use as a benchmarkforecast-tools provides an `auto_naive` function that uses point-forecast cross validation to select the 'best' naive model to use as a benchmark. The function tests all of the naive `Forecast` methods.This notebook covers how to use `auto_naive` and also how to troubleshoot its use if there are conflicts between parameters. Imports
###Code
import sys
# if running in Google Colab install forecast-tools
if 'google.colab' in sys.modules:
!pip install forecast-tools
import numpy as np
from forecast_tools.datasets import load_emergency_dept
from forecast_tools.model_selection import auto_naive
help(auto_naive)
###Output
Help on function auto_naive in module forecast_tools.model_selection:
auto_naive(y_train, horizon=1, seasonal_period=1, min_train_size='auto', method='cv', step=1, window_size='auto', metric='mae')
Automatic selection of the 'best' naive benchmark on a 'single' series
The selection process uses out-of-sample cv performance.
By default auto_naive uses cross validation to estimate the mean
    point forecast performance of all naive methods. It selects the method
with the lowest point forecast metric on average.
If there is limited data for training a basic holdout sample could be
used.
Dev note: the plan is to update this to work with multiple series.
It would be best to use MASE for multiple series comparison.
Parameters:
----------
y_train: array-like
training data. typically in a pandas.Series, pandas.DataFrame
or numpy.ndarray format.
horizon: int, optional (default=1)
Forecast horizon.
seasonal_period: int, optional (default=1)
Frequency of the data. E.g. 7 for weekly pattern, 12 for monthly
365 for daily.
min_train_size: int or str, optional (default='auto')
The size of the initial training set (if method=='ro' or 'sw').
If 'auto' then then min_train_size is set to len(y_train) // 3
If main_train_size='auto' and method='holdout' then
min_train_size = len(y_train) - horizon.
method: str, optional (default='cv')
out of sample selection method.
'ro' - rolling forecast origin
'sw' - sliding window
'cv' - scores from both ro and sw
'holdout' - single train/test split
Methods'ro' and 'sw' are similar, however, sw has a fixed
window_size and drops older data from training.
step: int, optional (default=1)
The stride/step of the cross-validation. I.e. the number
of observations to move forward between folds.
window_size: str or int, optional (default='auto')
The window_size if using sliding window cross validation
When 'auto' and method='sw' then
window_size=len(y_train) // 3
metric: str, optional (default='mae')
The metric to measure out of sample accuracy.
Options: mase, mae, mape, smape, mse, rmse, me.
Returns:
--------
dict
'model': baseline.Forecast
f'{metric}': float
Contains the model and its CV performance.
Raises:
-------
ValueError
For invalid method, metric, window_size parameters
See Also:
--------
forecast_tools.baseline.Naive1
forecast_tools.baseline.SNaive
forecast_tools.baseline.Drift
forecast_tools.baseline.Average
forecast_tools.baseline.EnsembleNaive
forecast_tools.baseline.baseline_estimators
forecast_tools.model_selection.rolling_forecast_origin
forecast_tools.model_selection.sliding_window
forecast_tools.model_selection.mase_cross_validation_score
forecast_tools.metrics.mean_absolute_scaled_error
Examples:
---------
Measuring MAE and taking the best method using both
rolling origin and sliding window cross validation
of a 56 day forecast.
>>> from forecast_tools.datasets import load_emergency_dept
>>> y_train = load_emergency_dept
>>> best = auto_naive(y_train, seasonal_period=7, horizon=56)
>>> best
{'model': Average(), 'mae': 19.63791579700355}
Take a step of 7 days between cv folds.
>>> from forecast_tools.datasets import load_emergency_dept
>>> y_train = load_emergency_dept
>>> best = auto_naive(y_train, seasonal_period=7, horizon=56,
... step=7)
>>> best
{'model': Average(), 'mae': 19.675635558539383}
###Markdown
Load the training data
###Code
y_train = load_emergency_dept()
###Output
_____no_output_____
###Markdown
Select the best naive model for an h-step horizon of 7 days. Let's select a method for the emergency department daily series to predict 7 days ahead. By default the function uses the **mean absolute error** to evaluate forecast accuracy.
###Code
best = auto_naive(y_train, horizon=7, seasonal_period=7)
best
y_preds = best['model'].fit_predict(y_train, horizon=7)
y_preds
###Output
_____no_output_____
###Markdown
Using a different forecasting error metric
###Code
best = auto_naive(y_train, horizon=7, seasonal_period=7, metric='mape')
best
###Output
_____no_output_____
###Markdown
Using a single train-test split when data are limited. If your forecast horizon means that h-step cross-validation is infeasible, you can still select automatically using a single holdout sample.
###Code
best = auto_naive(y_train, horizon=7, seasonal_period=7, method='holdout')
best
###Output
_____no_output_____
###Markdown
Troubleshooting use of `auto_naive`. **Problem 1:** training data is shorter than `min_train_size` + `horizon`. For any validation to take place, including a simple holdout, the time series must allow at least one train/test split. This can be a problem when `seasonal_period` is set to a length similar to the length of the time series. A minimal feasibility check is sketched below.
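The constraint can be expressed as a one-line check (our sketch; `can_validate` is not part of forecast-tools):

```python
# At least one train/test split requires:
#   len(y_train) >= min_train_size + horizon
def can_validate(n_obs, min_train_size, horizon):
    return n_obs >= min_train_size + horizon

can_validate(365, min_train_size=365, horizon=7)  # False -> auto_naive raises ValueError
```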
###Code
# generate a synthetic daily time series of exactly one year in length.
y_train = np.random.randint(100, 250, size=365)
###Output
_____no_output_____
###Markdown
Let's set the seasonal period to `seasonal_period=365` (the length of the time series) and `horizon=7`. We will also manually set `min_train_size=365`. This will generate a `ValueError` reporting: "The training data is shorter than min_train_size + horizon. No validation can be performed."
###Code
best = auto_naive(y_train, horizon=7, seasonal_period=365, method='ro',
min_train_size=365, metric='mae')
best
###Output
_____no_output_____
###Markdown
A longer time series or a shorter seasonal period will fix this problem.
###Code
# a longer synthetic time series.
y_train = np.random.randint(100, 250, size=365+7)
best = auto_naive(y_train, horizon=7, seasonal_period=365, method='ro',
min_train_size=365, metric='mae')
best
# a shorter seasonal period and minimum training size
y_train = np.random.randint(100, 250, size=365)
best = auto_naive(y_train, horizon=7, seasonal_period=7, method='ro',
min_train_size=7, metric='mae')
best
###Output
_____no_output_____ |
5. Computer Vision/.ipynb_checkpoints/Feature Detection-checkpoint.ipynb | ###Markdown
Author: Vo, Huynh Quang Nguyen
###Code
import cv2
import numpy as np
import random
import matplotlib.pyplot as plt
from ipywidgets import interact
from sklearn.preprocessing import minmax_scale
###Output
_____no_output_____ |
Pytorch/tensor/tensor.ipynb | ###Markdown
Tensor
- Returns True if obj is a PyTorch tensor: torch.is_tensor(obj)
- Returns True if obj is a PyTorch storage object: torch.is_storage(obj)
- Sets the default data type for new tensors: torch.set_default_tensor_type(t)
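A quick illustration of the first two checks (our sketch; these are standard `torch` calls):

```python
import torch

x = torch.zeros(3)
print(torch.is_tensor(x))              # True
print(torch.is_tensor([1, 2, 3]))      # False: a plain list is not a tensor
print(torch.is_storage(x.storage()))   # True
```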
###Code
torch.tensor([1.2, 3]).dtype
torch.set_default_tensor_type(torch.DoubleTensor)
torch.tensor([1.2, 3]).dtype
###Output
_____no_output_____
###Markdown
- Returns the number of elements in the input tensor, where input (Tensor) is the input tensor: torch.numel(input) -> int
###Code
a = torch.randn(1, 2, 3, 4, 5)
torch.numel(a)
a = torch.zeros(4, 4)
torch.numel(a)
###Output
_____no_output_____ |
deeplearning1/nbs/fisheries_daniel.ipynb | ###Markdown
Major Key: SGD did way better than Adam on this
###Code
myModel.optimizer.lr=1e-3
myModel.fit_generator(train_batches, train_batches.nb_sample, nb_epoch=2,
validation_data=val_batches, nb_val_samples=val_batches.nb_sample)
###Output
Epoch 1/2
307/307 [==============================] - 11s - loss: 1.0734 - acc: 0.7199 - val_loss: 1.9440 - val_acc: 0.4883
Epoch 2/2
307/307 [==============================] - 11s - loss: 0.7735 - acc: 0.8274 - val_loss: 1.7796 - val_acc: 0.4883
###Markdown
Increased lr by a factor of 10--went even better! 60.56% on validation but 97% on training... with training accuracy that far above validation, that's overfitting, not underfitting.
###Code
from keras.optimizers import Nadam
myModel.summary()
myModel.fit_generator(train_batches, train_batches.nb_sample, nb_epoch=2,
validation_data=val_batches, nb_val_samples=val_batches.nb_sample)
###Output
Epoch 1/2
307/307 [==============================] - 11s - loss: 0.6615 - acc: 0.8697 - val_loss: 1.8520 - val_acc: 0.5634
Epoch 2/2
307/307 [==============================] - 11s - loss: 0.5587 - acc: 0.9055 - val_loss: 1.7588 - val_acc: 0.5775
###Markdown
Let's try some data augmentation! I'll also now try 10 epochs and see if things improve.
###Code
gen_t = image.ImageDataGenerator(width_shift_range=0.1)
batches = get_batches(path+'train', gen_t, batch_size=batch_size)
myModel = Sequential([
BatchNormalization(axis=1,input_shape=(3,224,224)),
Convolution2D(3,3,32,activation='relu'),
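    # Aside (our note): Keras 1's Convolution2D signature is (nb_filter, nb_row, nb_col),
    # so (3,3,32) builds 3 filters with 3x32 kernels; the later models in this
    # notebook use the more conventional (32,3,3).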
BatchNormalization(axis=1),
MaxPooling2D(pool_size=(2,2)),
Flatten(),
Dense(8,activation='softmax', W_regularizer=l2(0.01))
])
newModel = Sequential([
BatchNormalization(axis=1,input_shape=(3,224,224)),
Convolution2D(3,3,32,activation='relu'),
BatchNormalization(axis=1),
MaxPooling2D(pool_size=(2,2)),
Flatten(),
Dense(8,activation='softmax', W_regularizer=l2(0.01))
])
def nadam1(batches):
model = Sequential([
BatchNormalization(axis=1,input_shape=(3,224,224)),
Convolution2D(3,3,32,activation='relu'),
BatchNormalization(axis=1),
MaxPooling2D(pool_size=(2,2)),
Flatten(),
Dense(8,activation='softmax', W_regularizer=l2(0.01))
])
model.compile(Nadam(lr=1e-4), loss='categorical_crossentropy', metrics=['accuracy'])
model.fit_generator(batches, batches.nb_sample, nb_epoch=10, validation_data=val_batches,
nb_val_samples=val_batches.nb_sample)
model.optimizer.lr = 0.001
model.fit_generator(batches, batches.nb_sample, nb_epoch=10, validation_data=val_batches,
nb_val_samples=val_batches.nb_sample)
return model
# print("Nadam:")
# myModel.compile(Nadam, loss='categorical_crossentropy',metrics=['accuracy'])
# myModel.fit_generator(batches, batches.nb_sample, nb_epoch=10,
# validation_data=val_batches, nb_val_samples=val_batches.nb_sample)
print("Nadam w slower rate, 5 epochs")
model = nadam1(batches)
###Output
Nadam w slower rate, 5 epochs
Epoch 1/10
307/307 [==============================] - 12s - loss: 2.5352 - acc: 0.3811 - val_loss: 2.7860 - val_acc: 0.4648
Epoch 2/10
307/307 [==============================] - 11s - loss: 1.9833 - acc: 0.4919 - val_loss: 2.4024 - val_acc: 0.4695
Epoch 3/10
307/307 [==============================] - 11s - loss: 1.6503 - acc: 0.5700 - val_loss: 2.1790 - val_acc: 0.4225
Epoch 4/10
307/307 [==============================] - 11s - loss: 1.4354 - acc: 0.6384 - val_loss: 2.1610 - val_acc: 0.4930
Epoch 5/10
307/307 [==============================] - 11s - loss: 1.3813 - acc: 0.6319 - val_loss: 2.0069 - val_acc: 0.5117
Epoch 6/10
307/307 [==============================] - 11s - loss: 1.3305 - acc: 0.6645 - val_loss: 2.3662 - val_acc: 0.4742
Epoch 7/10
307/307 [==============================] - 11s - loss: 1.1625 - acc: 0.7068 - val_loss: 1.8504 - val_acc: 0.5822
Epoch 8/10
307/307 [==============================] - 11s - loss: 0.9239 - acc: 0.7590 - val_loss: 1.9372 - val_acc: 0.5493
Epoch 9/10
307/307 [==============================] - 11s - loss: 1.0125 - acc: 0.7459 - val_loss: 1.8556 - val_acc: 0.6150
Epoch 10/10
307/307 [==============================] - 11s - loss: 0.8473 - acc: 0.7915 - val_loss: 1.9488 - val_acc: 0.5634
Epoch 1/10
307/307 [==============================] - 12s - loss: 0.9930 - acc: 0.7362 - val_loss: 1.9149 - val_acc: 0.5728
Epoch 2/10
307/307 [==============================] - 11s - loss: 0.8264 - acc: 0.7948 - val_loss: 1.8654 - val_acc: 0.5962
Epoch 3/10
307/307 [==============================] - 11s - loss: 0.8495 - acc: 0.8078 - val_loss: 1.9818 - val_acc: 0.5822
Epoch 4/10
307/307 [==============================] - 11s - loss: 0.7029 - acc: 0.8469 - val_loss: 1.7591 - val_acc: 0.6103
Epoch 5/10
307/307 [==============================] - 11s - loss: 0.8047 - acc: 0.7948 - val_loss: 1.8144 - val_acc: 0.5869
Epoch 6/10
307/307 [==============================] - 11s - loss: 0.8310 - acc: 0.7948 - val_loss: 2.2518 - val_acc: 0.5681
Epoch 7/10
307/307 [==============================] - 11s - loss: 0.6199 - acc: 0.8534 - val_loss: 1.9282 - val_acc: 0.6103
Epoch 8/10
307/307 [==============================] - 10s - loss: 0.5734 - acc: 0.8697 - val_loss: 2.2699 - val_acc: 0.6103
Epoch 9/10
307/307 [==============================] - 11s - loss: 0.6346 - acc: 0.8404 - val_loss: 2.0220 - val_acc: 0.6197
Epoch 10/10
307/307 [==============================] - 11s - loss: 0.5734 - acc: 0.8958 - val_loss: 2.0441 - val_acc: 0.6197
###Markdown
Data augmentation with everything!
###Code
gen_t = image.ImageDataGenerator(width_shift_range=0.1, height_shift_range=0.05,
shear_range=0.1, rotation_range=15, channel_shift_range=20)
batches = get_batches(path+'train', gen_t, batch_size=batch_size)
model = nadam1(batches)
###Output
Epoch 1/10
307/307 [==============================] - 12s - loss: 2.4586 - acc: 0.3876 - val_loss: 3.0396 - val_acc: 0.3427
Epoch 2/10
307/307 [==============================] - 11s - loss: 2.1106 - acc: 0.4658 - val_loss: 2.1184 - val_acc: 0.4789
Epoch 3/10
307/307 [==============================] - 11s - loss: 1.9526 - acc: 0.5081 - val_loss: 2.2154 - val_acc: 0.4695
Epoch 4/10
307/307 [==============================] - 11s - loss: 1.9679 - acc: 0.5277 - val_loss: 2.2608 - val_acc: 0.4554
Epoch 5/10
307/307 [==============================] - 11s - loss: 1.9388 - acc: 0.4853 - val_loss: 2.0875 - val_acc: 0.5023
Epoch 6/10
307/307 [==============================] - 11s - loss: 1.9619 - acc: 0.5277 - val_loss: 2.2217 - val_acc: 0.5070
Epoch 7/10
307/307 [==============================] - 11s - loss: 1.8347 - acc: 0.5537 - val_loss: 2.3904 - val_acc: 0.4930
Epoch 8/10
307/307 [==============================] - 11s - loss: 1.8023 - acc: 0.5635 - val_loss: 2.2147 - val_acc: 0.5258
Epoch 9/10
307/307 [==============================] - 11s - loss: 1.8591 - acc: 0.5603 - val_loss: 2.2778 - val_acc: 0.5164
Epoch 10/10
307/307 [==============================] - 11s - loss: 1.5403 - acc: 0.6059 - val_loss: 2.3472 - val_acc: 0.5164
Epoch 1/10
307/307 [==============================] - 12s - loss: 1.6693 - acc: 0.5993 - val_loss: 2.5966 - val_acc: 0.5352
Epoch 2/10
307/307 [==============================] - 11s - loss: 1.8011 - acc: 0.5765 - val_loss: 2.5407 - val_acc: 0.5117
Epoch 3/10
307/307 [==============================] - 11s - loss: 1.5340 - acc: 0.5961 - val_loss: 2.6173 - val_acc: 0.5634
Epoch 4/10
307/307 [==============================] - 11s - loss: 1.6147 - acc: 0.6059 - val_loss: 2.4969 - val_acc: 0.5258
Epoch 5/10
307/307 [==============================] - 11s - loss: 1.5844 - acc: 0.5896 - val_loss: 2.5623 - val_acc: 0.4742
Epoch 6/10
307/307 [==============================] - 11s - loss: 1.5849 - acc: 0.5961 - val_loss: 2.6317 - val_acc: 0.5305
Epoch 7/10
307/307 [==============================] - 11s - loss: 1.5488 - acc: 0.5896 - val_loss: 2.6498 - val_acc: 0.5399
Epoch 8/10
307/307 [==============================] - 11s - loss: 1.5577 - acc: 0.5668 - val_loss: 2.4135 - val_acc: 0.5352
Epoch 9/10
307/307 [==============================] - 11s - loss: 1.6690 - acc: 0.5961 - val_loss: 2.5189 - val_acc: 0.5211
Epoch 10/10
307/307 [==============================] - 11s - loss: 1.4140 - acc: 0.6417 - val_loss: 2.7212 - val_acc: 0.5540
###Markdown
It's getting better--let's try turning down lr and running more epochs! Findings:We keep getting better! Epoch 5 of Nadam w a slower rate seems to be best, but it looks like we're overfitting on the training data a little bit... Let's come back and see what we can do to reduce this overfitting. Also, for some reason when I take things out of functions it starts giving me errors. I can't use model.optimizer.lr when I have model as a function but when I take it out I get a host of other errors.I'm also getting some cases where as the epoch continues the accuracy gets worse... Also in this specific case my val_loss was the highest on the final epoch.
###Code
model.optimizer.lr = 0.0001
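# Aside (our note, not in the original run): on Keras 1.x this attribute
# assignment may not change the already-compiled training function's learning
# rate; keras.backend.set_value(model.optimizer.lr, 1e-4) is the reliable route.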
model.fit_generator(batches, batches.nb_sample, nb_epoch=10, validation_data=val_batches,
nb_val_samples=val_batches.nb_sample)
###Output
Epoch 1/10
307/307 [==============================] - 12s - loss: 1.4011 - acc: 0.6678 - val_loss: 2.4876 - val_acc: 0.5540
Epoch 2/10
307/307 [==============================] - 11s - loss: 1.4969 - acc: 0.6482 - val_loss: 2.3973 - val_acc: 0.5634
Epoch 3/10
307/307 [==============================] - 11s - loss: 1.5251 - acc: 0.6254 - val_loss: 2.6143 - val_acc: 0.4977
Epoch 4/10
307/307 [==============================] - 11s - loss: 1.4222 - acc: 0.6124 - val_loss: 2.5558 - val_acc: 0.5587
Epoch 5/10
307/307 [==============================] - 11s - loss: 1.6291 - acc: 0.6059 - val_loss: 2.6840 - val_acc: 0.5446
Epoch 6/10
307/307 [==============================] - 11s - loss: 1.5204 - acc: 0.6319 - val_loss: 2.4550 - val_acc: 0.5352
Epoch 7/10
307/307 [==============================] - 11s - loss: 1.3750 - acc: 0.6384 - val_loss: 2.2908 - val_acc: 0.5681
Epoch 8/10
307/307 [==============================] - 11s - loss: 1.3818 - acc: 0.6710 - val_loss: 2.6775 - val_acc: 0.5915
Epoch 9/10
307/307 [==============================] - 11s - loss: 1.5074 - acc: 0.6450 - val_loss: 2.6073 - val_acc: 0.5258
Epoch 10/10
307/307 [==============================] - 11s - loss: 1.7378 - acc: 0.5896 - val_loss: 2.2587 - val_acc: 0.6056
###Markdown
What I know: we're most likely underfitting--val accuracy/loss sometimes gets worse as training goes on, though there doesn't seem to be a consistent improvement or decline. I've added both data augmentation and L2 regularization, so I'm not sure what else might help here. Maybe remove the L2, since it's a tool against overfitting and could be holding back an underfitting model? Let's also try less data augmentation.
###Code
gen_t = image.ImageDataGenerator(width_shift_range=0.1, height_shift_range=0.05,
shear_range=0.1, channel_shift_range=20)
batches = get_batches(path+'train', gen_t, batch_size=batch_size)
model.fit_generator(batches, batches.nb_sample, nb_epoch=10, validation_data=val_batches,
nb_val_samples=val_batches.nb_sample)
def nadam2(batches):
#same as before, just without L2!
model = Sequential([
BatchNormalization(axis=1,input_shape=(3,224,224)),
Convolution2D(3,3,32,activation='relu'),
BatchNormalization(axis=1),
MaxPooling2D(pool_size=(2,2)),
Flatten(),
Dense(8,activation='softmax')
])
model.compile(Nadam(lr=1e-4), loss='categorical_crossentropy', metrics=['accuracy'])
model.fit_generator(batches, batches.nb_sample, nb_epoch=10, validation_data=val_batches,
nb_val_samples=val_batches.nb_sample)
model.optimizer.lr = 0.001
model.fit_generator(batches, batches.nb_sample, nb_epoch=10, validation_data=val_batches,
nb_val_samples=val_batches.nb_sample)
return model
model2 = nadam2(batches)
###Output
Epoch 1/10
307/307 [==============================] - 12s - loss: 2.2928 - acc: 0.3909 - val_loss: 2.5572 - val_acc: 0.4413
Epoch 2/10
307/307 [==============================] - 11s - loss: 1.9985 - acc: 0.4756 - val_loss: 2.1686 - val_acc: 0.3709
Epoch 3/10
307/307 [==============================] - 11s - loss: 1.5476 - acc: 0.5700 - val_loss: 2.1391 - val_acc: 0.4460
Epoch 4/10
307/307 [==============================] - 11s - loss: 1.8373 - acc: 0.4886 - val_loss: 2.7411 - val_acc: 0.3709
Epoch 5/10
307/307 [==============================] - 11s - loss: 1.6762 - acc: 0.5570 - val_loss: 2.3732 - val_acc: 0.4789
Epoch 6/10
307/307 [==============================] - 11s - loss: 1.5732 - acc: 0.6091 - val_loss: 2.1126 - val_acc: 0.5728
Epoch 7/10
307/307 [==============================] - 11s - loss: 1.2451 - acc: 0.6515 - val_loss: 2.2953 - val_acc: 0.5070
Epoch 8/10
307/307 [==============================] - 11s - loss: 1.6839 - acc: 0.5961 - val_loss: 2.3208 - val_acc: 0.4836
Epoch 9/10
307/307 [==============================] - 11s - loss: 1.4500 - acc: 0.6352 - val_loss: 2.2735 - val_acc: 0.4695
Epoch 10/10
307/307 [==============================] - 11s - loss: 1.3605 - acc: 0.6319 - val_loss: 2.1790 - val_acc: 0.4930
Epoch 1/10
307/307 [==============================] - 12s - loss: 1.3481 - acc: 0.6384 - val_loss: 2.0906 - val_acc: 0.5634
Epoch 2/10
307/307 [==============================] - 11s - loss: 1.2524 - acc: 0.6840 - val_loss: 2.2438 - val_acc: 0.5164
Epoch 3/10
307/307 [==============================] - 11s - loss: 1.2339 - acc: 0.6515 - val_loss: 2.3701 - val_acc: 0.5258
Epoch 4/10
307/307 [==============================] - 11s - loss: 1.2960 - acc: 0.6221 - val_loss: 2.1161 - val_acc: 0.5446
Epoch 5/10
307/307 [==============================] - 11s - loss: 1.3181 - acc: 0.6352 - val_loss: 2.1475 - val_acc: 0.5493
Epoch 6/10
307/307 [==============================] - 11s - loss: 1.3639 - acc: 0.6287 - val_loss: 2.3719 - val_acc: 0.4554
Epoch 7/10
307/307 [==============================] - 11s - loss: 1.1513 - acc: 0.6612 - val_loss: 2.3568 - val_acc: 0.5070
Epoch 8/10
307/307 [==============================] - 11s - loss: 1.1248 - acc: 0.6515 - val_loss: 2.3735 - val_acc: 0.4742
Epoch 9/10
307/307 [==============================] - 11s - loss: 0.9928 - acc: 0.7362 - val_loss: 2.3203 - val_acc: 0.5540
Epoch 10/10
307/307 [==============================] - 11s - loss: 1.1045 - acc: 0.6808 - val_loss: 2.2094 - val_acc: 0.5446
###Markdown
Baseline: Simple Linear Model
###Code
model = Sequential([
BatchNormalization(axis=1,input_shape=(3,224,224)),
Flatten(),
Dense(8,activation='softmax')
])
model.compile('Adam', loss='categorical_crossentropy', metrics=['accuracy'])
model.fit_generator(train_batches, train_batches.nb_sample,nb_epoch=3, validation_data = val_batches,
nb_val_samples=val_batches.nb_sample)
model.summary()
np.round(model.predict_generator(train_batches, train_batches.N)[:10],2)
###Output
_____no_output_____
###Markdown
Let's try a lower learning rate!
###Code
model.compile(Adam(lr=1e-5), loss='categorical_crossentropy', metrics=['accuracy'])
model.fit_generator(train_batches, train_batches.nb_sample, nb_epoch=3, validation_data=val_batches,
nb_val_samples=val_batches.nb_sample)
###Output
Epoch 1/3
307/307 [==============================] - 12s - loss: 8.0328 - acc: 0.5016 - val_loss: 9.5571 - val_acc: 0.4038
Epoch 2/3
307/307 [==============================] - 11s - loss: 8.0328 - acc: 0.5016 - val_loss: 9.5918 - val_acc: 0.4038
Epoch 3/3
307/307 [==============================] - 11s - loss: 8.0328 - acc: 0.5016 - val_loss: 9.5874 - val_acc: 0.4038
###Markdown
Huh, val_acc = 0.4038 for every one of the above--that's interesting consistency.
###Code
model.optimizer.lr = 0.001
model.fit_generator(train_batches, train_batches.nb_sample, nb_epoch=3, validation_data=val_batches,
nb_val_samples=val_batches.nb_sample)
###Output
Epoch 1/3
307/307 [==============================] - 12s - loss: 8.0328 - acc: 0.5016 - val_loss: 9.5942 - val_acc: 0.3991
Epoch 2/3
307/307 [==============================] - 11s - loss: 8.0328 - acc: 0.5016 - val_loss: 9.6017 - val_acc: 0.3991
Epoch 3/3
307/307 [==============================] - 12s - loss: 8.0328 - acc: 0.5016 - val_loss: 9.6053 - val_acc: 0.3991
###Markdown
Again, acc=0.5016 is the same as before, while val_acc is slightly lower at 0.3991--but they're all identical across epochs again!! So we've hit some sort of plateau: we're probably either stuck in a local minimum or the learning rate is jumping too far to settle into the global one.
###Code
rnd_batches = get_batches(path+'valid', batch_size=batch_size*2, shuffle=True)
val_res = [model.evaluate_generator(rnd_batches, rnd_batches.nb_sample) for i in range(10)]
np.round(val_res,2)
###Output
_____no_output_____
###Markdown
From StackOverflow: evaluate_generator uses both your test inputs and outputs. It first predicts outputs from the test inputs and then evaluates performance by comparing the predictions against the true test outputs, giving a measure of performance--accuracy in our case. So what we just did above was evaluate the model 10 times so we can see whether there are statistically significant differences in performance. Turns out it's pretty consistent: min_acc = 0.38, max_acc = 0.42. Linear w/ L2 Regularization
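As a small follow-up (our sketch, assuming each entry of `val_res` is a `[loss, accuracy]` pair, which is what `evaluate_generator` returns with `metrics=['accuracy']`):

```python
import numpy as np

runs = np.array(val_res)   # shape (10, 2): [loss, accuracy] per run
accs = runs[:, 1]
print('val_acc: mean=%.3f std=%.3f min=%.2f max=%.2f'
      % (accs.mean(), accs.std(), accs.min(), accs.max()))
```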
###Code
model = Sequential([
BatchNormalization(axis=1,input_shape=(3,224,224)),
Flatten(),
Dense(8,activation='softmax', W_regularizer=l2(0.01))
])
model.compile(Adam(lr=10e-5), loss='categorical_crossentropy', metrics=['accuracy'])
model.fit_generator(train_batches, train_batches.nb_sample, nb_epoch=3, validation_data=val_batches,
nb_val_samples=val_batches.nb_sample)
###Output
Epoch 1/3
307/307 [==============================] - 13s - loss: 11.0260 - acc: 0.1889 - val_loss: 12.8843 - val_acc: 0.1784
Epoch 2/3
307/307 [==============================] - 11s - loss: 9.8730 - acc: 0.3453 - val_loss: 12.3482 - val_acc: 0.1737
Epoch 3/3
307/307 [==============================] - 11s - loss: 7.8282 - acc: 0.4788 - val_loss: 9.7379 - val_acc: 0.3709
###Markdown
So that got... worse? Except for the final epoch's val_acc=0.3709, which is close to what we had before, the earlier epochs' val_acc hovered around 0.17.
###Code
model.optimizer.lr=0.001
model.fit_generator(train_batches, train_batches.nb_sample, nb_epoch=3, validation_data=val_batches,
nb_val_samples=val_batches.nb_sample)
###Output
Epoch 1/3
307/307 [==============================] - 12s - loss: 6.9488 - acc: 0.5537 - val_loss: 8.8606 - val_acc: 0.4085
Epoch 2/3
307/307 [==============================] - 11s - loss: 6.4381 - acc: 0.5798 - val_loss: 7.6627 - val_acc: 0.5023
Epoch 3/3
307/307 [==============================] - 13s - loss: 5.8013 - acc: 0.6287 - val_loss: 7.6443 - val_acc: 0.5258
###Markdown
woahhhh we jumped like 10 whole percentage points! Single layer!
###Code
model = Sequential([
BatchNormalization(axis=1,input_shape=(3,224,224)),
Flatten(),
Dense(100, activation='relu'),
BatchNormalization(),
Dense(8,activation='softmax')
])
model.compile(Adam(lr=1e-5), loss='categorical_crossentropy', metrics=['accuracy'])
model.fit_generator(train_batches, train_batches.nb_sample, nb_epoch=3, validation_data=val_batches,
nb_val_samples=val_batches.nb_sample)
###Output
Epoch 1/3
307/307 [==============================] - 12s - loss: 2.4102 - acc: 0.1107 - val_loss: 5.9550 - val_acc: 0.0516
Epoch 2/3
307/307 [==============================] - 11s - loss: 2.2192 - acc: 0.1433 - val_loss: 3.3458 - val_acc: 0.0704
Epoch 3/3
307/307 [==============================] - 11s - loss: 2.1259 - acc: 0.1759 - val_loss: 2.7295 - val_acc: 0.0939
###Markdown
Full Dataset? Let's try our models on the full dataset to see wtf happens
###Code
#redefine location:
path = "data/fisheries/"
batch_size=64
val_batches = get_batches(path+'valid', shuffle = False, batch_size=batch_size)
train_batches = get_batches(path+'train', shuffle = False, batch_size=batch_size)
train_data = get_data(path+'train')
val_data = get_data(path+'valid')
model_path = path + 'models/'
if not os.path.exists(model_path): os.mkdir(model_path)
save_array(model_path+'train_data.bc',train_data)
save_array(model_path+'val_data.bc',val_data)
train_data = load_array(model_path+'train_data.bc')
val_data = load_array(model_path+'val_data.bc')
train_classes = train_batches.classes
train_labels = onehot(train_classes)
val_classes = val_batches.classes
val_labels = onehot(val_classes)
model = Sequential([
BatchNormalization(axis=1,input_shape=(3,224,224)),
Flatten(),
Dense(100, activation='relu'),
BatchNormalization(),
Dense(8,activation='softmax')
])
model.compile(Adam(lr=1e-5), loss='categorical_crossentropy', metrics=['accuracy'])
model.fit_generator(train_batches, train_batches.nb_sample, nb_epoch=5, validation_data=val_batches,
nb_val_samples=val_batches.nb_sample)
model.optimizer.lr=0.001
model.fit_generator(train_batches, train_batches.nb_sample, nb_epoch=5, validation_data=val_batches,
nb_val_samples=val_batches.nb_sample)
###Output
Epoch 1/5
3021/3021 [==============================] - 88s - loss: 2.0324 - acc: 0.2171 - val_loss: 1.9101 - val_acc: 0.2540
Epoch 2/5
3021/3021 [==============================] - 74s - loss: 2.0216 - acc: 0.2122 - val_loss: 1.8664 - val_acc: 0.2844
Epoch 3/5
3021/3021 [==============================] - 73s - loss: 2.0090 - acc: 0.2105 - val_loss: 1.8136 - val_acc: 0.3280
Epoch 4/5
3021/3021 [==============================] - 74s - loss: 1.9979 - acc: 0.2387 - val_loss: 1.7976 - val_acc: 0.3836
Epoch 5/5
3021/3021 [==============================] - 74s - loss: 1.9893 - acc: 0.2483 - val_loss: 1.7904 - val_acc: 0.3730
###Markdown
Let's try a different batch size.
###Code
batch_size=32
val_batches = get_batches(path+'valid', shuffle = False, batch_size=batch_size)
train_batches = get_batches(path+'train', shuffle = False, batch_size=batch_size)
###Output
Found 756 images belonging to 8 classes.
Found 3021 images belonging to 8 classes.
###Markdown
OK, we need to actually do something with the batch size.
###Code
val_batches = get_batches(path+'valid', shuffle = False, batch_size=batch_size)
train_batches = get_batches(path+'train', shuffle = False, batch_size=batch_size)
model.optimizer.lr=0.001
model.fit_generator(train_batches, train_batches.nb_sample, nb_epoch=3, validation_data=val_batches,
nb_val_samples=val_batches.nb_sample)
###Output
Epoch 1/3
3021/3021 [==============================] - 86s - loss: 2.0613 - acc: 0.1837 - val_loss: 1.7948 - val_acc: 0.3611
Epoch 2/3
3021/3021 [==============================] - 79s - loss: 2.0496 - acc: 0.1870 - val_loss: 1.7940 - val_acc: 0.3505
Epoch 3/3
3021/3021 [==============================] - 78s - loss: 2.0377 - acc: 0.1903 - val_loss: 1.8203 - val_acc: 0.3148
###Markdown
Data Augmentation
###Code
gen_t = image.ImageDataGenerator(width_shift_range=0.1, height_shift_range=0.05,
shear_range=0.1, rotation_range=15, channel_shift_range=20)
batches = get_batches(path+'train', gen_t, batch_size=batch_size)
model.compile(Adam(lr=1e-5), loss='categorical_crossentropy', metrics=['accuracy'])
model.fit_generator(batches, batches.nb_sample, nb_epoch=3, validation_data=val_batches,
nb_val_samples=val_batches.nb_sample)
model.optimizer.lr = 0.001
model.fit_generator(batches, batches.nb_sample, nb_epoch=3, validation_data=val_batches,
nb_val_samples=val_batches.nb_sample)
###Output
Epoch 1/3
3021/3021 [==============================] - 86s - loss: 1.3066 - acc: 0.6001 - val_loss: 0.9981 - val_acc: 0.7050
Epoch 2/3
3021/3021 [==============================] - 79s - loss: 1.3039 - acc: 0.5995 - val_loss: 1.0082 - val_acc: 0.7183
Epoch 3/3
3021/3021 [==============================] - 78s - loss: 1.2326 - acc: 0.6379 - val_loss: 0.8996 - val_acc: 0.7474
###Markdown
Let's try yet another batch size!
###Code
batch_size=4
val_batches = get_batches(path+'valid', shuffle = False, batch_size=batch_size)
train_batches = get_batches(path+'train', shuffle = False, batch_size=batch_size)
gen_t = image.ImageDataGenerator(width_shift_range=0.1, height_shift_range=0.05,
shear_range=0.1, rotation_range=15, channel_shift_range=20)
batches = get_batches(path+'train', gen_t, batch_size=batch_size)
model.fit_generator(batches, batches.nb_sample, nb_epoch=3, validation_data=val_batches,
nb_val_samples=val_batches.nb_sample)
model.optimizer.lr = 0.001
model.fit_generator(batches, batches.nb_sample, nb_epoch=3, validation_data=val_batches,
nb_val_samples=val_batches.nb_sample)
###Output
Epoch 1/3
3021/3021 [==============================] - 87s - loss: 1.5894 - acc: 0.4869 - val_loss: 1.1775 - val_acc: 0.6429
Epoch 2/3
3021/3021 [==============================] - 91s - loss: 1.5771 - acc: 0.4796 - val_loss: 1.2394 - val_acc: 0.6336
Epoch 3/3
3021/3021 [==============================] - 85s - loss: 1.5593 - acc: 0.4856 - val_loss: 1.1184 - val_acc: 0.6521
###Markdown
Multiple Conv Layers, No Dropout
###Code
model = Sequential([
BatchNormalization(axis=1,input_shape=(3,224,224)),
Convolution2D(32,3,3,activation='relu'),
BatchNormalization(axis=1),
MaxPooling2D((3,3)),
Convolution2D(64,3,3,activation='relu'),
BatchNormalization(axis=1),
MaxPooling2D((3,3)),
Flatten(),
Dense(8,activation='softmax')
])
model.compile(Adam(lr=1e-4), loss='categorical_crossentropy', metrics=['accuracy'])
model.fit_generator(train_batches, train_batches.nb_sample, nb_epoch=3, validation_data=val_batches,
nb_val_samples=val_batches.nb_sample)
model.optimizer.lr = 0.001
model.fit_generator(train_batches, train_batches.nb_sample, nb_epoch=3, validation_data=val_batches,
nb_val_samples=val_batches.nb_sample)
###Output
Epoch 1/3
3021/3021 [==============================] - 85s - loss: 5.7406 - acc: 0.6319 - val_loss: 11.6924 - val_acc: 0.2407
Epoch 2/3
3021/3021 [==============================] - 84s - loss: 9.2482 - acc: 0.4035 - val_loss: 12.6926 - val_acc: 0.2050
Epoch 3/3
3021/3021 [==============================] - 84s - loss: 7.7163 - acc: 0.4975 - val_loss: 8.3025 - val_acc: 0.4153
Epoch 1/3
3021/3021 [==============================] - 84s - loss: 6.3223 - acc: 0.5885 - val_loss: 10.1639 - val_acc: 0.3280
Epoch 2/3
3021/3021 [==============================] - 83s - loss: 5.3827 - acc: 0.6485 - val_loss: 9.4309 - val_acc: 0.3545
Epoch 3/3
3021/3021 [==============================] - 83s - loss: 5.6896 - acc: 0.6236 - val_loss: 11.1571 - val_acc: 0.2632
###Markdown
Now with augmented data!
###Code
model.compile(Adam(lr=1e-4), loss='categorical_crossentropy', metrics=['accuracy'])
model.fit_generator(batches, batches.nb_sample, nb_epoch=3, validation_data=val_batches,
nb_val_samples=val_batches.nb_sample)
model.optimizer.lr = 0.001
model.fit_generator(batches, batches.nb_sample, nb_epoch=3, validation_data=val_batches,
nb_val_samples=val_batches.nb_sample)
###Output
Epoch 1/3
3021/3021 [==============================] - 87s - loss: 6.5129 - acc: 0.5081 - val_loss: 4.5715 - val_acc: 0.6548
Epoch 2/3
3021/3021 [==============================] - 86s - loss: 4.8140 - acc: 0.5770 - val_loss: 4.6160 - val_acc: 0.5979
Epoch 3/3
3021/3021 [==============================] - 84s - loss: 4.3397 - acc: 0.6071 - val_loss: 3.0918 - val_acc: 0.7063
Epoch 1/3
3021/3021 [==============================] - 85s - loss: 3.8087 - acc: 0.6200 - val_loss: 3.2387 - val_acc: 0.7011
Epoch 2/3
3021/3021 [==============================] - 85s - loss: 3.3493 - acc: 0.6352 - val_loss: 2.8213 - val_acc: 0.7143
Epoch 3/3
3021/3021 [==============================] - 84s - loss: 2.9250 - acc: 0.6614 - val_loss: 2.1025 - val_acc: 0.7646
###Markdown
Stanford-Recommended Model
###Code
model = Sequential([
BatchNormalization(axis=1,input_shape=(3,224,224)),
Convolution2D(32,3,3,activation='relu'),
BatchNormalization(axis=1),
MaxPooling2D((3,3)),
Convolution2D(64,3,3,activation='relu'),
BatchNormalization(axis=1),
MaxPooling2D((3,3)),
Flatten(),
Dense(100,activation='relu'),
BatchNormalization(axis=1),
Dense(8,activation='softmax')
])
model.compile(Adam(lr=1e-5), loss='categorical_crossentropy', metrics=['accuracy'])
model.fit_generator(train_batches, train_batches.nb_sample, nb_epoch=5, validation_data=val_batches,
nb_val_samples=val_batches.nb_sample)
model.optimizer.lr = 0.001
model.fit_generator(batches, batches.nb_sample, nb_epoch=3, validation_data=val_batches,
nb_val_samples=val_batches.nb_sample)
###Output
Epoch 1/3
3021/3021 [==============================] - 86s - loss: 1.7861 - acc: 0.3853 - val_loss: 1.7726 - val_acc: 0.4683
Epoch 2/3
3021/3021 [==============================] - 85s - loss: 1.6163 - acc: 0.4657 - val_loss: 1.4920 - val_acc: 0.5251
Epoch 3/3
3021/3021 [==============================] - 85s - loss: 1.5182 - acc: 0.5197 - val_loss: 1.7912 - val_acc: 0.5860
###Markdown
Pseudo Labels First... Let's grab ImageNet.
###Code
vgg = Vgg16()
model = vgg.model
#gets last convolutional layer in the model so we can grab its output shape
last_conv_idx = [i for i,l in enumerate(model.layers) if type(l) is Convolution2D][-1]
conv_layers = model.layers[:last_conv_idx+1]
conv_model = Sequential(conv_layers)
###Output
_____no_output_____
###Markdown
Now we'll make a BN (batchnorm + dropout) model with a simplified version of VGG's dense layers
###Code
def get_bn_layers(p):
return [
MaxPooling2D(input_shape=conv_layers[-1].output_shape[1:]),
Flatten(),
Dropout(p/2),
Dense(128,activation='relu'),
BatchNormalization(),
Dropout(p/2),
Dense(128,activation='relu'),
BatchNormalization(),
Dropout(p),
Dense(8,activation='softmax')
]
###Output
_____no_output_____
###Markdown
Now we'll precompute convolutional features from VGG
###Code
batches = get_batches(path+'train', batch_size=64, shuffle=False)
test_batches = get_batches(path+'test_stg1', shuffle=False, batch_size=1)  # used below to compute conv_test_feat
(val_classes, trn_classes, val_labels, trn_labels,
val_filenames, filenames, test_filenames) = get_classes(path)
conv_feat = conv_model.predict_generator(batches,batches.nb_sample)
conv_val_feat = conv_model.predict_generator(val_batches,val_batches.nb_sample)
conv_test_feat = conv_model.predict_generator(test_batches,test_batches.nb_sample)
save_array(path+'results/conv_val_feat.dat', conv_val_feat)
save_array(path+'results/conv_test_feat.dat', conv_test_feat)
save_array(path+'results/conv_feat.dat', conv_feat)
conv_feat = load_array(path+'results/conv_feat.dat')
conv_val_feat = load_array(path+'results/conv_val_feat.dat')
conv_val_feat.shape
def get_bn_da_layers(p):
return [
MaxPooling2D(input_shape=conv_layers[-1].output_shape[1:]),
Flatten(),
Dropout(p),
Dense(256, activation='relu'),
BatchNormalization(),
Dropout(p),
Dense(256, activation='relu'),
BatchNormalization(),
Dropout(p),
Dense(8, activation='softmax')  # 8 fish classes, matching the 8-way one-hot labels
]
p=0.8
bn_model=Sequential(get_bn_da_layers(p))
bn_model.compile(Adam(lr=1e-5), loss="categorical_crossentropy",metrics=["accuracy"])
#data aug
gen_t = image.ImageDataGenerator(rotation_range=15, height_shift_range=0.05,
shear_range=0.1, channel_shift_range=20, width_shift_range=0.1)
da_batches = get_batches(path+'train', gen_t, batch_size=batch_size, shuffle=False)
da_conv_feat = conv_model.predict_generator(da_batches,da_batches.nb_sample*5)
save_array(path+'results/da_conv_feat2.dat', da_conv_feat)
da_conv_feat = load_array(path+'results/da_conv_feat2.dat')
da_conv_feat = np.concatenate([da_conv_feat, conv_feat])
da_trn_labels = np.concatenate([trn_labels]*6)
###Output
_____no_output_____
###Markdown
Now for pseudo-labeling: we use the current model's predictions on the validation set as soft labels and mix them into the training data.
###Code
val_pseudo = bn_model.predict(conv_val_feat,batch_size=batch_size)
comb_pseudo = np.concatenate([da_trn_labels,val_pseudo])
comb_feat = np.concatenate([da_conv_feat, conv_val_feat])
###Output
_____no_output_____
###Markdown
To tune the model up, we'll first train the dense BN model on the precomputed convolutional features
###Code
bn_model.fit(da_conv_feat, da_trn_labels, batch_size=batch_size, nb_epoch=1,
validation_data=(conv_val_feat, val_labels))
bn_model.optimizer.lr=0.01
bn_model.fit(da_conv_feat, da_trn_labels, batch_size=batch_size, nb_epoch=4,
validation_data=(conv_val_feat, val_labels))
bn_model.optimizer.lr=0.0001
bn_model.fit(da_conv_feat, da_trn_labels, batch_size=batch_size, nb_epoch=4,
validation_data=(conv_val_feat, val_labels))
bn_model.save_weights(path+'models/da_conv8_1.h5')
###Output
_____no_output_____
###Markdown
Now to load the model!
###Code
bn_model.load_weights(path+'models/da_conv8_1.h5')
bn_model.fit(comb_feat, comb_pseudo, batch_size=batch_size,nb_epoch=1,
validation_data=(conv_val_feat,val_labels))
bn_model.fit(comb_feat, comb_pseudo, batch_size=batch_size, nb_epoch=4,
validation_data=(conv_val_feat, val_labels))
bn_model.optimizer.lr=0.00001
bn_model.fit(comb_feat, comb_pseudo, batch_size=batch_size, nb_epoch=4,
validation_data=(conv_val_feat, val_labels))
bn_model.save_weights(path+'models/bn-ps8.h5')
###Output
_____no_output_____
###Markdown
Submit
###Code
def do_clip(arr, mx): return np.clip(arr, (1-mx)/9, mx)
val_preds = bn_model.predict(conv_val_feat, batch_size=batch_size)  # validation predictions for the clipped log-loss check
keras.metrics.categorical_crossentropy(val_labels, do_clip(val_preds, 0.93)).eval()
conv_test_feat = load_array(path+'results/conv_test_feat.dat')
preds = bn_model.predict(conv_test_feat, batch_size=batch_size*2)
subm = do_clip(preds,0.93)
subm_name = path+'results/subm.gz'
classes = sorted(batches.class_indices, key=batches.class_indices.get)
submission = pd.DataFrame(subm, columns=classes)
submission.insert(0, 'img', [a[4:] for a in test_filenames])
submission.head()
submission.to_csv(subm_name, index=False, compression='gzip')
FileLink(subm_name)
###Output
_____no_output_____ |
HW1/0816183_1.ipynb | ###Markdown
Artificial Intelligence - Assignment 1

1. Description

In this assignment, you are going to solve the 8-puzzle using any algorithm. The `EightPuzzle` class is written and provided by TAs, you don't need to implement the puzzle yourself, just import and use.

2. How to use `EightPuzzle`

```python
from eight_puzzle import EightPuzzle  # import

puzzle = EightPuzzle()
puzzle.init()          # initialize a solvable puzzle state
puzzle.init(seed)      # initialize a solvable puzzle state using a seed
print(puzzle)          # show current puzzle state

# move
puzzle.move('up')
puzzle.move('down')
puzzle.move('left')
puzzle.move('right')

if puzzle.state == puzzle.FINAL_STATE:
    print('You have solved the puzzle')

# Useful: get the next state after you move in a direction,
# this won't change the internal state of EightPuzzle.
state_after_move_up = puzzle.get_state_after_move(current_state, 'up')
```
###Code
# NOTE: PLEASE KEEP THIS CELL NOT MODIFIED!
# download eight_puzzle.py (YOU SHOULD NOT MODIFY eight_puzzle.py)
!wget https://lab.djosix.com/eight_puzzle.py -qO eight_puzzle.py
!sha1sum eight_puzzle.py
from eight_puzzle import EightPuzzle, test
###Output
1b9a6e8af95aed1010690788274f6c453ae88ed6 eight_puzzle.py
###Markdown
3. Implement a search algorithm to solve 8-puzzle
###Code
def priority(e):
    return e[1]  # order queue entries by heuristic score

def solve(p):
    fs = p.FINAL_STATE

    def misplaced(state):
        # heuristic: number of misplaced tiles
        return sum(1 for i in range(9) if state[i] != fs[i])

    # Greedy best-first search: the frontier holds (state, heuristic) pairs.
    # The path cost g is not included, so this orders on h alone rather than full A*.
    q = [(p.state, misplaced(p.state))]
    v = {p.state: []}  # map: state -> list of moves that reaches it
    while q:
        q.sort(key=priority)  # cheapest heuristic first
        state, _ = q.pop(0)
        if state == p.FINAL_STATE:
            return v[state]
        for d in p.DIRECTIONS:
            next_state = p.get_state_after_move(state, d)
            if next_state is not None and next_state not in v:
                v[next_state] = v[state] + [d]
                q.append((next_state, misplaced(next_state)))
    return []
###Output
_____no_output_____
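As a possible refinement (our sketch, not part of the provided assignment code): the misplaced-tiles heuristic can be swapped for Manhattan distance, which is also admissible and usually expands far fewer nodes, and a `heapq` frontier avoids re-sorting the whole queue every iteration.

```python
import heapq

def manhattan(state, final_state):
    # sum over non-blank tiles of |row difference| + |column difference|
    dist = 0
    for idx, tile in enumerate(state):
        if tile != 0:  # skip the blank
            goal = final_state.index(tile)
            dist += abs(idx // 3 - goal // 3) + abs(idx % 3 - goal % 3)
    return dist

# Sketch of a heap-based frontier (g = number of moves made so far):
# heapq.heappush(q, (g + 1 + manhattan(next_state, p.FINAL_STATE), next_state))
# _, state = heapq.heappop(q)
```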
###Markdown
4. Test your algorithm
###Code
# NOTE: PLEASE KEEP THIS CELL NOT MODIFIED!
results = test(solve, seed=1, n=100)
###Output
Running tests with seed: 1
Test | seed: 17532741 | puzzle: (8, 5, 6, 0, 4, 7, 2, 1, 3) | elapsed: 0.0339s | solved
Test | seed: 74572392 | puzzle: (1, 7, 2, 0, 6, 4, 3, 8, 5) | elapsed: 0.0094s | solved
Test | seed: 58954043 | puzzle: (1, 6, 0, 3, 2, 5, 4, 7, 8) | elapsed: 0.0261s | solved
Test | seed: 86504015 | puzzle: (8, 1, 3, 4, 0, 5, 6, 7, 2) | elapsed: 0.0085s | solved
Test | seed: 84410468 | puzzle: (4, 5, 8, 2, 7, 0, 1, 6, 3) | elapsed: 0.0288s | solved
Test | seed: 36821992 | puzzle: (1, 3, 8, 0, 6, 7, 2, 4, 5) | elapsed: 0.0081s | solved
Test | seed: 77742434 | puzzle: (5, 1, 0, 8, 2, 7, 4, 6, 3) | elapsed: 0.0337s | solved
Test | seed: 65485614 | puzzle: (1, 0, 3, 8, 2, 5, 6, 4, 7) | elapsed: 0.0058s | solved
Test | seed: 75085546 | puzzle: (2, 3, 5, 4, 0, 8, 7, 1, 6) | elapsed: 0.0260s | solved
Test | seed: 57887538 | puzzle: (6, 7, 8, 0, 5, 4, 3, 1, 2) | elapsed: 0.0195s | solved
Test | seed: 65623117 | puzzle: (6, 4, 7, 5, 0, 3, 2, 1, 8) | elapsed: 0.0376s | solved
Test | seed: 56449792 | puzzle: (5, 2, 1, 8, 3, 4, 7, 6, 0) | elapsed: 0.0187s | solved
Test | seed: 10212701 | puzzle: (2, 7, 3, 4, 5, 0, 1, 8, 6) | elapsed: 0.0190s | solved
Test | seed: 82273400 | puzzle: (2, 7, 6, 4, 1, 0, 8, 3, 5) | elapsed: 0.0113s | solved
Test | seed: 82492277 | puzzle: (1, 7, 4, 5, 0, 2, 8, 6, 3) | elapsed: 0.0326s | solved
Test | seed: 93683337 | puzzle: (4, 7, 0, 1, 2, 5, 8, 6, 3) | elapsed: 0.0190s | solved
Test | seed: 92201978 | puzzle: (4, 3, 0, 5, 2, 1, 7, 8, 6) | elapsed: 0.0198s | solved
Test | seed: 54444516 | puzzle: (5, 8, 4, 0, 6, 1, 2, 3, 7) | elapsed: 0.0775s | solved
Test | seed: 71491422 | puzzle: (6, 1, 4, 2, 5, 0, 3, 7, 8) | elapsed: 0.0256s | solved
Test | seed: 90511200 | puzzle: (8, 3, 0, 2, 5, 7, 6, 4, 1) | elapsed: 0.0178s | solved
Test | seed: 13754738 | puzzle: (0, 2, 4, 5, 1, 8, 3, 6, 7) | elapsed: 0.0090s | solved
Test | seed: 40817065 | puzzle: (5, 8, 0, 3, 7, 2, 4, 1, 6) | elapsed: 0.0155s | solved
Test | seed: 95278064 | puzzle: (8, 0, 1, 5, 6, 7, 3, 4, 2) | elapsed: 0.0594s | solved
Test | seed: 33784892 | puzzle: (6, 5, 4, 3, 8, 2, 1, 7, 0) | elapsed: 0.0206s | solved
Test | seed: 83921254 | puzzle: (4, 0, 8, 6, 5, 3, 1, 2, 7) | elapsed: 0.0302s | solved
Test | seed: 88445010 | puzzle: (7, 3, 5, 1, 2, 6, 4, 8, 0) | elapsed: 0.0407s | solved
Test | seed: 34264416 | puzzle: (2, 5, 4, 7, 0, 3, 1, 6, 8) | elapsed: 0.0151s | solved
Test | seed: 22294532 | puzzle: (7, 0, 2, 5, 4, 6, 1, 3, 8) | elapsed: 0.0761s | solved
Test | seed: 83957878 | puzzle: (5, 7, 2, 8, 0, 1, 6, 3, 4) | elapsed: 0.0232s | solved
Test | seed: 44264986 | puzzle: (5, 8, 0, 2, 1, 4, 6, 7, 3) | elapsed: 0.0304s | solved
Test | seed: 14356590 | puzzle: (4, 1, 3, 8, 6, 2, 5, 0, 7) | elapsed: 0.0089s | solved
Test | seed: 19456105 | puzzle: (3, 0, 4, 1, 2, 7, 5, 6, 8) | elapsed: 0.0261s | solved
Test | seed: 21171496 | puzzle: (6, 1, 4, 0, 7, 5, 3, 8, 2) | elapsed: 0.1140s | solved
Test | seed: 12240178 | puzzle: (6, 2, 3, 5, 7, 0, 4, 1, 8) | elapsed: 0.0162s | solved
Test | seed: 70800468 | puzzle: (1, 0, 5, 6, 2, 3, 4, 7, 8) | elapsed: 0.0167s | solved
Test | seed: 11954206 | puzzle: (1, 3, 2, 7, 8, 5, 4, 0, 6) | elapsed: 0.0162s | solved
Test | seed: 47741579 | puzzle: (5, 1, 0, 7, 4, 3, 6, 2, 8) | elapsed: 0.0673s | solved
Test | seed: 43495272 | puzzle: (7, 6, 8, 5, 2, 0, 1, 4, 3) | elapsed: 0.0730s | solved
Test | seed: 46056483 | puzzle: (3, 0, 2, 5, 7, 4, 6, 1, 8) | elapsed: 0.0103s | solved
Test | seed: 24695314 | puzzle: (8, 1, 5, 4, 0, 3, 7, 6, 2) | elapsed: 0.0224s | solved
Test | seed: 93859516 | puzzle: (4, 1, 6, 8, 5, 0, 2, 3, 7) | elapsed: 0.0926s | solved
Test | seed: 34777959 | puzzle: (0, 4, 3, 7, 1, 6, 5, 2, 8) | elapsed: 0.0091s | solved
Test | seed: 56227654 | puzzle: (3, 6, 5, 1, 7, 0, 4, 2, 8) | elapsed: 0.0170s | solved
Test | seed: 48961302 | puzzle: (1, 7, 8, 3, 4, 0, 6, 2, 5) | elapsed: 0.0933s | solved
Test | seed: 19330196 | puzzle: (5, 0, 3, 6, 8, 1, 4, 2, 7) | elapsed: 0.0837s | solved
Test | seed: 32477484 | puzzle: (8, 3, 4, 1, 5, 0, 7, 2, 6) | elapsed: 0.0084s | solved
Test | seed: 31424575 | puzzle: (3, 5, 8, 1, 4, 2, 7, 0, 6) | elapsed: 0.0069s | solved
Test | seed: 44254527 | puzzle: (7, 2, 5, 8, 0, 1, 4, 6, 3) | elapsed: 0.0483s | solved
Test | seed: 80783798 | puzzle: (1, 4, 5, 0, 6, 3, 2, 8, 7) | elapsed: 0.0234s | solved
Test | seed: 32568032 | puzzle: (0, 2, 8, 7, 1, 5, 3, 4, 6) | elapsed: 0.0294s | solved
Test | seed: 98134944 | puzzle: (6, 0, 4, 5, 3, 7, 2, 1, 8) | elapsed: 0.0133s | solved
Test | seed: 46629955 | puzzle: (1, 0, 2, 7, 4, 5, 8, 3, 6) | elapsed: 0.0067s | solved
Test | seed: 97000307 | puzzle: (6, 7, 3, 4, 1, 8, 2, 0, 5) | elapsed: 0.0688s | solved
Test | seed: 49526151 | puzzle: (7, 6, 8, 3, 1, 0, 2, 4, 5) | elapsed: 0.0047s | solved
Test | seed: 71029019 | puzzle: (2, 0, 6, 1, 5, 7, 4, 3, 8) | elapsed: 0.0505s | solved
Test | seed: 53218345 | puzzle: (0, 5, 2, 3, 6, 7, 8, 1, 4) | elapsed: 0.1189s | solved
Test | seed: 76638250 | puzzle: (8, 7, 6, 4, 0, 2, 1, 3, 5) | elapsed: 0.0090s | solved
Test | seed: 73588469 | puzzle: (7, 4, 1, 6, 2, 0, 5, 8, 3) | elapsed: 0.0259s | solved
Test | seed: 25326409 | puzzle: (8, 6, 1, 0, 3, 2, 7, 5, 4) | elapsed: 0.0123s | solved
Test | seed: 13172179 | puzzle: (5, 1, 3, 6, 8, 7, 2, 0, 4) | elapsed: 0.0102s | solved
Test | seed: 51876592 | puzzle: (2, 6, 5, 4, 1, 0, 3, 7, 8) | elapsed: 0.0234s | solved
Test | seed: 61882816 | puzzle: (8, 1, 6, 4, 2, 0, 3, 7, 5) | elapsed: 0.0268s | solved
Test | seed: 56082646 | puzzle: (5, 3, 0, 8, 1, 7, 4, 6, 2) | elapsed: 0.0233s | solved
Test | seed: 66494748 | puzzle: (7, 6, 4, 5, 1, 3, 2, 0, 8) | elapsed: 0.0228s | solved
Test | seed: 35238208 | puzzle: (7, 3, 6, 4, 2, 5, 1, 0, 8) | elapsed: 0.0256s | solved
Test | seed: 44684657 | puzzle: (6, 8, 7, 3, 0, 1, 4, 5, 2) | elapsed: 0.0106s | solved
Test | seed: 24597747 | puzzle: (1, 8, 5, 4, 2, 0, 6, 3, 7) | elapsed: 0.0139s | solved
Test | seed: 44018576 | puzzle: (7, 8, 4, 1, 0, 6, 5, 2, 3) | elapsed: 0.0241s | solved
Test | seed: 78466607 | puzzle: (3, 6, 5, 0, 7, 2, 8, 4, 1) | elapsed: 0.0519s | solved
Test | seed: 38063717 | puzzle: (0, 1, 7, 6, 4, 8, 5, 2, 3) | elapsed: 0.0367s | solved
Test | seed: 91288784 | puzzle: (8, 0, 5, 1, 6, 2, 4, 7, 3) | elapsed: 0.0055s | solved
Test | seed: 67935826 | puzzle: (2, 7, 8, 3, 5, 6, 1, 4, 0) | elapsed: 0.0409s | solved
Test | seed: 12794159 | puzzle: (8, 7, 4, 0, 3, 2, 6, 5, 1) | elapsed: 0.0208s | solved
Test | seed: 40249188 | puzzle: (1, 2, 5, 6, 0, 8, 4, 3, 7) | elapsed: 0.0689s | solved
Test | seed: 12397735 | puzzle: (7, 8, 4, 6, 5, 2, 3, 1, 0) | elapsed: 0.0186s | solved
Test | seed: 63326766 | puzzle: (1, 2, 3, 6, 4, 8, 5, 7, 0) | elapsed: 0.0087s | solved
Test | seed: 29657762 | puzzle: (6, 0, 3, 4, 5, 2, 1, 7, 8) | elapsed: 0.0256s | solved
Test | seed: 14741381 | puzzle: (1, 6, 8, 0, 3, 5, 4, 7, 2) | elapsed: 0.0040s | solved
Test | seed: 31505383 | puzzle: (7, 5, 4, 2, 0, 8, 3, 1, 6) | elapsed: 0.0274s | solved
Test | seed: 69816615 | puzzle: (1, 0, 4, 3, 6, 2, 7, 5, 8) | elapsed: 0.0274s | solved
Test | seed: 77955685 | puzzle: (6, 0, 1, 3, 5, 7, 8, 2, 4) | elapsed: 0.1145s | solved
Test | seed: 67266010 | puzzle: (0, 4, 1, 8, 6, 7, 3, 5, 2) | elapsed: 0.0106s | solved
Test | seed: 83108686 | puzzle: (7, 1, 0, 8, 3, 6, 5, 4, 2) | elapsed: 0.0294s | solved
Test | seed: 39608396 | puzzle: (3, 7, 6, 2, 4, 0, 1, 8, 5) | elapsed: 0.0183s | solved
Test | seed: 94660762 | puzzle: (8, 2, 1, 4, 6, 5, 0, 3, 7) | elapsed: 0.0173s | solved
Test | seed: 79336813 | puzzle: (2, 0, 3, 6, 1, 8, 7, 4, 5) | elapsed: 0.0064s | solved
Test | seed: 70511395 | puzzle: (7, 4, 1, 0, 6, 8, 5, 3, 2) | elapsed: 0.0046s | solved
Test | seed: 39956830 | puzzle: (0, 6, 8, 1, 5, 7, 4, 3, 2) | elapsed: 0.0724s | solved
Test | seed: 80316055 | puzzle: (4, 2, 8, 5, 0, 1, 3, 7, 6) | elapsed: 0.0144s | solved
Test | seed: 97041058 | puzzle: (0, 2, 7, 4, 3, 5, 8, 1, 6) | elapsed: 0.0248s | solved
Test | seed: 14120521 | puzzle: (5, 0, 6, 2, 3, 8, 1, 7, 4) | elapsed: 0.0119s | solved
Test | seed: 63002313 | puzzle: (5, 3, 1, 2, 6, 8, 0, 7, 4) | elapsed: 0.0166s | solved
Test | seed: 87288736 | puzzle: (0, 1, 2, 8, 3, 5, 7, 4, 6) | elapsed: 0.0638s | solved
Test | seed: 53116882 | puzzle: (4, 8, 1, 7, 5, 2, 6, 0, 3) | elapsed: 0.0330s | solved
Test | seed: 98560063 | puzzle: (2, 6, 7, 5, 1, 4, 8, 3, 0) | elapsed: 0.0367s | solved
Test | seed: 94684388 | puzzle: (2, 5, 4, 7, 0, 6, 8, 3, 1) | elapsed: 0.0639s | solved
Test | seed: 67216934 | puzzle: (1, 8, 7, 5, 3, 0, 4, 2, 6) | elapsed: 0.0842s | solved
Test | seed: 17890004 | puzzle: (2, 4, 8, 1, 0, 5, 6, 3, 7) | elapsed: 0.0134s | solved
Test | seed: 50078212 | puzzle: (0, 2, 6, 8, 4, 5, 1, 3, 7) | elapsed: 0.0119s | solved
Test | seed: 26868929 | puzzle: (4, 2, 1, 6, 7, 0, 5, 8, 3) | elapsed: 0.0676s | solved
===> Solved: 100/100
===> Average elapsed time: 0.0312s
|
convolutional-neural-networks/conv-visualization/maxpooling_visualization.ipynb | ###Markdown
Maxpooling LayerIn this notebook, we add and visualize the output of a maxpooling layer in a CNN. A convolutional layer + activation function, followed by a pooling layer, and a linear layer (to create a desired output size) make up the basic layers of a CNN. Import the image
###Code
import cv2
import matplotlib.pyplot as plt
%matplotlib inline
img_path = 'data/udacity_sdc.png'
# load color image
bgr_img = cv2.imread(img_path)
# convert to grayscale
gray_img = cv2.cvtColor(bgr_img, cv2.COLOR_BGR2GRAY)
# normalize, rescale entries to lie in [0,1]
gray_img = gray_img.astype("float32")/255
# plot image
plt.imshow(gray_img, cmap='gray')
plt.show()
###Output
_____no_output_____
###Markdown
Define and visualize the filters
###Code
import numpy as np
filter_vals = np.array([[-1, -1, 1, 1], [-1, -1, 1, 1], [-1, -1, 1, 1], [-1, -1, 1, 1]])
print('Filter shape: ', filter_vals.shape)
# Defining four different filters,
# all of which are linear combinations of the `filter_vals` defined above
# define four filters
filter_1 = filter_vals
filter_2 = -filter_1
filter_3 = filter_1.T
filter_4 = -filter_3
filters = np.array([filter_1, filter_2, filter_3, filter_4])
# For an example, print out the values of filter 1
print('Filter 1: \n', filter_1)
###Output
Filter 1:
[[-1 -1 1 1]
[-1 -1 1 1]
[-1 -1 1 1]
[-1 -1 1 1]]
###Markdown
Define convolutional and pooling layersYou've seen how to define a convolutional layer, next is a:* Pooling layerIn the next cell, we initialize a convolutional layer so that it contains all the created filters. Then add a maxpooling layer, [documented here](http://pytorch.org/docs/stable/_modules/torch/nn/modules/pooling.html), with a kernel size of (2x2) so you can see that the image resolution has been reduced after this step!A maxpooling layer reduces the x-y size of an input and only keeps the most *active* pixel values. Below is an example of a 2x2 pooling kernel, with a stride of 2, applied to a small patch of grayscale pixel values; reducing the x-y size of the patch by a factor of 2. Only the maximum pixel values in 2x2 remain in the new, pooled output.
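The 2x2, stride-2 arithmetic can be sanity-checked on a tiny patch (our sketch, independent of the model defined below):

```python
import torch
import torch.nn.functional as F

patch = torch.arange(1., 17.).view(1, 1, 4, 4)   # a 4x4 "image" of values 1..16
print(F.max_pool2d(patch, kernel_size=2, stride=2))
# tensor([[[[ 6.,  8.],
#           [14., 16.]]]])
```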
###Code
import torch
import torch.nn as nn
import torch.nn.functional as F
# define a neural network with a convolutional layer with four filters
# AND a pooling layer of size (2, 2)
class Net(nn.Module):
def __init__(self, weight):
super(Net, self).__init__()
# initializes the weights of the convolutional layer to be the weights of the 4 defined filters
k_height, k_width = weight.shape[2:]
# assumes there are 4 grayscale filters
self.conv = nn.Conv2d(1, 4, kernel_size=(k_height, k_width), bias=False)
self.conv.weight = torch.nn.Parameter(weight)
# define a pooling layer
self.pool = nn.MaxPool2d(2, 2)
def forward(self, x):
# calculates the output of a convolutional layer
# pre- and post-activation
conv_x = self.conv(x)
activated_x = F.relu(conv_x)
# applies pooling layer
pooled_x = self.pool(activated_x)
# returns all layers
return conv_x, activated_x, pooled_x
# instantiate the model and set the weights
weight = torch.from_numpy(filters).unsqueeze(1).type(torch.FloatTensor)
model = Net(weight)
# print out the layer in the network
print(model)
###Output
Net(
(conv): Conv2d(1, 4, kernel_size=(4, 4), stride=(1, 1), bias=False)
(pool): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
)
###Markdown
Visualize the output of each filterFirst, we'll define a helper function, `viz_layer` that takes in a specific layer and number of filters (optional argument), and displays the output of that layer once an image has been passed through.
###Code
# helper function for visualizing the output of a given layer
# default number of filters is 4
def viz_layer(layer, n_filters= 4):
fig = plt.figure(figsize=(20, 20))
for i in range(n_filters):
ax = fig.add_subplot(1, n_filters, i+1)
# grab layer outputs
ax.imshow(np.squeeze(layer[0,i].data.numpy()), cmap='gray')
ax.set_title('Output %s' % str(i+1))
###Output
_____no_output_____
###Markdown
Let's look at the output of a convolutional layer after a ReLU activation function is applied. ReLU activation: a ReLU function turns all negative pixel values into 0's (black). See the equation pictured below for input pixel values, `x`.
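A one-line numeric check of relu(x) = max(0, x) (our sketch; `torch` and `F` are imported above):

```python
print(F.relu(torch.tensor([-2.0, -0.5, 0.0, 3.0])))
# tensor([0., 0., 0., 3.])
```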
###Code
# plot original image
plt.imshow(gray_img, cmap='gray')
# visualize all filters
fig = plt.figure(figsize=(12, 6))
fig.subplots_adjust(left=0, right=1.5, bottom=0.8, top=1, hspace=0.05, wspace=0.05)
for i in range(4):
ax = fig.add_subplot(1, 4, i+1, xticks=[], yticks=[])
ax.imshow(filters[i], cmap='gray')
ax.set_title('Filter %s' % str(i+1))
# convert the image into an input Tensor
gray_img_tensor = torch.from_numpy(gray_img).unsqueeze(0).unsqueeze(1)
# get all the layers
conv_layer, activated_layer, pooled_layer = model(gray_img_tensor)
# visualize the output of the activated conv layer
viz_layer(activated_layer)
###Output
_____no_output_____
###Markdown
Visualize the output of the pooling layerThen, take a look at the output of a pooling layer. The pooling layer takes as input the feature maps pictured above and reduces the dimensionality of those maps, by some pooling factor, by constructing a new, smaller image of only the maximum (brightest) values in a given kernel area.Take a look at the values on the x, y axes to see how the image has changed size.
###Code
# visualize the output of the pooling layer
viz_layer(pooled_layer)
###Output
_____no_output_____
###Markdown
Maxpooling LayerIn this notebook, we add and visualize the output of a maxpooling layer in a CNN. A convolutional layer + activation function, followed by a pooling layer, and a linear layer (to create a desired output size) make up the basic layers of a CNN. Copy files and install pytorch
###Code
import sys
try:
import torch
except ImportError:
import os
os.environ['TCMALLOC_LARGE_ALLOC_REPORT_THRESHOLD']='2000000000'
# http://pytorch.org/
from os.path import exists
from wheel.pep425tags import get_abbr_impl, get_impl_ver, get_abi_tag
platform = '{}{}-{}'.format(get_abbr_impl(), get_impl_ver(), get_abi_tag())
cuda_output = !ldconfig -p|grep cudart.so|sed -e 's/.*\.\([0-9]*\)\.\([0-9]*\)$/cu\1\2/'
accelerator = cuda_output[0] if exists('/dev/nvidia0') else 'cpu'
!{sys.executable} -m pip install -q http://download.pytorch.org/whl/{accelerator}/torch-0.4.1-{platform}-linux_x86_64.whl torchvision >/dev/null
! curl -s https://codeload.github.com/udacity/deep-learning-v2-pytorch/tar.gz/master | tar -xz --strip=3 deep-learning-v2-pytorch-master/convolutional-neural-networks/conv-visualization/data/ >/dev/null 2>&1
###Output
_____no_output_____
###Markdown
Import the image
###Code
import cv2
import matplotlib.pyplot as plt
%matplotlib inline
# TODO: Feel free to try out your own images here by changing img_path
# to a file path to another image on your computer!
img_path = 'data/udacity_sdc.png'
# load color image
bgr_img = cv2.imread(img_path)
# convert to grayscale
gray_img = cv2.cvtColor(bgr_img, cv2.COLOR_BGR2GRAY)
# normalize, rescale entries to lie in [0,1]
gray_img = gray_img.astype("float32")/255
# plot image
plt.imshow(gray_img, cmap='gray')
plt.axis('off')
plt.show();
###Output
_____no_output_____
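###Markdown
An optional check, not in the original notebook: print the shape, dtype, and value range of `gray_img` to confirm the grayscale conversion and the rescaling to [0,1].
###Code
# optional sanity check on the loaded image (uses gray_img from the cell above)
print('shape:', gray_img.shape)                                  # (height, width), one channel
print('dtype:', gray_img.dtype)                                  # float32 after the cast above
print('range: [%.3f, %.3f]' % (gray_img.min(), gray_img.max()))  # entries should lie in [0, 1]
###Output
_____no_output_____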
###Markdown
Define and visualize the filters
###Code
import numpy as np
## TODO: Feel free to modify the numbers here, to try out another filter!
filter_vals = np.array([[-1, -1, 1, 1], [-1, -1, 1, 1], [-1, -1, 1, 1], [-1, -1, 1, 1]])
print('Filter shape: ', filter_vals.shape)
# Defining four different filters,
# all of which are linear combinations of the `filter_vals` defined above
# define four filters
filter_1 = filter_vals
filter_2 = -filter_1
filter_3 = filter_1.T
filter_4 = -filter_3
filters = np.array([filter_1, filter_2, filter_3, filter_4])
# For an example, print out the values of filter 1
print('Filter 1: \n', filter_1)
###Output
_____no_output_____
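###Markdown
To build intuition for why `filter_1` detects vertical edges, here is an illustrative sketch (the two patches are made up for this example): cross-correlating the filter with a patch that is dark on the left and bright on the right gives a strong positive response, while a uniform patch gives zero, because the filter weights sum to 0.
###Code
# illustrative only: filter_1's response to a vertical edge vs. a flat region
edge_patch = np.array([[0, 0, 1, 1]] * 4, dtype=np.float32)  # dark left, bright right
flat_patch = np.ones((4, 4), dtype=np.float32)               # uniform brightness
print('edge response:', np.sum(filter_1 * edge_patch))  # 8.0 -- strong positive response
print('flat response:', np.sum(filter_1 * flat_patch))  # 0.0 -- filter weights sum to zero
###Output
_____no_output_____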
###Markdown
Define convolutional and pooling layersYou've seen how to define a convolutional layer, next is a:* Pooling layerIn the next cell, we initialize a convolutional layer so that it contains all the created filters. Then add a maxpooling layer, [documented here](http://pytorch.org/docs/stable/_modules/torch/nn/modules/pooling.html), with a kernel size of (2x2) so you can see that the image resolution has been reduced after this step!A maxpooling layer reduces the x-y size of an input and only keeps the most *active* pixel values. Below is an example of a 2x2 pooling kernel, with a stride of 2, applied to a small patch of grayscale pixel values; reducing the x-y size of the patch by a factor of 2. Only the maximum pixel values in 2x2 remain in the new, pooled output.
###Code
import torch
import torch.nn as nn
import torch.nn.functional as F
# define a neural network with a convolutional layer with four filters
# AND a pooling layer of size (2, 2)
class Net(nn.Module):
def __init__(self, weight):
super(Net, self).__init__()
# initializes the weights of the convolutional layer to be the weights of the 4 defined filters
k_height, k_width = weight.shape[2:]
# assumes there are 4 grayscale filters
self.conv = nn.Conv2d(1, 4, kernel_size=(k_height, k_width), bias=False)
self.conv.weight = torch.nn.Parameter(weight)
# define a pooling layer
self.pool = nn.MaxPool2d(2, 2)
def forward(self, x):
# calculates the output of a convolutional layer
# pre- and post-activation
conv_x = self.conv(x)
activated_x = F.relu(conv_x)
# applies pooling layer
pooled_x = self.pool(activated_x)
# returns all layers
return conv_x, activated_x, pooled_x
# instantiate the model and set the weights
weight = torch.from_numpy(filters).unsqueeze(1).type(torch.FloatTensor)
model = Net(weight)
# print out the layer in the network
print(model)
###Output
_____no_output_____
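###Markdown
Before passing the image through the network, it can help to predict the spatial sizes. With no padding and stride 1, a k x k convolution maps an H x W input to (H - k + 1) x (W - k + 1), and the 2x2 max pool then halves each dimension (rounding down). The following sketch (an added check, not from the original notebook) computes the expected sizes for `gray_img`.
###Code
# predicted output sizes for the conv and pooling layers (uses gray_img from the earlier cell)
H, W = gray_img.shape
conv_h, conv_w = H - 4 + 1, W - 4 + 1      # 4x4 kernel, stride 1, no padding
pool_h, pool_w = conv_h // 2, conv_w // 2  # 2x2 max pool with stride 2
print('input :', (H, W))
print('conv  :', (conv_h, conv_w))
print('pooled:', (pool_h, pool_w))
###Output
_____no_output_____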
###Markdown
Visualize the output of each filterFirst, we'll define a helper function, `viz_layer` that takes in a specific layer and number of filters (optional argument), and displays the output of that layer once an image has been passed through.
###Code
# helper function for visualizing the output of a given layer
# default number of filters is 4
def viz_layer(layer, n_filters= 4):
fig = plt.figure(figsize=(20, 20))
for i in range(n_filters):
ax = fig.add_subplot(1, n_filters, i+1)
# grab layer outputs
ax.imshow(np.squeeze(layer[0,i].data.numpy()), cmap='gray')
ax.set_title('Output %s' % str(i+1))
###Output
_____no_output_____
###Markdown
Let's look at the output of a convolutional layer after a ReLU activation function is applied. ReLU activationA ReLU function turns all negative pixel values into 0's (black). See the equation pictured below for input pixel values, `x`. <img src='https://raw.githubusercontent.com/udacity/deep-learning-v2-pytorch/master/convolutional-neural-networks/conv-visualization/notebook_ims/relu_ex.png' height=50% width=50% />
###Code
# plot original image
plt.imshow(gray_img, cmap='gray')
# visualize all filters
fig = plt.figure(figsize=(12, 6))
fig.subplots_adjust(left=0, right=1.5, bottom=0.8, top=1, hspace=0.05, wspace=0.05)
for i in range(4):
ax = fig.add_subplot(1, 4, i+1, xticks=[], yticks=[])
ax.imshow(filters[i], cmap='gray')
ax.set_title('Filter %s' % str(i+1))
# convert the image into an input Tensor
gray_img_tensor = torch.from_numpy(gray_img).unsqueeze(0).unsqueeze(1)
# get all the layers
conv_layer, activated_layer, pooled_layer = model(gray_img_tensor)
# visualize the output of the activated conv layer
viz_layer(activated_layer);
###Output
_____no_output_____
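###Markdown
Since `forward` also returns the pre-activation maps, a natural follow-up (a suggestion, not part of the original notebook) is to visualize `conv_layer` and compare it with the activated output above: in the pre-activation maps the gray levels span both negative and positive responses, while after ReLU everything at or below zero is collapsed to black.
###Code
# compare the raw convolution output with the ReLU-activated maps above
viz_layer(conv_layer)
###Output
_____no_output_____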
###Markdown
Visualize the output of the pooling layerThen, take a look at the output of a pooling layer. The pooling layer takes as input the feature maps pictured above and reduces the dimensionality of those maps, by some pooling factor, by constructing a new, smaller image of only the maximum (brightest) values in a given kernel area. Take a look at the values on the x, y axes to see how the image has changed size.
###Code
# visualize the output of the pooling layer
viz_layer(pooled_layer)
###Output
_____no_output_____
###Code
import cv2
import matplotlib.pyplot as plt
%matplotlib inline
# TODO: Feel free to try out your own images here by changing img_path
# to a file path to another image on your computer!
img_path = 'data/udacity_sdc.png'
# load color image
bgr_img = cv2.imread(img_path)
# convert to grayscale
gray_img = cv2.cvtColor(bgr_img, cv2.COLOR_BGR2GRAY)
# normalize, rescale entries to lie in [0,1]
gray_img = gray_img.astype("float32")/255
# plot image
plt.imshow(gray_img, cmap='gray')
plt.show()
###Output
_____no_output_____
###Markdown
Define and visualize the filters
###Code
import numpy as np
## TODO: Feel free to modify the numbers here, to try out another filter!
filter_vals = np.array([[-1, -1, 1, 1], [-1, -1, 1, 1], [-1, -1, 1, 1], [-1, -1, 1, 1]])
print('Filter shape: ', filter_vals.shape)
# Defining four different filters,
# all of which are linear combinations of the `filter_vals` defined above
# define four filters
filter_1 = filter_vals
filter_2 = -filter_1
filter_3 = filter_1.T
filter_4 = -filter_3
filters = np.array([filter_1, filter_2, filter_3, filter_4])
# For an example, print out the values of filter 1
print('Filter 1: \n', filter_1)
###Output
Filter 1:
[[-1 -1 1 1]
[-1 -1 1 1]
[-1 -1 1 1]
[-1 -1 1 1]]
###Markdown
Define convolutional and pooling layersYou've seen how to define a convolutional layer, next is a:* Pooling layerIn the next cell, we initialize a convolutional layer so that it contains all the created filters. Then add a maxpooling layer, [documented here](http://pytorch.org/docs/stable/_modules/torch/nn/modules/pooling.html), with a kernel size of (2x2) so you can see that the image resolution has been reduced after this step!A maxpooling layer reduces the x-y size of an input and only keeps the most *active* pixel values. Below is an example of a 2x2 pooling kernel, with a stride of 2, appied to a small patch of grayscale pixel values; reducing the x-y size of the patch by a factor of 2. Only the maximum pixel values in 2x2 remain in the new, pooled output.
###Code
import torch
import torch.nn as nn
import torch.nn.functional as F
# define a neural network with a convolutional layer with four filters
# AND a pooling layer of size (2, 2)
class Net(nn.Module):
def __init__(self, weight):
super(Net, self).__init__()
# initializes the weights of the convolutional layer to be the weights of the 4 defined filters
k_height, k_width = weight.shape[2:]
# assumes there are 4 grayscale filters
self.conv = nn.Conv2d(1, 4, kernel_size=(k_height, k_width), bias=False)
self.conv.weight = torch.nn.Parameter(weight)
# define a pooling layer
self.pool = nn.MaxPool2d(2, 2)
def forward(self, x):
# calculates the output of a convolutional layer
# pre- and post-activation
conv_x = self.conv(x)
activated_x = F.relu(conv_x)
# applies pooling layer
pooled_x = self.pool(activated_x)
# returns all layers
return conv_x, activated_x, pooled_x
# instantiate the model and set the weights
weight = torch.from_numpy(filters).unsqueeze(1).type(torch.FloatTensor)
model = Net(weight)
# print out the layer in the network
print(model)
###Output
Net(
(conv): Conv2d(1, 4, kernel_size=(4, 4), stride=(1, 1), bias=False)
(pool): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
)
###Markdown
Visualize the output of each filterFirst, we'll define a helper function, `viz_layer` that takes in a specific layer and number of filters (optional argument), and displays the output of that layer once an image has been passed through.
###Code
# helper function for visualizing the output of a given layer
# default number of filters is 4
def viz_layer(layer, n_filters= 4):
fig = plt.figure(figsize=(20, 20))
for i in range(n_filters):
ax = fig.add_subplot(1, n_filters, i+1)
# grab layer outputs
ax.imshow(np.squeeze(layer[0,i].data.numpy()), cmap='gray')
ax.set_title('Output %s' % str(i+1))
###Output
_____no_output_____
###Markdown
Let's look at the output of a convolutional layer after a ReLu activation function is applied. ReLu activationA ReLu function turns all negative pixel values in 0's (black). See the equation pictured below for input pixel values, `x`.
###Code
# plot original image
plt.imshow(gray_img, cmap='gray')
# visualize all filters
fig = plt.figure(figsize=(12, 6))
fig.subplots_adjust(left=0, right=1.5, bottom=0.8, top=1, hspace=0.05, wspace=0.05)
for i in range(4):
ax = fig.add_subplot(1, 4, i+1, xticks=[], yticks=[])
ax.imshow(filters[i], cmap='gray')
ax.set_title('Filter %s' % str(i+1))
# convert the image into an input Tensor
gray_img_tensor = torch.from_numpy(gray_img).unsqueeze(0).unsqueeze(1)
# get all the layers
conv_layer, activated_layer, pooled_layer = model(gray_img_tensor)
# visualize the output of the activated conv layer
viz_layer(activated_layer)
###Output
_____no_output_____
###Markdown
Visualize the output of the pooling layerThen, take a look at the output of a pooling layer. The pooling layer takes as input the feature maps pictured above and reduces the dimensionality of those maps, by some pooling factor, by constructing a new, smaller image of only the maximum (brightest) values in a given kernel area.Take a look at the values on the x, y axes to see how the image has changed size.
###Code
# visualize the output of the pooling layer
viz_layer(pooled_layer)
###Output
_____no_output_____
###Markdown
Maxpooling LayerIn this notebook, we add and visualize the output of a maxpooling layer in a CNN. A convolutional layer + activation function, followed by a pooling layer, and a linear layer (to create a desired output size) make up the basic layers of a CNN. Import the image
###Code
import cv2
import matplotlib.pyplot as plt
%matplotlib inline
# TODO: Feel free to try out your own images here by changing img_path
# to a file path to another image on your computer!
img_path = 'data/udacity_sdc.png'
# load color image
bgr_img = cv2.imread(img_path)
# convert to grayscale
gray_img = cv2.cvtColor(bgr_img, cv2.COLOR_BGR2GRAY)
# normalize, rescale entries to lie in [0,1]
gray_img = gray_img.astype("float32")/255
# plot image
plt.imshow(gray_img, cmap='gray')
plt.show()
###Output
_____no_output_____
###Markdown
Define and visualize the filters
###Code
import numpy as np
## TODO: Feel free to modify the numbers here, to try out another filter!
filter_vals = np.array([[-1, -1, 1, 1], [-1, -1, 1, 1], [-1, -1, 1, 1], [-1, -1, 1, 1]])
print('Filter shape: ', filter_vals.shape)
# Defining four different filters,
# all of which are linear combinations of the `filter_vals` defined above
# define four filters
filter_1 = filter_vals
filter_2 = -filter_1
filter_3 = filter_1.T
filter_4 = -filter_3
filters = np.array([filter_1, filter_2, filter_3, filter_4])
# For an example, print out the values of filter 1
print('Filter 1: \n', filter_1)
###Output
Filter 1:
[[-1 -1 1 1]
[-1 -1 1 1]
[-1 -1 1 1]
[-1 -1 1 1]]
###Markdown
Define convolutional and pooling layersYou've seen how to define a convolutional layer, next is a:* Pooling layerIn the next cell, we initialize a convolutional layer so that it contains all the created filters. Then add a maxpooling layer, [documented here](http://pytorch.org/docs/stable/_modules/torch/nn/modules/pooling.html), with a kernel size of (2x2) so you can see that the image resolution has been reduced after this step!A maxpooling layer reduces the x-y size of an input and only keeps the most *active* pixel values. Below is an example of a 2x2 pooling kernel, with a stride of 2, applied to a small patch of grayscale pixel values; reducing the x-y size of the patch by a factor of 2. Only the maximum pixel values in 2x2 remain in the new, pooled output.
###Code
import torch
import torch.nn as nn
import torch.nn.functional as F
# define a neural network with a convolutional layer with four filters
# AND a pooling layer of size (2, 2)
class Net(nn.Module):
def __init__(self, weight):
super(Net, self).__init__()
# initializes the weights of the convolutional layer to be the weights of the 4 defined filters
k_height, k_width = weight.shape[2:]
# assumes there are 4 grayscale filters
self.conv = nn.Conv2d(1, 4, kernel_size=(k_height, k_width), bias=False)
self.conv.weight = torch.nn.Parameter(weight)
# define a pooling layer
self.pool = nn.MaxPool2d(2, 2)
def forward(self, x):
# calculates the output of a convolutional layer
# pre- and post-activation
conv_x = self.conv(x)
activated_x = F.relu(conv_x)
# applies pooling layer
pooled_x = self.pool(activated_x)
# returns all layers
return conv_x, activated_x, pooled_x
# instantiate the model and set the weights
weight = torch.from_numpy(filters).unsqueeze(1).type(torch.FloatTensor)
model = Net(weight)
# print out the layer in the network
print(model)
###Output
Net(
(conv): Conv2d(1, 4, kernel_size=(4, 4), stride=(1, 1), bias=False)
(pool): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
)
###Markdown
Visualize the output of each filterFirst, we'll define a helper function, `viz_layer` that takes in a specific layer and number of filters (optional argument), and displays the output of that layer once an image has been passed through.
###Code
# helper function for visualizing the output of a given layer
# default number of filters is 4
def viz_layer(layer, n_filters= 4):
fig = plt.figure(figsize=(20, 20))
for i in range(n_filters):
ax = fig.add_subplot(1, n_filters, i+1)
# grab layer outputs
ax.imshow(np.squeeze(layer[0,i].data.numpy()), cmap='gray')
ax.set_title('Output %s' % str(i+1))
###Output
_____no_output_____
###Markdown
Let's look at the output of a convolutional layer after a ReLu activation function is applied. ReLu activationA ReLu function turns all negative pixel values in 0's (black). See the equation pictured below for input pixel values, `x`.
###Code
# plot original image
plt.imshow(gray_img, cmap='gray')
# visualize all filters
fig = plt.figure(figsize=(12, 6))
fig.subplots_adjust(left=0, right=1.5, bottom=0.8, top=1, hspace=0.05, wspace=0.05)
for i in range(4):
ax = fig.add_subplot(1, 4, i+1, xticks=[], yticks=[])
ax.imshow(filters[i], cmap='gray')
ax.set_title('Filter %s' % str(i+1))
# convert the image into an input Tensor
gray_img_tensor = torch.from_numpy(gray_img).unsqueeze(0).unsqueeze(1)
# get all the layers
conv_layer, activated_layer, pooled_layer = model(gray_img_tensor)
# visualize the output of the activated conv layer
viz_layer(activated_layer)
###Output
_____no_output_____
###Markdown
Visualize the output of the pooling layerThen, take a look at the output of a pooling layer. The pooling layer takes as input the feature maps pictured above and reduces the dimensionality of those maps, by some pooling factor, by constructing a new, smaller image of only the maximum (brightest) values in a given kernel area.Take a look at the values on the x, y axes to see how the image has changed size.
###Code
# visualize the output of the pooling layer
viz_layer(pooled_layer)
###Output
_____no_output_____
###Markdown
Maxpooling LayerIn this notebook, we add and visualize the output of a maxpooling layer in a CNN. A convolutional layer + activation function, followed by a pooling layer, and a linear layer (to create a desired output size) make up the basic layers of a CNN. Import the image
###Code
import cv2
import matplotlib.pyplot as plt
%matplotlib inline
# TODO: Feel free to try out your own images here by changing img_path
# to a file path to another image on your computer!
img_path = 'data/udacity_sdc.png'
# load color image
bgr_img = cv2.imread(img_path)
# convert to grayscale
gray_img = cv2.cvtColor(bgr_img, cv2.COLOR_BGR2GRAY)
# normalize, rescale entries to lie in [0,1]
gray_img = gray_img.astype("float32")/255
# plot image
plt.imshow(gray_img, cmap='gray')
plt.show()
###Output
_____no_output_____
###Markdown
Define and visualize the filters
###Code
import numpy as np
## TODO: Feel free to modify the numbers here, to try out another filter!
filter_vals = np.array([[-1, -1, 1, 1], [-1, -1, 1, 1], [-1, -1, 1, 1], [-1, -1, 1, 1]])
print('Filter shape: ', filter_vals.shape)
# Defining four different filters,
# all of which are linear combinations of the `filter_vals` defined above
# define four filters
filter_1 = filter_vals
filter_2 = -filter_1
filter_3 = filter_1.T
filter_4 = -filter_3
filters = np.array([filter_1, filter_2, filter_3, filter_4])
# For an example, print out the values of filter 1
print('Filter 1: \n', filter_1)
###Output
Filter 1:
[[-1 -1 1 1]
[-1 -1 1 1]
[-1 -1 1 1]
[-1 -1 1 1]]
###Markdown
Define convolutional and pooling layersYou've seen how to define a convolutional layer, next is a:* Pooling layerIn the next cell, we initialize a convolutional layer so that it contains all the created filters. Then add a maxpooling layer, [documented here](http://pytorch.org/docs/stable/_modules/torch/nn/modules/pooling.html), with a kernel size of (2x2) so you can see that the image resolution has been reduced after this step!A maxpooling layer reduces the x-y size of an input and only keeps the most *active* pixel values. Below is an example of a 2x2 pooling kernel, with a stride of 2, applied to a small patch of grayscale pixel values; reducing the x-y size of the patch by a factor of 2. Only the maximum pixel values in 2x2 remain in the new, pooled output.
###Code
import torch
import torch.nn as nn
import torch.nn.functional as F
# define a neural network with a convolutional layer with four filters
# AND a pooling layer of size (2, 2)
class Net(nn.Module):
def __init__(self, weight):
super(Net, self).__init__()
# initializes the weights of the convolutional layer to be the weights of the 4 defined filters
k_height, k_width = weight.shape[2:]
# assumes there are 4 grayscale filters
self.conv = nn.Conv2d(1, 4, kernel_size=(k_height, k_width), bias=False)
self.conv.weight = torch.nn.Parameter(weight)
# define a pooling layer
self.pool = nn.MaxPool2d(2, 2)
def forward(self, x):
# calculates the output of a convolutional layer
# pre- and post-activation
conv_x = self.conv(x)
activated_x = F.relu(conv_x)
# applies pooling layer
pooled_x = self.pool(activated_x)
# returns all layers
return conv_x, activated_x, pooled_x
# instantiate the model and set the weights
weight = torch.from_numpy(filters).unsqueeze(1).type(torch.FloatTensor)
model = Net(weight)
# print out the layer in the network
print(model)
###Output
Net(
(conv): Conv2d(1, 4, kernel_size=(4, 4), stride=(1, 1), bias=False)
(pool): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
)
###Markdown
Visualize the output of each filterFirst, we'll define a helper function, `viz_layer` that takes in a specific layer and number of filters (optional argument), and displays the output of that layer once an image has been passed through.
###Code
# helper function for visualizing the output of a given layer
# default number of filters is 4
def viz_layer(layer, n_filters= 4):
fig = plt.figure(figsize=(20, 20))
for i in range(n_filters):
ax = fig.add_subplot(1, n_filters, i+1)
# grab layer outputs
ax.imshow(np.squeeze(layer[0,i].data.numpy()), cmap='gray')
ax.set_title('Output %s' % str(i+1))
###Output
_____no_output_____
###Markdown
Let's look at the output of a convolutional layer after a ReLu activation function is applied. ReLu activationA ReLu function turns all negative pixel values in 0's (black). See the equation pictured below for input pixel values, `x`.
###Code
# plot original image
plt.imshow(gray_img, cmap='gray')
# visualize all filters
fig = plt.figure(figsize=(12, 6))
fig.subplots_adjust(left=0, right=1.5, bottom=0.8, top=1, hspace=0.05, wspace=0.05)
for i in range(4):
ax = fig.add_subplot(1, 4, i+1, xticks=[], yticks=[])
ax.imshow(filters[i], cmap='gray')
ax.set_title('Filter %s' % str(i+1))
# convert the image into an input Tensor
gray_img_tensor = torch.from_numpy(gray_img).unsqueeze(0).unsqueeze(1)
# get all the layers
conv_layer, activated_layer, pooled_layer = model(gray_img_tensor)
# visualize the output of the activated conv layer
viz_layer(activated_layer)
###Output
_____no_output_____
###Markdown
Visualize the output of the pooling layerThen, take a look at the output of a pooling layer. The pooling layer takes as input the feature maps pictured above and reduces the dimensionality of those maps, by some pooling factor, by constructing a new, smaller image of only the maximum (brightest) values in a given kernel area.Take a look at the values on the x, y axes to see how the image has changed size.
###Code
# visualize the output of the pooling layer
viz_layer(pooled_layer)
###Output
_____no_output_____
###Markdown
Maxpooling LayerIn this notebook, we add and visualize the output of a maxpooling layer in a CNN. A convolutional layer + activation function, followed by a pooling layer, and a linear layer (to create a desired output size) make up the basic layers of a CNN. Import the image
###Code
import cv2
import matplotlib.pyplot as plt
%matplotlib inline
# TODO: Feel free to try out your own images here by changing img_path
# to a file path to another image on your computer!
img_path = 'data/udacity_sdc.png'
# load color image
bgr_img = cv2.imread(img_path)
# convert to grayscale
gray_img = cv2.cvtColor(bgr_img, cv2.COLOR_BGR2GRAY)
# normalize, rescale entries to lie in [0,1]
gray_img = gray_img.astype("float32")/255
# plot image
plt.imshow(gray_img, cmap='gray')
plt.show()
###Output
_____no_output_____
###Markdown
Define and visualize the filters
###Code
import numpy as np
## TODO: Feel free to modify the numbers here, to try out another filter!
filter_vals = np.array([[-1, -1, 1, 1], [-1, -1, 1, 1], [-1, -1, 1, 1], [-1, -1, 1, 1]])
print('Filter shape: ', filter_vals.shape)
# Defining four different filters,
# all of which are linear combinations of the `filter_vals` defined above
# define four filters
filter_1 = filter_vals
filter_2 = -filter_1
filter_3 = filter_1.T
filter_4 = -filter_3
filters = np.array([filter_1, filter_2, filter_3, filter_4])
# For an example, print out the values of filter 1
print('Filter 1: \n', filter_1)
###Output
Filter 1:
[[-1 -1 1 1]
[-1 -1 1 1]
[-1 -1 1 1]
[-1 -1 1 1]]
###Markdown
Define convolutional and pooling layersYou've seen how to define a convolutional layer, next is a:* Pooling layerIn the next cell, we initialize a convolutional layer so that it contains all the created filters. Then add a maxpooling layer, [documented here](http://pytorch.org/docs/stable/_modules/torch/nn/modules/pooling.html), with a kernel size of (2x2) so you can see that the image resolution has been reduced after this step!A maxpooling layer reduces the x-y size of an input and only keeps the most *active* pixel values. Below is an example of a 2x2 pooling kernel, with a stride of 2, applied to a small patch of grayscale pixel values; reducing the x-y size of the patch by a factor of 2. Only the maximum pixel values in 2x2 remain in the new, pooled output.
###Code
import torch
import torch.nn as nn
import torch.nn.functional as F
# define a neural network with a convolutional layer with four filters
# AND a pooling layer of size (2, 2)
class Net(nn.Module):
def __init__(self, weight):
super(Net, self).__init__()
# initializes the weights of the convolutional layer to be the weights of the 4 defined filters
k_height, k_width = weight.shape[2:]
# assumes there are 4 grayscale filters
self.conv = nn.Conv2d(1, 4, kernel_size=(k_height, k_width), bias=False)
self.conv.weight = torch.nn.Parameter(weight)
# define a pooling layer
self.pool = nn.MaxPool2d(2, 2)
def forward(self, x):
# calculates the output of a convolutional layer
# pre- and post-activation
conv_x = self.conv(x)
activated_x = F.relu(conv_x)
# applies pooling layer
pooled_x = self.pool(activated_x)
# returns all layers
return conv_x, activated_x, pooled_x
# instantiate the model and set the weights
weight = torch.from_numpy(filters).unsqueeze(1).type(torch.FloatTensor)
model = Net(weight)
# print out the layer in the network
print(model)
###Output
Net(
(conv): Conv2d(1, 4, kernel_size=(4, 4), stride=(1, 1), bias=False)
(pool): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
)
###Markdown
Visualize the output of each filterFirst, we'll define a helper function, `viz_layer` that takes in a specific layer and number of filters (optional argument), and displays the output of that layer once an image has been passed through.
###Code
# helper function for visualizing the output of a given layer
# default number of filters is 4
def viz_layer(layer, n_filters= 4):
fig = plt.figure(figsize=(20, 20))
for i in range(n_filters):
ax = fig.add_subplot(1, n_filters, i+1)
# grab layer outputs
ax.imshow(np.squeeze(layer[0,i].data.numpy()), cmap='gray')
ax.set_title('Output %s' % str(i+1))
###Output
_____no_output_____
###Markdown
Let's look at the output of a convolutional layer after a ReLu activation function is applied. ReLu activationA ReLu function turns all negative pixel values in 0's (black). See the equation pictured below for input pixel values, `x`.
###Code
# plot original image
plt.imshow(gray_img, cmap='gray')
# visualize all filters
fig = plt.figure(figsize=(12, 6))
fig.subplots_adjust(left=0, right=1.5, bottom=0.8, top=1, hspace=0.05, wspace=0.05)
for i in range(4):
ax = fig.add_subplot(1, 4, i+1, xticks=[], yticks=[])
ax.imshow(filters[i], cmap='gray')
ax.set_title('Filter %s' % str(i+1))
# convert the image into an input Tensor
gray_img_tensor = torch.from_numpy(gray_img).unsqueeze(0).unsqueeze(1)
# get all the layers
conv_layer, activated_layer, pooled_layer = model(gray_img_tensor)
# visualize the output of the activated conv layer
viz_layer(activated_layer)
###Output
_____no_output_____
###Markdown
Visualize the output of the pooling layerThen, take a look at the output of a pooling layer. The pooling layer takes as input the feature maps pictured above and reduces the dimensionality of those maps, by some pooling factor, by constructing a new, smaller image of only the maximum (brightest) values in a given kernel area.Take a look at the values on the x, y axes to see how the image has changed size.
###Code
# visualize the output of the pooling layer
viz_layer(pooled_layer)
###Output
_____no_output_____
###Markdown
Maxpooling LayerIn this notebook, we add and visualize the output of a maxpooling layer in a CNN. A convolutional layer + activation function, followed by a pooling layer, and a linear layer (to create a desired output size) make up the basic layers of a CNN. Import the image
###Code
import cv2
import matplotlib.pyplot as plt
%matplotlib inline
# TODO: Feel free to try out your own images here by changing img_path
# to a file path to another image on your computer!
img_path = 'data/udacity_sdc.png'
# load color image
bgr_img = cv2.imread(img_path)
# convert to grayscale
gray_img = cv2.cvtColor(bgr_img, cv2.COLOR_BGR2GRAY)
# normalize, rescale entries to lie in [0,1]
gray_img = gray_img.astype("float32")/255
# plot image
plt.imshow(gray_img, cmap='gray')
plt.show()
###Output
_____no_output_____
###Markdown
Define and visualize the filters
###Code
import numpy as np
## TODO: Feel free to modify the numbers here, to try out another filter!
filter_vals = np.array([[-1, -1, 1, 1], [-1, -1, 1, 1], [-1, -1, 1, 1], [-1, -1, 1, 1]])
print('Filter shape: ', filter_vals.shape)
# Defining four different filters,
# all of which are linear combinations of the `filter_vals` defined above
# define four filters
filter_1 = filter_vals
filter_2 = -filter_1
filter_3 = filter_1.T
filter_4 = -filter_3
filters = np.array([filter_1, filter_2, filter_3, filter_4])
# For an example, print out the values of filter 1
print('Filter 1: \n', filter_1)
###Output
_____no_output_____
###Markdown
Define convolutional and pooling layersYou've seen how to define a convolutional layer, next is a:* Pooling layerIn the next cell, we initialize a convolutional layer so that it contains all the created filters. Then add a maxpooling layer, [documented here](http://pytorch.org/docs/stable/_modules/torch/nn/modules/pooling.html), with a kernel size of (2x2) so you can see that the image resolution has been reduced after this step!A maxpooling layer reduces the x-y size of an input and only keeps the most *active* pixel values. Below is an example of a 2x2 pooling kernel, with a stride of 2, applied to a small patch of grayscale pixel values; reducing the x-y size of the patch by a factor of 2. Only the maximum pixel values in 2x2 remain in the new, pooled output.
###Code
import torch
import torch.nn as nn
import torch.nn.functional as F
# define a neural network with a convolutional layer with four filters
# AND a pooling layer of size (2, 2)
class Net(nn.Module):
def __init__(self, weight):
super(Net, self).__init__()
# initializes the weights of the convolutional layer to be the weights of the 4 defined filters
k_height, k_width = weight.shape[2:]
# assumes there are 4 grayscale filters
self.conv = nn.Conv2d(1, 4, kernel_size=(k_height, k_width), bias=False)
self.conv.weight = torch.nn.Parameter(weight)
# define a pooling layer
self.pool = nn.MaxPool2d(2, 2)
def forward(self, x):
# calculates the output of a convolutional layer
# pre- and post-activation
conv_x = self.conv(x)
activated_x = F.relu(conv_x)
# applies pooling layer
pooled_x = self.pool(activated_x)
# returns all layers
return conv_x, activated_x, pooled_x
# instantiate the model and set the weights
weight = torch.from_numpy(filters).unsqueeze(1).type(torch.FloatTensor)
model = Net(weight)
# print out the layer in the network
print(model)
###Output
_____no_output_____
###Markdown
Visualize the output of each filterFirst, we'll define a helper function, `viz_layer` that takes in a specific layer and number of filters (optional argument), and displays the output of that layer once an image has been passed through.
###Code
# helper function for visualizing the output of a given layer
# default number of filters is 4
def viz_layer(layer, n_filters= 4):
fig = plt.figure(figsize=(20, 20))
for i in range(n_filters):
ax = fig.add_subplot(1, n_filters, i+1)
# grab layer outputs
ax.imshow(np.squeeze(layer[0,i].data.numpy()), cmap='gray')
ax.set_title('Output %s' % str(i+1))
###Output
_____no_output_____
###Markdown
Let's look at the output of a convolutional layer after a ReLu activation function is applied. ReLu activationA ReLu function turns all negative pixel values in 0's (black). See the equation pictured below for input pixel values, `x`.
###Code
# plot original image
plt.imshow(gray_img, cmap='gray')
# visualize all filters
fig = plt.figure(figsize=(12, 6))
fig.subplots_adjust(left=0, right=1.5, bottom=0.8, top=1, hspace=0.05, wspace=0.05)
for i in range(4):
ax = fig.add_subplot(1, 4, i+1, xticks=[], yticks=[])
ax.imshow(filters[i], cmap='gray')
ax.set_title('Filter %s' % str(i+1))
# convert the image into an input Tensor
gray_img_tensor = torch.from_numpy(gray_img).unsqueeze(0).unsqueeze(1)
# get all the layers
conv_layer, activated_layer, pooled_layer = model(gray_img_tensor)
# visualize the output of the activated conv layer
viz_layer(activated_layer)
###Output
_____no_output_____
###Markdown
Visualize the output of the pooling layerThen, take a look at the output of a pooling layer. The pooling layer takes as input the feature maps pictured above and reduces the dimensionality of those maps, by some pooling factor, by constructing a new, smaller image of only the maximum (brightest) values in a given kernel area.Take a look at the values on the x, y axes to see how the image has changed size.
###Code
# visualize the output of the pooling layer
viz_layer(pooled_layer)
###Output
_____no_output_____
###Markdown
Maxpooling LayerIn this notebook, we add and visualize the output of a maxpooling layer in a CNN. A convolutional layer + activation function, followed by a pooling layer, and a linear layer (to create a desired output size) make up the basic layers of a CNN. Import the image
###Code
import cv2
import matplotlib.pyplot as plt
%matplotlib inline
# TODO: Feel free to try out your own images here by changing img_path
# to a file path to another image on your computer!
img_path = 'data/udacity_sdc.png'
# load color image
bgr_img = cv2.imread(img_path)
# convert to grayscale
gray_img = cv2.cvtColor(bgr_img, cv2.COLOR_BGR2GRAY)
# normalize, rescale entries to lie in [0,1]
gray_img = gray_img.astype("float32")/255
# plot image
plt.imshow(gray_img, cmap='gray')
plt.show()
###Output
_____no_output_____
###Markdown
Define and visualize the filters
###Code
import numpy as np
## TODO: Feel free to modify the numbers here, to try out another filter!
filter_vals = np.array([[-1, -1, 1, 1], [-1, -1, 1, 1], [-1, -1, 1, 1], [-1, -1, 1, 1]])
print('Filter shape: ', filter_vals.shape)
# Defining four different filters,
# all of which are linear combinations of the `filter_vals` defined above
# define four filters
filter_1 = filter_vals
filter_2 = -filter_1
filter_3 = filter_1.T
filter_4 = -filter_3
filters = np.array([filter_1, filter_2, filter_3, filter_4])
# For an example, print out the values of filter 1
print('Filter 1: \n', filter_1)
###Output
Filter 1:
[[-1 -1 1 1]
[-1 -1 1 1]
[-1 -1 1 1]
[-1 -1 1 1]]
###Markdown
Define convolutional and pooling layersYou've seen how to define a convolutional layer, next is a:* Pooling layerIn the next cell, we initialize a convolutional layer so that it contains all the created filters. Then add a maxpooling layer, [documented here](http://pytorch.org/docs/stable/_modules/torch/nn/modules/pooling.html), with a kernel size of (2x2) so you can see that the image resolution has been reduced after this step!A maxpooling layer reduces the x-y size of an input and only keeps the most *active* pixel values. Below is an example of a 2x2 pooling kernel, with a stride of 2, appied to a small patch of grayscale pixel values; reducing the x-y size of the patch by a factor of 2. Only the maximum pixel values in 2x2 remain in the new, pooled output.
###Code
import torch
import torch.nn as nn
import torch.nn.functional as F
# define a neural network with a convolutional layer with four filters
# AND a pooling layer of size (2, 2)
class Net(nn.Module):
def __init__(self, weight):
super(Net, self).__init__()
# initializes the weights of the convolutional layer to be the weights of the 4 defined filters
k_height, k_width = weight.shape[2:]
# assumes there are 4 grayscale filters
self.conv = nn.Conv2d(1, 4, kernel_size=(k_height, k_width), bias=False)
self.conv.weight = torch.nn.Parameter(weight)
# define a pooling layer
self.pool = nn.MaxPool2d(2, 2)
def forward(self, x):
# calculates the output of a convolutional layer
# pre- and post-activation
conv_x = self.conv(x)
activated_x = F.relu(conv_x)
# applies pooling layer
pooled_x = self.pool(activated_x)
# returns all layers
return conv_x, activated_x, pooled_x
# instantiate the model and set the weights
weight = torch.from_numpy(filters).unsqueeze(1).type(torch.FloatTensor)
model = Net(weight)
# print out the layer in the network
print(model)
###Output
Net(
(conv): Conv2d(1, 4, kernel_size=(4, 4), stride=(1, 1), bias=False)
(pool): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
)
###Markdown
Visualize the output of each filterFirst, we'll define a helper function, `viz_layer` that takes in a specific layer and number of filters (optional argument), and displays the output of that layer once an image has been passed through.
###Code
# helper function for visualizing the output of a given layer
# default number of filters is 4
def viz_layer(layer, n_filters= 4):
fig = plt.figure(figsize=(20, 20))
for i in range(n_filters):
ax = fig.add_subplot(1, n_filters, i+1)
# grab layer outputs
ax.imshow(np.squeeze(layer[0,i].data.numpy()), cmap='gray')
ax.set_title('Output %s' % str(i+1))
###Output
_____no_output_____
###Markdown
Let's look at the output of a convolutional layer after a ReLu activation function is applied. ReLu activationA ReLu function turns all negative pixel values in 0's (black). See the equation pictured below for input pixel values, `x`.
###Code
# plot original image
plt.imshow(gray_img, cmap='gray')
# visualize all filters
fig = plt.figure(figsize=(12, 6))
fig.subplots_adjust(left=0, right=1.5, bottom=0.8, top=1, hspace=0.05, wspace=0.05)
for i in range(4):
ax = fig.add_subplot(1, 4, i+1, xticks=[], yticks=[])
ax.imshow(filters[i], cmap='gray')
ax.set_title('Filter %s' % str(i+1))
# convert the image into an input Tensor
gray_img_tensor = torch.from_numpy(gray_img).unsqueeze(0).unsqueeze(1)
# get all the layers
conv_layer, activated_layer, pooled_layer = model(gray_img_tensor)
# visualize the output of the activated conv layer
viz_layer(activated_layer)
###Output
_____no_output_____
###Markdown
Visualize the output of the pooling layerThen, take a look at the output of a pooling layer. The pooling layer takes as input the feature maps pictured above and reduces the dimensionality of those maps, by some pooling factor, by constructing a new, smaller image of only the maximum (brightest) values in a given kernel area.Take a look at the values on the x, y axes to see how the image has changed size.
###Code
# visualize the output of the pooling layer
viz_layer(pooled_layer)
###Output
_____no_output_____
###Markdown
Maxpooling LayerIn this notebook, we add and visualize the output of a maxpooling layer in a CNN. A convolutional layer + activation function, followed by a pooling layer, and a linear layer (to create a desired output size) make up the basic layers of a CNN. Import the image
###Code
import cv2
import matplotlib.pyplot as plt
%matplotlib inline
# TODO: Feel free to try out your own images here by changing img_path
# to a file path to another image on your computer!
img_path = 'data/udacity_sdc.png'
# load color image
bgr_img = cv2.imread(img_path)
# convert to grayscale
gray_img = cv2.cvtColor(bgr_img, cv2.COLOR_BGR2GRAY)
# normalize, rescale entries to lie in [0,1]
gray_img = gray_img.astype("float32")/255
# plot image
plt.imshow(gray_img, cmap='gray')
plt.show()
###Output
_____no_output_____
###Markdown
Define and visualize the filters
###Code
import numpy as np
## TODO: Feel free to modify the numbers here, to try out another filter!
filter_vals = np.array([[-1, -1, 1, 1], [-1, -1, 1, 1], [-1, -1, 1, 1], [-1, -1, 1, 1]])
print('Filter shape: ', filter_vals.shape)
# Defining four different filters,
# all of which are linear combinations of the `filter_vals` defined above
# define four filters
filter_1 = filter_vals
filter_2 = -filter_1
filter_3 = filter_1.T
filter_4 = -filter_3
filters = np.array([filter_1, filter_2, filter_3, filter_4])
# For an example, print out the values of filter 1
print('Filter 1: \n', filter_1)
###Output
Filter 1:
[[-1 -1 1 1]
[-1 -1 1 1]
[-1 -1 1 1]
[-1 -1 1 1]]
###Markdown
Define convolutional and pooling layersYou've seen how to define a convolutional layer, next is a:* Pooling layerIn the next cell, we initialize a convolutional layer so that it contains all the created filters. Then add a maxpooling layer, [documented here](http://pytorch.org/docs/stable/_modules/torch/nn/modules/pooling.html), with a kernel size of (2x2) so you can see that the image resolution has been reduced after this step!A maxpooling layer reduces the x-y size of an input and only keeps the most *active* pixel values. Below is an example of a 2x2 pooling kernel, with a stride of 2, appied to a small patch of grayscale pixel values; reducing the x-y size of the patch by a factor of 2. Only the maximum pixel values in 2x2 remain in the new, pooled output.
###Code
import torch
import torch.nn as nn
import torch.nn.functional as F
# define a neural network with a convolutional layer with four filters
# AND a pooling layer of size (2, 2)
class Net(nn.Module):
def __init__(self, weight):
super(Net, self).__init__()
# initializes the weights of the convolutional layer to be the weights of the 4 defined filters
k_height, k_width = weight.shape[2:]
# assumes there are 4 grayscale filters
self.conv = nn.Conv2d(1, 4, kernel_size=(k_height, k_width), bias=False)
self.conv.weight = torch.nn.Parameter(weight)
# define a pooling layer
self.pool = nn.MaxPool2d(2, 2)
def forward(self, x):
# calculates the output of a convolutional layer
# pre- and post-activation
conv_x = self.conv(x)
activated_x = F.relu(conv_x)
# applies pooling layer
pooled_x = self.pool(activated_x)
# returns all layers
return conv_x, activated_x, pooled_x
# instantiate the model and set the weights
weight = torch.from_numpy(filters).unsqueeze(1).type(torch.FloatTensor)
model = Net(weight)
# print out the layer in the network
print(model)
###Output
Net(
(conv): Conv2d(1, 4, kernel_size=(4, 4), stride=(1, 1), bias=False)
(pool): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
)
###Markdown
Visualize the output of each filterFirst, we'll define a helper function, `viz_layer` that takes in a specific layer and number of filters (optional argument), and displays the output of that layer once an image has been passed through.
###Code
# helper function for visualizing the output of a given layer
# default number of filters is 4
def viz_layer(layer, n_filters= 4):
fig = plt.figure(figsize=(20, 20))
for i in range(n_filters):
ax = fig.add_subplot(1, n_filters, i+1)
# grab layer outputs
ax.imshow(np.squeeze(layer[0,i].data.numpy()), cmap='gray')
ax.set_title('Output %s' % str(i+1))
###Output
_____no_output_____
###Markdown
Let's look at the output of a convolutional layer after a ReLu activation function is applied. ReLu activationA ReLu function turns all negative pixel values in 0's (black). See the equation pictured below for input pixel values, `x`.
###Code
# plot original image
plt.imshow(gray_img, cmap='gray')
# visualize all filters
fig = plt.figure(figsize=(12, 6))
fig.subplots_adjust(left=0, right=1.5, bottom=0.8, top=1, hspace=0.05, wspace=0.05)
for i in range(4):
ax = fig.add_subplot(1, 4, i+1, xticks=[], yticks=[])
ax.imshow(filters[i], cmap='gray')
ax.set_title('Filter %s' % str(i+1))
# convert the image into an input Tensor
gray_img_tensor = torch.from_numpy(gray_img).unsqueeze(0).unsqueeze(1)
# get all the layers
conv_layer, activated_layer, pooled_layer = model(gray_img_tensor)
# visualize the output of the activated conv layer
viz_layer(activated_layer)
###Output
_____no_output_____
###Markdown
Visualize the output of the pooling layerThen, take a look at the output of a pooling layer. The pooling layer takes as input the feature maps pictured above and reduces the dimensionality of those maps, by some pooling factor, by constructing a new, smaller image of only the maximum (brightest) values in a given kernel area.Take a look at the values on the x, y axes to see how the image has changed size.
###Code
# visualize the output of the pooling layer
viz_layer(pooled_layer)
###Output
_____no_output_____
###Markdown
Maxpooling LayerIn this notebook, we add and visualize the output of a maxpooling layer in a CNN. A convolutional layer + activation function, followed by a pooling layer, and a linear layer (to create a desired output size) make up the basic layers of a CNN. Import the image
###Code
import cv2
import matplotlib.pyplot as plt
%matplotlib inline
# TODO: Feel free to try out your own images here by changing img_path
# to a file path to another image on your computer!
img_path = 'data/udacity_sdc.png'
# load color image
bgr_img = cv2.imread(img_path)
# convert to grayscale
gray_img = cv2.cvtColor(bgr_img, cv2.COLOR_BGR2GRAY)
# normalize, rescale entries to lie in [0,1]
gray_img = gray_img.astype("float32")/255
# plot image
plt.imshow(gray_img, cmap='gray')
plt.show()
###Output
_____no_output_____
###Markdown
Define and visualize the filters
###Code
import numpy as np
## TODO: Feel free to modify the numbers here, to try out another filter!
filter_vals = np.array([[-1, -1, 1, 1], [-1, -1, 1, 1], [-1, -1, 1, 1], [-1, -1, 1, 1]])
print('Filter shape: ', filter_vals.shape)
# Defining four different filters,
# all of which are linear combinations of the `filter_vals` defined above
# define four filters
filter_1 = filter_vals
filter_2 = -filter_1
filter_3 = filter_1.T
filter_4 = -filter_3
filters = np.array([filter_1, filter_2, filter_3, filter_4])
# For an example, print out the values of filter 1
print('Filter 1: \n', filter_1)
###Output
Filter 1:
[[-1 -1 1 1]
[-1 -1 1 1]
[-1 -1 1 1]
[-1 -1 1 1]]
###Markdown
Define convolutional and pooling layersYou've seen how to define a convolutional layer, next is a:* Pooling layerIn the next cell, we initialize a convolutional layer so that it contains all the created filters. Then add a maxpooling layer, [documented here](http://pytorch.org/docs/stable/_modules/torch/nn/modules/pooling.html), with a kernel size of (2x2) so you can see that the image resolution has been reduced after this step!A maxpooling layer reduces the x-y size of an input and only keeps the most *active* pixel values. Below is an example of a 2x2 pooling kernel, with a stride of 2, appied to a small patch of grayscale pixel values; reducing the x-y size of the patch by a factor of 2. Only the maximum pixel values in 2x2 remain in the new, pooled output.
###Code
import torch
import torch.nn as nn
import torch.nn.functional as F
# define a neural network with a convolutional layer with four filters
# AND a pooling layer of size (2, 2)
class Net(nn.Module):
def __init__(self, weight):
super(Net, self).__init__()
# initializes the weights of the convolutional layer to be the weights of the 4 defined filters
k_height, k_width = weight.shape[2:]
# assumes there are 4 grayscale filters
self.conv = nn.Conv2d(1, 4, kernel_size=(k_height, k_width), bias=False)
self.conv.weight = torch.nn.Parameter(weight)
# define a pooling layer
self.pool = nn.MaxPool2d(2, 2)
def forward(self, x):
# calculates the output of a convolutional layer
# pre- and post-activation
conv_x = self.conv(x)
activated_x = F.relu(conv_x)
# applies pooling layer
pooled_x = self.pool(activated_x)
# returns all layers
return conv_x, activated_x, pooled_x
# instantiate the model and set the weights
weight = torch.from_numpy(filters).unsqueeze(1).type(torch.FloatTensor)
model = Net(weight)
# print out the layer in the network
print(model)
###Output
Net(
(conv): Conv2d(1, 4, kernel_size=(4, 4), stride=(1, 1), bias=False)
(pool): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
)
###Markdown
Visualize the output of each filterFirst, we'll define a helper function, `viz_layer` that takes in a specific layer and number of filters (optional argument), and displays the output of that layer once an image has been passed through.
###Code
# helper function for visualizing the output of a given layer
# default number of filters is 4
def viz_layer(layer, n_filters= 4):
fig = plt.figure(figsize=(20, 20))
for i in range(n_filters):
ax = fig.add_subplot(1, n_filters, i+1)
# grab layer outputs
ax.imshow(np.squeeze(layer[0,i].data.numpy()), cmap='gray')
ax.set_title('Output %s' % str(i+1))
###Output
_____no_output_____
###Markdown
Let's look at the output of a convolutional layer after a ReLu activation function is applied. ReLu activationA ReLu function turns all negative pixel values in 0's (black). See the equation pictured below for input pixel values, `x`.
###Code
# plot original image
plt.imshow(gray_img, cmap='gray')
# visualize all filters
fig = plt.figure(figsize=(12, 6))
fig.subplots_adjust(left=0, right=1.5, bottom=0.8, top=1, hspace=0.05, wspace=0.05)
for i in range(4):
ax = fig.add_subplot(1, 4, i+1, xticks=[], yticks=[])
ax.imshow(filters[i], cmap='gray')
ax.set_title('Filter %s' % str(i+1))
# convert the image into an input Tensor
gray_img_tensor = torch.from_numpy(gray_img).unsqueeze(0).unsqueeze(1)
# get all the layers
conv_layer, activated_layer, pooled_layer = model(gray_img_tensor)
# visualize the output of the activated conv layer
viz_layer(activated_layer)
###Output
_____no_output_____
###Markdown
Visualize the output of the pooling layerThen, take a look at the output of a pooling layer. The pooling layer takes as input the feature maps pictured above and reduces the dimensionality of those maps, by some pooling factor, by constructing a new, smaller image of only the maximum (brightest) values in a given kernel area.Take a look at the values on the x, y axes to see how the image has changed size.
###Code
# visualize the output of the pooling layer
viz_layer(pooled_layer)
###Output
_____no_output_____
###Markdown
Maxpooling LayerIn this notebook, we add and visualize the output of a maxpooling layer in a CNN. A convolutional layer + activation function, followed by a pooling layer, and a linear layer (to create a desired output size) make up the basic layers of a CNN. Import the image
###Code
import cv2
import matplotlib.pyplot as plt
%matplotlib inline
# TODO: Feel free to try out your own images here by changing img_path
# to a file path to another image on your computer!
img_path = 'data/udacity_sdc.png'
# load color image
bgr_img = cv2.imread(img_path)
# convert to grayscale
gray_img = cv2.cvtColor(bgr_img, cv2.COLOR_BGR2GRAY)
# normalize, rescale entries to lie in [0,1]
gray_img = gray_img.astype("float32")/255
# plot image
plt.imshow(gray_img, cmap='gray')
plt.show()
###Output
_____no_output_____
###Markdown
Define and visualize the filters
###Code
import numpy as np
## TODO: Feel free to modify the numbers here, to try out another filter!
filter_vals = np.array([[-1, -1, 1, 1], [-1, -1, 1, 1], [-1, -1, 1, 1], [-1, -1, 1, 1]])
print('Filter shape: ', filter_vals.shape)
# Defining four different filters,
# all of which are linear combinations of the `filter_vals` defined above
# define four filters
filter_1 = filter_vals
filter_2 = -filter_1
filter_3 = filter_1.T
filter_4 = -filter_3
filters = np.array([filter_1, filter_2, filter_3, filter_4])
# For an example, print out the values of filter 1
print('Filter 1: \n', filter_1)
###Output
Filter 1:
[[-1 -1 1 1]
[-1 -1 1 1]
[-1 -1 1 1]
[-1 -1 1 1]]
###Markdown
Define convolutional and pooling layersYou've seen how to define a convolutional layer, next is a:* Pooling layerIn the next cell, we initialize a convolutional layer so that it contains all the created filters. Then add a maxpooling layer, [documented here](http://pytorch.org/docs/stable/_modules/torch/nn/modules/pooling.html), with a kernel size of (2x2) so you can see that the image resolution has been reduced after this step!A maxpooling layer reduces the x-y size of an input and only keeps the most *active* pixel values. Below is an example of a 2x2 pooling kernel, with a stride of 2, applied to a small patch of grayscale pixel values; reducing the x-y size of the patch by a factor of 2. Only the maximum pixel values in 2x2 remain in the new, pooled output.
###Code
import torch
import torch.nn as nn
import torch.nn.functional as F
# define a neural network with a convolutional layer with four filters
# AND a pooling layer of size (2, 2)
class Net(nn.Module):
def __init__(self, weight):
super(Net, self).__init__()
# initializes the weights of the convolutional layer to be the weights of the 4 defined filters
k_height, k_width = weight.shape[2:]
# assumes there are 4 grayscale filters
self.conv = nn.Conv2d(1, 4, kernel_size=(k_height, k_width), bias=False)
self.conv.weight = torch.nn.Parameter(weight)
# define a pooling layer
self.pool = nn.MaxPool2d(2, 2)
def forward(self, x):
# calculates the output of a convolutional layer
# pre- and post-activation
conv_x = self.conv(x)
activated_x = F.relu(conv_x)
# applies pooling layer
pooled_x = self.pool(activated_x)
# returns all layers
return conv_x, activated_x, pooled_x
# instantiate the model and set the weights
weight = torch.from_numpy(filters).unsqueeze(1).type(torch.FloatTensor)
model = Net(weight)
# print out the layer in the network
print(model)
###Output
Net(
(conv): Conv2d(1, 4, kernel_size=(4, 4), stride=(1, 1), bias=False)
(pool): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
)
###Markdown
Visualize the output of each filterFirst, we'll define a helper function, `viz_layer` that takes in a specific layer and number of filters (optional argument), and displays the output of that layer once an image has been passed through.
###Code
# helper function for visualizing the output of a given layer
# default number of filters is 4
def viz_layer(layer, n_filters= 4):
fig = plt.figure(figsize=(20, 20))
for i in range(n_filters):
ax = fig.add_subplot(1, n_filters, i+1)
# grab layer outputs
ax.imshow(np.squeeze(layer[0,i].data.numpy()), cmap='gray')
ax.set_title('Output %s' % str(i+1))
###Output
_____no_output_____
###Markdown
Let's look at the output of a convolutional layer after a ReLu activation function is applied. ReLu activationA ReLu function turns all negative pixel values in 0's (black). See the equation pictured below for input pixel values, `x`.
###Code
# plot original image
plt.imshow(gray_img, cmap='gray')
# visualize all filters
fig = plt.figure(figsize=(12, 6))
fig.subplots_adjust(left=0, right=1.5, bottom=0.8, top=1, hspace=0.05, wspace=0.05)
for i in range(4):
ax = fig.add_subplot(1, 4, i+1, xticks=[], yticks=[])
ax.imshow(filters[i], cmap='gray')
ax.set_title('Filter %s' % str(i+1))
# convert the image into an input Tensor
gray_img_tensor = torch.from_numpy(gray_img).unsqueeze(0).unsqueeze(1)
# get all the layers
conv_layer, activated_layer, pooled_layer = model(gray_img_tensor)
# visualize the output of the activated conv layer
viz_layer(activated_layer)
###Output
_____no_output_____
###Markdown
Visualize the output of the pooling layerThen, take a look at the output of a pooling layer. The pooling layer takes as input the feature maps pictured above and reduces the dimensionality of those maps, by some pooling factor, by constructing a new, smaller image of only the maximum (brightest) values in a given kernel area.Take a look at the values on the x, y axes to see how the image has changed size.
###Code
# visualize the output of the pooling layer
viz_layer(pooled_layer)
###Output
_____no_output_____
###Markdown
Maxpooling LayerIn this notebook, we add and visualize the output of a maxpooling layer in a CNN. A convolutional layer + activation function, followed by a pooling layer, and a linear layer (to create a desired output size) make up the basic layers of a CNN. Import the image
###Code
import cv2
import matplotlib.pyplot as plt
%matplotlib inline
# TODO: Feel free to try out your own images here by changing img_path
# to a file path to another image on your computer!
img_path = 'data/udacity_sdc.png'
# load color image
bgr_img = cv2.imread(img_path)
# convert to grayscale
gray_img = cv2.cvtColor(bgr_img, cv2.COLOR_BGR2GRAY)
# normalize, rescale entries to lie in [0,1]
gray_img = gray_img.astype("float32")/255
# plot image
plt.imshow(gray_img, cmap='gray')
plt.show()
###Output
_____no_output_____
###Markdown
Define and visualize the filters
###Code
import numpy as np
## TODO: Feel free to modify the numbers here, to try out another filter!
filter_vals = np.array([[-1, -1, 1, 1], [-1, -1, 1, 1], [-1, -1, 1, 1], [-1, -1, 1, 1]])
print('Filter shape: ', filter_vals.shape)
# Defining four different filters,
# all of which are linear combinations of the `filter_vals` defined above
# define four filters
filter_1 = filter_vals
filter_2 = -filter_1
filter_3 = filter_1.T
filter_4 = -filter_3
filters = np.array([filter_1, filter_2, filter_3, filter_4])
# For an example, print out the values of filter 1
print('Filter 1: \n', filter_1)
###Output
Filter 1:
[[-1 -1 1 1]
[-1 -1 1 1]
[-1 -1 1 1]
[-1 -1 1 1]]
###Markdown
Define convolutional and pooling layersYou've seen how to define a convolutional layer, next is a:* Pooling layerIn the next cell, we initialize a convolutional layer so that it contains all the created filters. Then add a maxpooling layer, [documented here](http://pytorch.org/docs/stable/_modules/torch/nn/modules/pooling.html), with a kernel size of (2x2) so you can see that the image resolution has been reduced after this step!A maxpooling layer reduces the x-y size of an input and only keeps the most *active* pixel values. Below is an example of a 2x2 pooling kernel, with a stride of 2, applied to a small patch of grayscale pixel values; reducing the x-y size of the patch by a factor of 2. Only the maximum pixel values in 2x2 remain in the new, pooled output.
###Code
import torch
import torch.nn as nn
import torch.nn.functional as F
# define a neural network with a convolutional layer with four filters
# AND a pooling layer of size (2, 2)
class Net(nn.Module):
def __init__(self, weight):
super(Net, self).__init__()
# initializes the weights of the convolutional layer to be the weights of the 4 defined filters
k_height, k_width = weight.shape[2:]
# assumes there are 4 grayscale filters
self.conv = nn.Conv2d(1, 4, kernel_size=(k_height, k_width), bias=False)
self.conv.weight = torch.nn.Parameter(weight)
# define a pooling layer
self.pool = nn.MaxPool2d(2, 2)
def forward(self, x):
# calculates the output of a convolutional layer
# pre- and post-activation
conv_x = self.conv(x)
activated_x = F.relu(conv_x)
# applies pooling layer
pooled_x = self.pool(activated_x)
# returns all layers
return conv_x, activated_x, pooled_x
# instantiate the model and set the weights
weight = torch.from_numpy(filters).unsqueeze(1).type(torch.FloatTensor)
model = Net(weight)
# print out the layer in the network
print(model)
###Output
Net(
(conv): Conv2d(1, 4, kernel_size=(4, 4), stride=(1, 1), bias=False)
(pool): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
)
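###Markdown
As a quick sanity check on the shapes, the usual size formula for a convolution is out = (in + 2*pad - k) / stride + 1, and the 2x2, stride-2 pool then halves each spatial dimension. A sketch, reusing the `gray_img` loaded above:
###Code
# expected spatial sizes: 4x4 conv (no padding, stride 1), then 2x2 maxpool (stride 2)
def conv_out(size, k, stride=1, pad=0):
    return (size + 2 * pad - k) // stride + 1

h, w = gray_img.shape
conv_h, conv_w = conv_out(h, 4), conv_out(w, 4)
print('input size:        ', (h, w))
print('after 4x4 conv:    ', (conv_h, conv_w))
print('after 2x2 maxpool: ', (conv_h // 2, conv_w // 2))
###Output
_____no_output_____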
###Markdown
Visualize the output of each filter
First, we'll define a helper function, `viz_layer`, that takes in a specific layer and a number of filters (optional argument) and displays the output of that layer once an image has been passed through.
###Code
# helper function for visualizing the output of a given layer
# default number of filters is 4
def viz_layer(layer, n_filters=4):
fig = plt.figure(figsize=(20, 20))
for i in range(n_filters):
ax = fig.add_subplot(1, n_filters, i+1)
# grab layer outputs
ax.imshow(np.squeeze(layer[0,i].data.numpy()), cmap='gray')
ax.set_title('Output %s' % str(i+1))
###Output
_____no_output_____
###Markdown
Let's look at the output of a convolutional layer after a ReLU activation function is applied.
ReLU activation
A ReLU function turns all negative pixel values into 0's (black). See the equation pictured below for input pixel values, `x`.
###Code
# plot original image
plt.imshow(gray_img, cmap='gray')
# visualize all filters
fig = plt.figure(figsize=(12, 6))
fig.subplots_adjust(left=0, right=1.5, bottom=0.8, top=1, hspace=0.05, wspace=0.05)
for i in range(4):
ax = fig.add_subplot(1, 4, i+1, xticks=[], yticks=[])
ax.imshow(filters[i], cmap='gray')
ax.set_title('Filter %s' % str(i+1))
# convert the image into an input Tensor
gray_img_tensor = torch.from_numpy(gray_img).unsqueeze(0).unsqueeze(1)
# get all the layers
conv_layer, activated_layer, pooled_layer = model(gray_img_tensor)
# visualize the output of the activated conv layer
viz_layer(activated_layer)
###Output
_____no_output_____
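###Markdown
To see the ReLU rule from the text on raw numbers (negatives clamped to 0, positives unchanged), a one-line sketch:
###Code
# ReLU zeroes out negative values and passes positives through unchanged
x = torch.tensor([-2.0, -0.5, 0.0, 0.3, 1.7])
print(F.relu(x))  # tensor([0.0000, 0.0000, 0.0000, 0.3000, 1.7000])
###Output
_____no_output_____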
###Markdown
Visualize the output of the pooling layer
Then, take a look at the output of a pooling layer. The pooling layer takes as input the feature maps pictured above and reduces the dimensionality of those maps, by some pooling factor, by constructing a new, smaller image of only the maximum (brightest) values in a given kernel area.
Take a look at the values on the x, y axes to see how the image has changed size.
###Code
# visualize the output of the pooling layer
viz_layer(pooled_layer)
###Output
_____no_output_____
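###Markdown
One more check that matches the axes in the plots: comparing tensor shapes before and after pooling shows the x-y dimensions cut in half (a sketch reusing the layers computed above):
###Code
# the pooled feature maps should be half the height and width of the activated maps
print('activated maps:', tuple(activated_layer.shape))  # (1, 4, H, W)
print('pooled maps:   ', tuple(pooled_layer.shape))     # (1, 4, H//2, W//2)
###Output
_____no_output_____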
###Markdown
Maxpooling LayerIn this notebook, we add and visualize the output of a maxpooling layer in a CNN. A convolutional layer + activation function, followed by a pooling layer, and a linear layer (to create a desired output size) make up the basic layers of a CNN. Import the image
###Code
import cv2
import matplotlib.pyplot as plt
%matplotlib inline
# TODO: Feel free to try out your own images here by changing img_path
# to a file path to another image on your computer!
img_path = 'data/udacity_sdc.png'
# load color image
bgr_img = cv2.imread(img_path)
# convert to grayscale
gray_img = cv2.cvtColor(bgr_img, cv2.COLOR_BGR2GRAY)
# normalize, rescale entries to lie in [0,1]
gray_img = gray_img.astype("float32")/255
# plot image
plt.imshow(gray_img, cmap='gray')
plt.show()
###Output
_____no_output_____
###Markdown
Define and visualize the filters
###Code
import numpy as np
## TODO: Feel free to modify the numbers here, to try out another filter!
filter_vals = np.array([[-1, -1, 1, 1], [-1, -1, 1, 1], [-1, -1, 1, 1], [-1, -1, 1, 1]])
print('Filter shape: ', filter_vals.shape)
# Defining four different filters,
# all of which are linear combinations of the `filter_vals` defined above
# define four filters
filter_1 = filter_vals
filter_2 = -filter_1
filter_3 = filter_1.T
filter_4 = -filter_3
filters = np.array([filter_1, filter_2, filter_3, filter_4])
# For an example, print out the values of filter 1
print('Filter 1: \n', filter_1)
###Output
Filter 1:
[[-1 -1 1 1]
[-1 -1 1 1]
[-1 -1 1 1]
[-1 -1 1 1]]
###Markdown
Define convolutional and pooling layersYou've seen how to define a convolutional layer, next is a:* Pooling layerIn the next cell, we initialize a convolutional layer so that it contains all the created filters. Then add a maxpooling layer, [documented here](http://pytorch.org/docs/stable/_modules/torch/nn/modules/pooling.html), with a kernel size of (2x2) so you can see that the image resolution has been reduced after this step!A maxpooling layer reduces the x-y size of an input and only keeps the most *active* pixel values. Below is an example of a 2x2 pooling kernel, with a stride of 2, appied to a small patch of grayscale pixel values; reducing the x-y size of the patch by a factor of 2. Only the maximum pixel values in 2x2 remain in the new, pooled output.
###Code
import torch
import torch.nn as nn
import torch.nn.functional as F
# define a neural network with a convolutional layer with four filters
# AND a pooling layer of size (2, 2)
class Net(nn.Module):
def __init__(self, weight):
super(Net, self).__init__()
# initializes the weights of the convolutional layer to be the weights of the 4 defined filters
k_height, k_width = weight.shape[2:]
# assumes there are 4 grayscale filters
self.conv = nn.Conv2d(1, 4, kernel_size=(k_height, k_width), bias=False)
self.conv.weight = torch.nn.Parameter(weight)
# define a pooling layer
self.pool = nn.MaxPool2d(2, 2)
def forward(self, x):
# calculates the output of a convolutional layer
# pre- and post-activation
conv_x = self.conv(x)
activated_x = F.relu(conv_x)
# applies pooling layer
pooled_x = self.pool(activated_x)
# returns all layers
return conv_x, activated_x, pooled_x
# instantiate the model and set the weights
weight = torch.from_numpy(filters).unsqueeze(1).type(torch.FloatTensor)
model = Net(weight)
# print out the layer in the network
print(model)
###Output
Net(
(conv): Conv2d(1, 4, kernel_size=(4, 4), stride=(1, 1), bias=False)
(pool): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
)
###Markdown
Visualize the output of each filterFirst, we'll define a helper function, `viz_layer` that takes in a specific layer and number of filters (optional argument), and displays the output of that layer once an image has been passed through.
###Code
# helper function for visualizing the output of a given layer
# default number of filters is 4
def viz_layer(layer, n_filters= 4):
fig = plt.figure(figsize=(20, 20))
for i in range(n_filters):
ax = fig.add_subplot(1, n_filters, i+1)
# grab layer outputs
ax.imshow(np.squeeze(layer[0,i].data.numpy()), cmap='gray')
ax.set_title('Output %s' % str(i+1))
###Output
_____no_output_____
###Markdown
Let's look at the output of a convolutional layer after a ReLu activation function is applied. ReLu activationA ReLu function turns all negative pixel values in 0's (black). See the equation pictured below for input pixel values, `x`.
###Code
# plot original image
plt.imshow(gray_img, cmap='gray')
# visualize all filters
fig = plt.figure(figsize=(12, 6))
fig.subplots_adjust(left=0, right=1.5, bottom=0.8, top=1, hspace=0.05, wspace=0.05)
for i in range(4):
ax = fig.add_subplot(1, 4, i+1, xticks=[], yticks=[])
ax.imshow(filters[i], cmap='gray')
ax.set_title('Filter %s' % str(i+1))
# convert the image into an input Tensor
gray_img_tensor = torch.from_numpy(gray_img).unsqueeze(0).unsqueeze(1)
# get all the layers
conv_layer, activated_layer, pooled_layer = model(gray_img_tensor)
# visualize the output of the activated conv layer
viz_layer(activated_layer)
###Output
_____no_output_____
###Markdown
Visualize the output of the pooling layerThen, take a look at the output of a pooling layer. The pooling layer takes as input the feature maps pictured above and reduces the dimensionality of those maps, by some pooling factor, by constructing a new, smaller image of only the maximum (brightest) values in a given kernel area.Take a look at the values on the x, y axes to see how the image has changed size.
###Code
# visualize the output of the pooling layer
viz_layer(pooled_layer)
###Output
_____no_output_____
###Markdown
Maxpooling LayerIn this notebook, we add and visualize the output of a maxpooling layer in a CNN. A convolutional layer + activation function, followed by a pooling layer, and a linear layer (to create a desired output size) make up the basic layers of a CNN. Import the image
###Code
import cv2
import matplotlib.pyplot as plt
%matplotlib inline
# TODO: Feel free to try out your own images here by changing img_path
# to a file path to another image on your computer!
img_path = 'data/udacity_sdc.png'
# load color image
bgr_img = cv2.imread(img_path)
# convert to grayscale
gray_img = cv2.cvtColor(bgr_img, cv2.COLOR_BGR2GRAY)
# normalize, rescale entries to lie in [0,1]
gray_img = gray_img.astype("float32")/255
# plot image
plt.imshow(gray_img, cmap='gray')
plt.show()
###Output
_____no_output_____
###Markdown
Define and visualize the filters
###Code
import numpy as np
## TODO: Feel free to modify the numbers here, to try out another filter!
filter_vals = np.array([[-1, -1, 1, 1], [-1, -1, 1, 1], [-1, -1, 1, 1], [-1, -1, 1, 1]])
print('Filter shape: ', filter_vals.shape)
# Defining four different filters,
# all of which are linear combinations of the `filter_vals` defined above
# define four filters
filter_1 = filter_vals
filter_2 = -filter_1
filter_3 = filter_1.T
filter_4 = -filter_3
filters = np.array([filter_1, filter_2, filter_3, filter_4])
# For an example, print out the values of filter 1
print('Filter 1: \n', filter_1)
###Output
Filter 1:
[[-1 -1 1 1]
[-1 -1 1 1]
[-1 -1 1 1]
[-1 -1 1 1]]
###Markdown
Define convolutional and pooling layersYou've seen how to define a convolutional layer, next is a:* Pooling layerIn the next cell, we initialize a convolutional layer so that it contains all the created filters. Then add a maxpooling layer, [documented here](http://pytorch.org/docs/stable/_modules/torch/nn/modules/pooling.html), with a kernel size of (2x2) so you can see that the image resolution has been reduced after this step!A maxpooling layer reduces the x-y size of an input and only keeps the most *active* pixel values. Below is an example of a 2x2 pooling kernel, with a stride of 2, applied to a small patch of grayscale pixel values; reducing the x-y size of the patch by a factor of 2. Only the maximum pixel values in 2x2 remain in the new, pooled output.
###Code
import torch
import torch.nn as nn
import torch.nn.functional as F
# define a neural network with a convolutional layer with four filters
# AND a pooling layer of size (2, 2)
class Net(nn.Module):
def __init__(self, weight):
super(Net, self).__init__()
# initializes the weights of the convolutional layer to be the weights of the 4 defined filters
k_height, k_width = weight.shape[2:]
# assumes there are 4 grayscale filters
self.conv = nn.Conv2d(1, 4, kernel_size=(k_height, k_width), bias=False)
self.conv.weight = torch.nn.Parameter(weight)
# define a pooling layer
self.pool = nn.MaxPool2d(2, 2)
def forward(self, x):
# calculates the output of a convolutional layer
# pre- and post-activation
conv_x = self.conv(x)
activated_x = F.relu(conv_x)
# applies pooling layer
pooled_x = self.pool(activated_x)
# returns all layers
return conv_x, activated_x, pooled_x
# instantiate the model and set the weights
weight = torch.from_numpy(filters).unsqueeze(1).type(torch.FloatTensor)
model = Net(weight)
# print out the layer in the network
print(model)
###Output
Net(
(conv): Conv2d(1, 4, kernel_size=(4, 4), stride=(1, 1), bias=False)
(pool): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
)
###Markdown
Visualize the output of each filterFirst, we'll define a helper function, `viz_layer` that takes in a specific layer and number of filters (optional argument), and displays the output of that layer once an image has been passed through.
###Code
# helper function for visualizing the output of a given layer
# default number of filters is 4
def viz_layer(layer, n_filters= 4):
fig = plt.figure(figsize=(20, 20))
for i in range(n_filters):
ax = fig.add_subplot(1, n_filters, i+1)
# grab layer outputs
ax.imshow(np.squeeze(layer[0,i].data.numpy()), cmap='gray')
ax.set_title('Output %s' % str(i+1))
###Output
_____no_output_____
###Markdown
Let's look at the output of a convolutional layer after a ReLu activation function is applied. ReLu activationA ReLu function turns all negative pixel values in 0's (black). See the equation pictured below for input pixel values, `x`.
###Code
# plot original image
plt.imshow(gray_img, cmap='gray')
# visualize all filters
fig = plt.figure(figsize=(12, 6))
fig.subplots_adjust(left=0, right=1.5, bottom=0.8, top=1, hspace=0.05, wspace=0.05)
for i in range(4):
ax = fig.add_subplot(1, 4, i+1, xticks=[], yticks=[])
ax.imshow(filters[i], cmap='gray')
ax.set_title('Filter %s' % str(i+1))
# convert the image into an input Tensor
gray_img_tensor = torch.from_numpy(gray_img).unsqueeze(0).unsqueeze(1)
# get all the layers
conv_layer, activated_layer, pooled_layer = model(gray_img_tensor)
# visualize the output of the activated conv layer
viz_layer(activated_layer)
###Output
_____no_output_____
###Markdown
Visualize the output of the pooling layerThen, take a look at the output of a pooling layer. The pooling layer takes as input the feature maps pictured above and reduces the dimensionality of those maps, by some pooling factor, by constructing a new, smaller image of only the maximum (brightest) values in a given kernel area.Take a look at the values on the x, y axes to see how the image has changed size.
###Code
# visualize the output of the pooling layer
viz_layer(pooled_layer)
###Output
_____no_output_____
###Markdown
Maxpooling LayerIn this notebook, we add and visualize the output of a maxpooling layer in a CNN. A convolutional layer + activation function, followed by a pooling layer, and a linear layer (to create a desired output size) make up the basic layers of a CNN. Import the image
###Code
import cv2
import matplotlib.pyplot as plt
%matplotlib inline
# TODO: Feel free to try out your own images here by changing img_path
# to a file path to another image on your computer!
img_path = 'data/udacity_sdc.png'
# load color image
bgr_img = cv2.imread(img_path)
# convert to grayscale
gray_img = cv2.cvtColor(bgr_img, cv2.COLOR_BGR2GRAY)
# normalize, rescale entries to lie in [0,1]
gray_img = gray_img.astype("float32")/255
# plot image
plt.imshow(gray_img, cmap='gray')
plt.show()
###Output
_____no_output_____
###Markdown
Define and visualize the filters
###Code
import numpy as np
## TODO: Feel free to modify the numbers here, to try out another filter!
filter_vals = np.array([[-1, -1, 1, 1], [-1, -1, 1, 1], [-1, -1, 1, 1], [-1, -1, 1, 1]])
print('Filter shape: ', filter_vals.shape)
# Defining four different filters,
# all of which are linear combinations of the `filter_vals` defined above
# define four filters
filter_1 = filter_vals
filter_2 = -filter_1
filter_3 = filter_1.T
filter_4 = -filter_3
filters = np.array([filter_1, filter_2, filter_3, filter_4])
# For an example, print out the values of filter 1
print('Filter 1: \n', filter_1)
###Output
Filter 1:
[[-1 -1 1 1]
[-1 -1 1 1]
[-1 -1 1 1]
[-1 -1 1 1]]
###Markdown
Define convolutional and pooling layersYou've seen how to define a convolutional layer, next is a:* Pooling layerIn the next cell, we initialize a convolutional layer so that it contains all the created filters. Then add a maxpooling layer, [documented here](http://pytorch.org/docs/stable/_modules/torch/nn/modules/pooling.html), with a kernel size of (2x2) so you can see that the image resolution has been reduced after this step!A maxpooling layer reduces the x-y size of an input and only keeps the most *active* pixel values. Below is an example of a 2x2 pooling kernel, with a stride of 2, applied to a small patch of grayscale pixel values; reducing the x-y size of the patch by a factor of 2. Only the maximum pixel values in 2x2 remain in the new, pooled output.
###Code
import torch
import torch.nn as nn
import torch.nn.functional as F
# define a neural network with a convolutional layer with four filters
# AND a pooling layer of size (2, 2)
class Net(nn.Module):
def __init__(self, weight):
super(Net, self).__init__()
# initializes the weights of the convolutional layer to be the weights of the 4 defined filters
k_height, k_width = weight.shape[2:]
# assumes there are 4 grayscale filters
self.conv = nn.Conv2d(1, 4, kernel_size=(k_height, k_width), bias=False)
self.conv.weight = torch.nn.Parameter(weight)
# define a pooling layer
self.pool = nn.MaxPool2d(2, 2)
def forward(self, x):
# calculates the output of a convolutional layer
# pre- and post-activation
conv_x = self.conv(x)
activated_x = F.relu(conv_x)
# applies pooling layer
pooled_x = self.pool(activated_x)
# returns all layers
return conv_x, activated_x, pooled_x
# instantiate the model and set the weights
weight = torch.from_numpy(filters).unsqueeze(1).type(torch.FloatTensor)
model = Net(weight)
# print out the layer in the network
print(model)
###Output
Net(
(conv): Conv2d(1, 4, kernel_size=(4, 4), stride=(1, 1), bias=False)
(pool): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
)
###Markdown
Visualize the output of each filterFirst, we'll define a helper function, `viz_layer` that takes in a specific layer and number of filters (optional argument), and displays the output of that layer once an image has been passed through.
###Code
# helper function for visualizing the output of a given layer
# default number of filters is 4
def viz_layer(layer, n_filters= 4):
fig = plt.figure(figsize=(20, 20))
for i in range(n_filters):
ax = fig.add_subplot(1, n_filters, i+1)
# grab layer outputs
ax.imshow(np.squeeze(layer[0,i].data.numpy()), cmap='gray')
ax.set_title('Output %s' % str(i+1))
###Output
_____no_output_____
###Markdown
Let's look at the output of a convolutional layer after a ReLU activation function is applied. ReLU activationA ReLU function turns all negative pixel values into 0's (black). See the equation pictured below for input pixel values, `x`.
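For a concrete example of the elementwise rule ReLU(x) = max(x, 0), using made-up values:
###Code
import torch
import torch.nn.functional as F
# Sketch: ReLU zeroes out the negative entries and keeps the rest unchanged.
x = torch.tensor([-2.0, -0.5, 0.0, 0.5, 2.0])
print(F.relu(x))  # tensor([0.0000, 0.0000, 0.0000, 0.5000, 2.0000])
###Output
_____no_output_____
###Markdown
The same rule is applied to every pixel of every feature map in the cell below.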
###Code
# plot original image
plt.imshow(gray_img, cmap='gray')
# visualize all filters
fig = plt.figure(figsize=(12, 6))
fig.subplots_adjust(left=0, right=1.5, bottom=0.8, top=1, hspace=0.05, wspace=0.05)
for i in range(4):
ax = fig.add_subplot(1, 4, i+1, xticks=[], yticks=[])
ax.imshow(filters[i], cmap='gray')
ax.set_title('Filter %s' % str(i+1))
# convert the image into an input Tensor
gray_img_tensor = torch.from_numpy(gray_img).unsqueeze(0).unsqueeze(1)
# get all the layers
conv_layer, activated_layer, pooled_layer = model(gray_img_tensor)
# visualize the output of the activated conv layer
viz_layer(activated_layer)
###Output
_____no_output_____
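###Markdown
A quick check of the three tensors returned by `forward` (the exact H and W depend on the loaded image):
###Code
# Sketch: batch and channel dimensions come first (N, C, H, W).
print('conv:     ', tuple(conv_layer.shape))       # (1, 4, H-3, W-3)
print('activated:', tuple(activated_layer.shape))  # same shape; ReLU is elementwise
print('pooled:   ', tuple(pooled_layer.shape))     # H and W roughly halved
###Output
_____no_output_____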
###Markdown
Visualize the output of the pooling layerThen, take a look at the output of a pooling layer. The pooling layer takes as input the feature maps pictured above and reduces the dimensionality of those maps, by some pooling factor, by constructing a new, smaller image of only the maximum (brightest) values in a given kernel area.Take a look at the values on the x, y axes to see how the image has changed size.
###Code
# visualize the output of the pooling layer
viz_layer(pooled_layer)
###Output
_____no_output_____
###Markdown
Maxpooling LayerIn this notebook, we add and visualize the output of a maxpooling layer in a CNN. A convolutional layer + activation function, followed by a pooling layer, and a linear layer (to create a desired output size) make up the basic layers of a CNN. Import the image
###Code
# Download the sample images used by this notebook into a local data/ folder.
!mkdir data
!wget -c https://github.com/agungsantoso/deep-learning-v2-pytorch/raw/master/convolutional-neural-networks/conv-visualization/data/curved_lane.jpg
!wget -c https://github.com/agungsantoso/deep-learning-v2-pytorch/raw/master/convolutional-neural-networks/conv-visualization/data/bridge_trees_example.jpg
!wget -c https://github.com/agungsantoso/deep-learning-v2-pytorch/raw/master/convolutional-neural-networks/conv-visualization/data/sobel_ops.png
!wget -c https://github.com/agungsantoso/deep-learning-v2-pytorch/raw/master/convolutional-neural-networks/conv-visualization/data/udacity_sdc.png
!wget -c https://github.com/agungsantoso/deep-learning-v2-pytorch/raw/master/convolutional-neural-networks/conv-visualization/data/white_lines.jpg
!mv bridge_trees_example.jpg data/bridge_trees_example.jpg
!mv curved_lane.jpg data/curved_lane.jpg
!mv sobel_ops.png data/sobel_ops.png
!mv udacity_sdc.png data/udacity_sdc.png
!mv white_lines.jpg data/white_lines.jpg
import cv2
import matplotlib.pyplot as plt
%matplotlib inline
# TODO: Feel free to try out your own images here by changing img_path
# to a file path to another image on your computer!
img_path = 'data/udacity_sdc.png'
# load color image
bgr_img = cv2.imread(img_path)
# convert to grayscale
gray_img = cv2.cvtColor(bgr_img, cv2.COLOR_BGR2GRAY)
# normalize, rescale entries to lie in [0,1]
gray_img = gray_img.astype("float32")/255
# plot image
plt.imshow(gray_img, cmap='gray')
plt.show()
###Output
_____no_output_____
###Markdown
Define and visualize the filters
###Code
import numpy as np
## TODO: Feel free to modify the numbers here, to try out another filter!
filter_vals = np.array([[-1, -1, 1, 1], [-1, -1, 1, 1], [-1, -1, 1, 1], [-1, -1, 1, 1]])
print('Filter shape: ', filter_vals.shape)
# Defining four different filters,
# all of which are linear combinations of the `filter_vals` defined above
# define four filters
filter_1 = filter_vals
filter_2 = -filter_1
filter_3 = filter_1.T
filter_4 = -filter_3
filters = np.array([filter_1, filter_2, filter_3, filter_4])
# For an example, print out the values of filter 1
print('Filter 1: \n', filter_1)
###Output
Filter 1:
[[-1 -1 1 1]
[-1 -1 1 1]
[-1 -1 1 1]
[-1 -1 1 1]]
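###Markdown
A quick sanity check on the stacked `filters` array: it holds four 4x4 kernels, and the relationships stated in the comments (negation, transpose) can be verified directly.
###Code
import numpy as np
# Sketch: verify the shape and the stated relationships among the four filters.
print(filters.shape)                         # (4, 4, 4)
print(np.array_equal(filter_2, -filter_1))   # True: filter 2 negates filter 1
print(np.array_equal(filter_3, filter_1.T))  # True: filter 3 transposes filter 1
###Output
_____no_output_____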
###Markdown
Define convolutional and pooling layersYou've seen how to define a convolutional layer, next is a:* Pooling layerIn the next cell, we initialize a convolutional layer so that it contains all the created filters. Then add a maxpooling layer, [documented here](http://pytorch.org/docs/stable/_modules/torch/nn/modules/pooling.html), with a kernel size of (2x2) so you can see that the image resolution has been reduced after this step!A maxpooling layer reduces the x-y size of an input and only keeps the most *active* pixel values. Below is an example of a 2x2 pooling kernel, with a stride of 2, applied to a small patch of grayscale pixel values; reducing the x-y size of the patch by a factor of 2. Only the maximum pixel values in 2x2 remain in the new, pooled output.
###Code
# http://pytorch.org/
# Colab helper: detect the installed CUDA runtime (if any) and pip-install a
# matching PyTorch 0.4.1 wheel, falling back to the CPU build.
from os.path import exists
from wheel.pep425tags import get_abbr_impl, get_impl_ver, get_abi_tag
platform = '{}{}-{}'.format(get_abbr_impl(), get_impl_ver(), get_abi_tag())
cuda_output = !ldconfig -p|grep cudart.so|sed -e 's/.*\.\([0-9]*\)\.\([0-9]*\)$/cu\1\2/'
accelerator = cuda_output[0] if exists('/dev/nvidia0') else 'cpu'
!pip install -q http://download.pytorch.org/whl/{accelerator}/torch-0.4.1-{platform}-linux_x86_64.whl torchvision
import torch
import torch.nn as nn
import torch.nn.functional as F
# define a neural network with a convolutional layer with four filters
# AND a pooling layer of size (2, 2)
class Net(nn.Module):
def __init__(self, weight):
super(Net, self).__init__()
# initializes the weights of the convolutional layer to be the weights of the 4 defined filters
k_height, k_width = weight.shape[2:]
# assumes there are 4 grayscale filters
self.conv = nn.Conv2d(1, 4, kernel_size=(k_height, k_width), bias=False)
self.conv.weight = torch.nn.Parameter(weight)
# define a pooling layer
self.pool = nn.MaxPool2d(2, 2)
def forward(self, x):
# calculates the output of a convolutional layer
# pre- and post-activation
conv_x = self.conv(x)
activated_x = F.relu(conv_x)
# applies pooling layer
pooled_x = self.pool(activated_x)
# returns all layers
return conv_x, activated_x, pooled_x
# instantiate the model and set the weights
weight = torch.from_numpy(filters).unsqueeze(1).type(torch.FloatTensor)
model = Net(weight)
# print out the layer in the network
print(model)
###Output
Net(
(conv): Conv2d(1, 4, kernel_size=(4, 4), stride=(1, 1), bias=False)
(pool): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
)
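###Markdown
A note on the `unsqueeze(1)` above: `nn.Conv2d` expects its weights shaped (out_channels, in_channels, kernel_height, kernel_width), so the four 4x4 filters need a singleton input-channel axis. A quick sketch:
###Code
# Sketch: the weight tensor layout Conv2d requires.
print(weight.shape)  # torch.Size([4, 1, 4, 4])
###Output
_____no_output_____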
###Markdown
Visualize the output of each filterFirst, we'll define a helper function, `viz_layer`, that takes in a specific layer and number of filters (optional argument), and displays the output of that layer once an image has been passed through.
###Code
# helper function for visualizing the output of a given layer
# default number of filters is 4
def viz_layer(layer, n_filters= 4):
fig = plt.figure(figsize=(20, 20))
for i in range(n_filters):
ax = fig.add_subplot(1, n_filters, i+1)
# grab layer outputs
ax.imshow(np.squeeze(layer[0,i].data.numpy()), cmap='gray')
ax.set_title('Output %s' % str(i+1))
###Output
_____no_output_____
###Markdown
Let's look at the output of a convolutional layer after a ReLU activation function is applied. ReLU activationA ReLU function turns all negative pixel values into 0's (black). See the equation pictured below for input pixel values, `x`.
###Code
# plot original image
plt.imshow(gray_img, cmap='gray')
# visualize all filters
fig = plt.figure(figsize=(12, 6))
fig.subplots_adjust(left=0, right=1.5, bottom=0.8, top=1, hspace=0.05, wspace=0.05)
for i in range(4):
ax = fig.add_subplot(1, 4, i+1, xticks=[], yticks=[])
ax.imshow(filters[i], cmap='gray')
ax.set_title('Filter %s' % str(i+1))
# convert the image into an input Tensor
gray_img_tensor = torch.from_numpy(gray_img).unsqueeze(0).unsqueeze(1)
# get all the layers
conv_layer, activated_layer, pooled_layer = model(gray_img_tensor)
# visualize the output of the activated conv layer
viz_layer(activated_layer)
###Output
_____no_output_____
###Markdown
Visualize the output of the pooling layerThen, take a look at the output of a pooling layer. The pooling layer takes as input the feature maps pictured above and reduces the dimensionality of those maps, by some pooling factor, by constructing a new, smaller image of only the maximum (brightest) values in a given kernel area.Take a look at the values on the x, y axes to see how the image has changed size.
###Code
# visualize the output of the pooling layer
viz_layer(pooled_layer)
###Output
_____no_output_____
###Markdown
Maxpooling LayerIn this notebook, we add and visualize the output of a maxpooling layer in a CNN. A convolutional layer + activation function, followed by a pooling layer, and a linear layer (to create a desired output size) make up the basic layers of a CNN. Import the image
###Code
import sys
# Environment hack: drop the ROS Kinetic Python 2 path so that the system cv2
# (Python 3) is imported instead of the ROS-shipped one.
sys.path.remove('/opt/ros/kinetic/lib/python2.7/dist-packages')
import cv2
import matplotlib.pyplot as plt
%matplotlib inline
# TODO: Feel free to try out your own images here by changing img_path
# to a file path to another image on your computer!
img_path = 'data/udacity_sdc.png'
# load color image
bgr_img = cv2.imread(img_path)
# convert to grayscale
gray_img = cv2.cvtColor(bgr_img, cv2.COLOR_BGR2GRAY)
# normalize, rescale entries to lie in [0,1]
gray_img = gray_img.astype("float32")/255
# plot image
plt.imshow(gray_img, cmap='gray')
plt.show()
###Output
_____no_output_____
###Markdown
Define and visualize the filters
###Code
import numpy as np
## TODO: Feel free to modify the numbers here, to try out another filter!
filter_vals = np.array([[-1, -1, 1, 1], [-1, -1, 1, 1], [-1, -1, 1, 1], [-1, -1, 1, 1]])
print('Filter shape: ', filter_vals.shape)
# Defining four different filters,
# all of which are linear combinations of the `filter_vals` defined above
# define four filters
filter_1 = filter_vals
filter_2 = -filter_1
filter_3 = filter_1.T
filter_4 = -filter_3
filters = np.array([filter_1, filter_2, filter_3, filter_4])
# For an example, print out the values of filter 1
print('Filter 1: \n', filter_1)
###Output
Filter 1:
[[-1 -1 1 1]
[-1 -1 1 1]
[-1 -1 1 1]
[-1 -1 1 1]]
###Markdown
Define convolutional and pooling layersYou've seen how to define a convolutional layer, next is a:* Pooling layerIn the next cell, we initialize a convolutional layer so that it contains all the created filters. Then add a maxpooling layer, [documented here](http://pytorch.org/docs/stable/_modules/torch/nn/modules/pooling.html), with a kernel size of (2x2) so you can see that the image resolution has been reduced after this step!A maxpooling layer reduces the x-y size of an input and only keeps the most *active* pixel values. Below is an example of a 2x2 pooling kernel, with a stride of 2, applied to a small patch of grayscale pixel values; reducing the x-y size of the patch by a factor of 2. Only the maximum pixel values in 2x2 remain in the new, pooled output.
###Code
import torch
import torch.nn as nn
import torch.nn.functional as F
# define a neural network with a convolutional layer with four filters
# AND a pooling layer of size (2, 2)
class Net(nn.Module):
def __init__(self, weight):
super(Net, self).__init__()
# initializes the weights of the convolutional layer to be the weights of the 4 defined filters
k_height, k_width = weight.shape[2:]
# assumes there are 4 grayscale filters
self.conv = nn.Conv2d(1, 4, kernel_size=(k_height, k_width), bias=False)
self.conv.weight = torch.nn.Parameter(weight)
# define a pooling layer
self.pool = nn.MaxPool2d(2, 2)
def forward(self, x):
# calculates the output of a convolutional layer
# pre- and post-activation
conv_x = self.conv(x)
activated_x = F.relu(conv_x)
# applies pooling layer
pooled_x = self.pool(activated_x)
# returns all layers
return conv_x, activated_x, pooled_x
# instantiate the model and set the weights
weight = torch.from_numpy(filters).unsqueeze(1).type(torch.FloatTensor)
model = Net(weight)
# print out the layer in the network
print(model)
###Output
Net(
(conv): Conv2d(1, 4, kernel_size=(4, 4), stride=(1, 1), bias=False)
(pool): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
)
###Markdown
Visualize the output of each filterFirst, we'll define a helper function, `viz_layer`, that takes in a specific layer and number of filters (optional argument), and displays the output of that layer once an image has been passed through.
###Code
# helper function for visualizing the output of a given layer
# default number of filters is 4
def viz_layer(layer, n_filters= 4):
fig = plt.figure(figsize=(20, 20))
for i in range(n_filters):
ax = fig.add_subplot(1, n_filters, i+1)
# grab layer outputs
ax.imshow(np.squeeze(layer[0,i].data.numpy()), cmap='gray')
ax.set_title('Output %s' % str(i+1))
###Output
_____no_output_____
###Markdown
Let's look at the output of a convolutional layer after a ReLU activation function is applied. ReLU activationA ReLU function turns all negative pixel values into 0's (black). See the equation pictured below for input pixel values, `x`.
###Code
# plot original image
plt.imshow(gray_img, cmap='gray')
# visualize all filters
fig = plt.figure(figsize=(12, 6))
fig.subplots_adjust(left=0, right=1.5, bottom=0.8, top=1, hspace=0.05, wspace=0.05)
for i in range(4):
ax = fig.add_subplot(1, 4, i+1, xticks=[], yticks=[])
ax.imshow(filters[i], cmap='gray')
ax.set_title('Filter %s' % str(i+1))
# convert the image into an input Tensor
gray_img_tensor = torch.from_numpy(gray_img).unsqueeze(0).unsqueeze(1)
# get all the layers
conv_layer, activated_layer, pooled_layer = model(gray_img_tensor)
# visualize the output of the activated conv layer
viz_layer(activated_layer)
###Output
_____no_output_____
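###Markdown
A note on the two `unsqueeze` calls above: `Conv2d` consumes NCHW input (batch, channels, height, width), so the 2-D grayscale array needs a batch axis and a channel axis added. A quick sketch:
###Code
# Sketch: the input tensor layout after the two unsqueeze calls.
print(gray_img_tensor.shape)  # torch.Size([1, 1, H, W]); H, W come from the image
###Output
_____no_output_____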
###Markdown
Visualize the output of the pooling layerThen, take a look at the output of a pooling layer. The pooling layer takes as input the feature maps pictured above and reduces the dimensionality of those maps, by some pooling factor, by constructing a new, smaller image of only the maximum (brightest) values in a given kernel area.Take a look at the values on the x, y axes to see how the image has changed size.
###Code
# visualize the output of the pooling layer
viz_layer(pooled_layer)
###Output
_____no_output_____
###Markdown
Maxpooling LayerIn this notebook, we add and visualize the output of a maxpooling layer in a CNN. A convolutional layer + activation function, followed by a pooling layer, and a linear layer (to create a desired output size) make up the basic layers of a CNN. Import the image
###Code
import cv2
import matplotlib.pyplot as plt
%matplotlib inline
# TODO: Feel free to try out your own images here by changing img_path
# to a file path to another image on your computer!
img_path = 'data/udacity_sdc.png'
# load color image
bgr_img = cv2.imread(img_path)
# convert to grayscale
gray_img = cv2.cvtColor(bgr_img, cv2.COLOR_BGR2GRAY)
# normalize, rescale entries to lie in [0,1]
gray_img = gray_img.astype("float32")/255
# plot image
plt.imshow(gray_img, cmap='gray')
plt.show()
###Output
_____no_output_____
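###Markdown
A quick check that the rescaling above did what we expect: after dividing by 255 the grayscale values should lie in [0, 1].
###Code
# Sketch: confirm dtype and value range of the normalized image.
print(gray_img.dtype, gray_img.min(), gray_img.max())
###Output
_____no_output_____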
###Markdown
Define and visualize the filters
###Code
import numpy as np
## TODO: Feel free to modify the numbers here, to try out another filter!
filter_vals = np.array([[-1, -1, 1, 1], [-1, -1, 1, 1], [-1, -1, 1, 1], [-1, -1, 1, 1]])
print('Filter shape: ', filter_vals.shape)
# Defining four different filters,
# all of which are linear combinations of the `filter_vals` defined above
# define four filters
filter_1 = filter_vals
filter_2 = -filter_1
filter_3 = filter_1.T
filter_4 = -filter_3
filters = np.array([filter_1, filter_2, filter_3, filter_4])
# For an example, print out the values of filter 1
print('Filter 1: \n', filter_1)
###Output
Filter 1:
[[-1 -1 1 1]
[-1 -1 1 1]
[-1 -1 1 1]
[-1 -1 1 1]]
###Markdown
Define convolutional and pooling layersYou've seen how to define a convolutional layer, next is a:* Pooling layerIn the next cell, we initialize a convolutional layer so that it contains all the created filters. Then add a maxpooling layer, [documented here](http://pytorch.org/docs/stable/_modules/torch/nn/modules/pooling.html), with a kernel size of (2x2) so you can see that the image resolution has been reduced after this step!A maxpooling layer reduces the x-y size of an input and only keeps the most *active* pixel values. Below is an example of a 2x2 pooling kernel, with a stride of 2, applied to a small patch of grayscale pixel values; reducing the x-y size of the patch by a factor of 2. Only the maximum pixel values in 2x2 remain in the new, pooled output.
###Code
import torch
import torch.nn as nn
import torch.nn.functional as F
# define a neural network with a convolutional layer with four filters
# AND a pooling layer of size (2, 2)
class Net(nn.Module):
def __init__(self, weight):
super(Net, self).__init__()
# initializes the weights of the convolutional layer to be the weights of the 4 defined filters
k_height, k_width = weight.shape[2:]
# assumes there are 4 grayscale filters
self.conv = nn.Conv2d(1, 4, kernel_size=(k_height, k_width), bias=False)
self.conv.weight = torch.nn.Parameter(weight)
# define a pooling layer
self.pool = nn.MaxPool2d(2, 2)
def forward(self, x):
# calculates the output of a convolutional layer
# pre- and post-activation
conv_x = self.conv(x)
activated_x = F.relu(conv_x)
# applies pooling layer
pooled_x = self.pool(activated_x)
# returns all layers
return conv_x, activated_x, pooled_x
# instantiate the model and set the weights
weight = torch.from_numpy(filters).unsqueeze(1).type(torch.FloatTensor)
model = Net(weight)
# print out the layer in the network
print(model)
###Output
Net(
(conv): Conv2d(1, 4, kernel_size=(4, 4), stride=(1, 1), bias=False)
(pool): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
)
###Markdown
Visualize the output of each filterFirst, we'll define a helper function, `viz_layer`, that takes in a specific layer and number of filters (optional argument), and displays the output of that layer once an image has been passed through.
###Code
# helper function for visualizing the output of a given layer
# default number of filters is 4
def viz_layer(layer, n_filters= 4):
fig = plt.figure(figsize=(20, 20))
for i in range(n_filters):
ax = fig.add_subplot(1, n_filters, i+1)
# grab layer outputs
ax.imshow(np.squeeze(layer[0,i].data.numpy()), cmap='gray')
ax.set_title('Output %s' % str(i+1))
###Output
_____no_output_____
###Markdown
Let's look at the output of a convolutional layer after a ReLU activation function is applied. ReLU activationA ReLU function turns all negative pixel values into 0's (black). See the equation pictured below for input pixel values, `x`.
###Code
# plot original image
plt.imshow(gray_img, cmap='gray')
# visualize all filters
fig = plt.figure(figsize=(12, 6))
fig.subplots_adjust(left=0, right=1.5, bottom=0.8, top=1, hspace=0.05, wspace=0.05)
for i in range(4):
ax = fig.add_subplot(1, 4, i+1, xticks=[], yticks=[])
ax.imshow(filters[i], cmap='gray')
ax.set_title('Filter %s' % str(i+1))
# convert the image into an input Tensor
gray_img_tensor = torch.from_numpy(gray_img).unsqueeze(0).unsqueeze(1)
# get all the layers
conv_layer, activated_layer, pooled_layer = model(gray_img_tensor)
# visualize the output of the activated conv layer
viz_layer(activated_layer)
###Output
_____no_output_____
###Markdown
Visualize the output of the pooling layerThen, take a look at the output of a pooling layer. The pooling layer takes as input the feature maps pictured above and reduces the dimensionality of those maps, by some pooling factor, by constructing a new, smaller image of only the maximum (brightest) values in a given kernel area.Take a look at the values on the x, y axes to see how the image has changed size.
###Code
# visualize the output of the pooling layer
viz_layer(pooled_layer)
###Output
_____no_output_____
###Markdown
Maxpooling LayerIn this notebook, we add and visualize the output of a maxpooling layer in a CNN. A convolutional layer + activation function, followed by a pooling layer, and a linear layer (to create a desired output size) make up the basic layers of a CNN. Import the image
###Code
import cv2
import matplotlib.pyplot as plt
%matplotlib inline
# TODO: Feel free to try out your own images here by changing img_path
# to a file path to another image on your computer!
img_path = 'data/udacity_sdc.png'
# load color image
bgr_img = cv2.imread(img_path)
# convert to grayscale
gray_img = cv2.cvtColor(bgr_img, cv2.COLOR_BGR2GRAY)
# normalize, rescale entries to lie in [0,1]
gray_img = gray_img.astype("float32")/255
# plot image
plt.imshow(gray_img, cmap='gray')
plt.show()
###Output
_____no_output_____
###Markdown
Define and visualize the filters
###Code
import numpy as np
## TODO: Feel free to modify the numbers here, to try out another filter!
filter_vals = np.array([[-1, -1, 1, 1], [-1, -1, 1, 1], [-1, -1, 1, 1], [-1, -1, 1, 1]])
print('Filter shape: ', filter_vals.shape)
# Defining four different filters,
# all of which are linear combinations of the `filter_vals` defined above
# define four filters
filter_1 = filter_vals
filter_2 = -filter_1
filter_3 = filter_1.T
filter_4 = -filter_3
filters = np.array([filter_1, filter_2, filter_3, filter_4])
# For an example, print out the values of filter 1
print('Filter 1: \n', filter_1)
###Output
Filter 1:
[[-1 -1 1 1]
[-1 -1 1 1]
[-1 -1 1 1]
[-1 -1 1 1]]
###Markdown
Define convolutional and pooling layersYou've seen how to define a convolutional layer, next is a:* Pooling layerIn the next cell, we initialize a convolutional layer so that it contains all the created filters. Then add a maxpooling layer, [documented here](http://pytorch.org/docs/stable/_modules/torch/nn/modules/pooling.html), with a kernel size of (2x2) so you can see that the image resolution has been reduced after this step!A maxpooling layer reduces the x-y size of an input and only keeps the most *active* pixel values. Below is an example of a 2x2 pooling kernel, with a stride of 2, applied to a small patch of grayscale pixel values; reducing the x-y size of the patch by a factor of 2. Only the maximum pixel values in 2x2 remain in the new, pooled output.
###Code
import torch
import torch.nn as nn
import torch.nn.functional as F
# define a neural network with a convolutional layer with four filters
# AND a pooling layer of size (2, 2)
class Net(nn.Module):
def __init__(self, weight):
super(Net, self).__init__()
# initializes the weights of the convolutional layer to be the weights of the 4 defined filters
k_height, k_width = weight.shape[2:]
# assumes there are 4 grayscale filters
self.conv = nn.Conv2d(1, 4, kernel_size=(k_height, k_width), bias=False)
self.conv.weight = torch.nn.Parameter(weight)
# define a pooling layer
self.pool = nn.MaxPool2d(2, 2)
def forward(self, x):
# calculates the output of a convolutional layer
# pre- and post-activation
conv_x = self.conv(x)
activated_x = F.relu(conv_x)
# applies pooling layer
pooled_x = self.pool(activated_x)
# returns all layers
return conv_x, activated_x, pooled_x
# instantiate the model and set the weights
weight = torch.from_numpy(filters).unsqueeze(1).type(torch.FloatTensor)
model = Net(weight)
# print out the layer in the network
print(model)
###Output
Net(
(conv): Conv2d(1, 4, kernel_size=(4, 4), stride=(1, 1), bias=False)
(pool): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
)
###Markdown
Visualize the output of each filterFirst, we'll define a helper function, `viz_layer`, that takes in a specific layer and number of filters (optional argument), and displays the output of that layer once an image has been passed through.
###Code
# helper function for visualizing the output of a given layer
# default number of filters is 4
def viz_layer(layer, n_filters= 4):
fig = plt.figure(figsize=(20, 20))
for i in range(n_filters):
ax = fig.add_subplot(1, n_filters, i+1)
# grab layer outputs
ax.imshow(np.squeeze(layer[0,i].data.numpy()), cmap='gray')
ax.set_title('Output %s' % str(i+1))
###Output
_____no_output_____
###Markdown
Let's look at the output of a convolutional layer after a ReLU activation function is applied. ReLU activationA ReLU function turns all negative pixel values into 0's (black). See the equation pictured below for input pixel values, `x`.
###Code
# plot original image
plt.imshow(gray_img, cmap='gray')
# visualize all filters
fig = plt.figure(figsize=(12, 6))
fig.subplots_adjust(left=0, right=1.5, bottom=0.8, top=1, hspace=0.05, wspace=0.05)
for i in range(4):
ax = fig.add_subplot(1, 4, i+1, xticks=[], yticks=[])
ax.imshow(filters[i], cmap='gray')
ax.set_title('Filter %s' % str(i+1))
# convert the image into an input Tensor
gray_img_tensor = torch.from_numpy(gray_img).unsqueeze(0).unsqueeze(1)
# get all the layers
conv_layer, activated_layer, pooled_layer = model(gray_img_tensor)
# visualize the output of the activated conv layer
viz_layer(activated_layer)
###Output
_____no_output_____
###Markdown
Visualize the output of the pooling layerThen, take a look at the output of a pooling layer. The pooling layer takes as input the feature maps pictured above and reduces the dimensionality of those maps, by some pooling factor, by constructing a new, smaller image of only the maximum (brightest) values in a given kernel area.Take a look at the values on the x, y axes to see how the image has changed size.
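Before plotting, a numeric comparison makes the size change explicit (using the tensors from the cell above):
###Code
# Sketch: feature-map sizes before and after the maxpool.
print(tuple(activated_layer.shape[-2:]), '->', tuple(pooled_layer.shape[-2:]))
###Output
_____no_output_____
###Markdown
Each spatial side is roughly halved, as expected for a 2x2 pool with stride 2.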
###Code
# visualize the output of the pooling layer
viz_layer(pooled_layer)
###Output
_____no_output_____
###Markdown
Maxpooling LayerIn this notebook, we add and visualize the output of a maxpooling layer in a CNN. A convolutional layer + activation function, followed by a pooling layer, and a linear layer (to create a desired output size) make up the basic layers of a CNN. Import the image
###Code
import cv2
import matplotlib.pyplot as plt
%matplotlib inline
# TODO: Feel free to try out your own images here by changing img_path
# to a file path to another image on your computer!
img_path = 'data/udacity_sdc.png'
# load color image
bgr_img = cv2.imread(img_path)
# convert to grayscale
gray_img = cv2.cvtColor(bgr_img, cv2.COLOR_BGR2GRAY)
# normalize, rescale entries to lie in [0,1]
gray_img = gray_img.astype("float32")/255
# plot image
plt.imshow(gray_img, cmap='gray')
plt.show()
###Output
_____no_output_____
###Markdown
Define and visualize the filters
###Code
import numpy as np
## TODO: Feel free to modify the numbers here, to try out another filter!
filter_vals = np.array([[-1, -1, 1, 1], [-1, -1, 1, 1], [-1, -1, 1, 1], [-1, -1, 1, 1]])
print('Filter shape: ', filter_vals.shape)
# Defining four different filters,
# all of which are linear combinations of the `filter_vals` defined above
# define four filters
filter_1 = filter_vals
filter_2 = -filter_1
filter_3 = filter_1.T
filter_4 = -filter_3
filters = np.array([filter_1, filter_2, filter_3, filter_4])
# For an example, print out the values of filter 1
print('Filter 1: \n', filter_1)
###Output
_____no_output_____
###Markdown
Define convolutional and pooling layersYou've seen how to define a convolutional layer, next is a:* Pooling layerIn the next cell, we initialize a convolutional layer so that it contains all the created filters. Then add a maxpooling layer, [documented here](http://pytorch.org/docs/stable/_modules/torch/nn/modules/pooling.html), with a kernel size of (2x2) so you can see that the image resolution has been reduced after this step!A maxpooling layer reduces the x-y size of an input and only keeps the most *active* pixel values. Below is an example of a 2x2 pooling kernel, with a stride of 2, applied to a small patch of grayscale pixel values; reducing the x-y size of the patch by a factor of 2. Only the maximum pixel values in 2x2 remain in the new, pooled output.
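The general sizing rule behind this is out = floor((in + 2*padding - kernel) / stride) + 1; a small sketch (the 200-pixel side length is just an example value):
###Code
# Sketch: the standard conv/pool output-size formula.
def out_size(n, k, s=1, p=0):
    return (n + 2 * p - k) // s + 1
print(out_size(200, 4))       # 4x4 conv, stride 1, no padding -> 197
print(out_size(197, 2, s=2))  # 2x2 pool, stride 2 -> 98
###Output
_____no_output_____
###Markdown
With those numbers in mind, here is the model definition again.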
###Code
import torch
import torch.nn as nn
import torch.nn.functional as F
# define a neural network with a convolutional layer with four filters
# AND a pooling layer of size (2, 2)
class Net(nn.Module):
def __init__(self, weight):
super(Net, self).__init__()
# initializes the weights of the convolutional layer to be the weights of the 4 defined filters
k_height, k_width = weight.shape[2:]
# assumes there are 4 grayscale filters
self.conv = nn.Conv2d(1, 4, kernel_size=(k_height, k_width), bias=False)
self.conv.weight = torch.nn.Parameter(weight)
# define a pooling layer
self.pool = nn.MaxPool2d(2, 2)
def forward(self, x):
# calculates the output of a convolutional layer
# pre- and post-activation
conv_x = self.conv(x)
activated_x = F.relu(conv_x)
# applies pooling layer
pooled_x = self.pool(activated_x)
# returns all layers
return conv_x, activated_x, pooled_x
# instantiate the model and set the weights
weight = torch.from_numpy(filters).unsqueeze(1).type(torch.FloatTensor)
model = Net(weight)
# print out the layer in the network
print(model)
###Output
_____no_output_____
###Markdown
Visualize the output of each filterFirst, we'll define a helper function, `viz_layer`, that takes in a specific layer and number of filters (optional argument), and displays the output of that layer once an image has been passed through.
###Code
# helper function for visualizing the output of a given layer
# default number of filters is 4
def viz_layer(layer, n_filters= 4):
fig = plt.figure(figsize=(20, 20))
for i in range(n_filters):
ax = fig.add_subplot(1, n_filters, i+1)
# grab layer outputs
ax.imshow(np.squeeze(layer[0,i].data.numpy()), cmap='gray')
ax.set_title('Output %s' % str(i+1))
###Output
_____no_output_____
###Markdown
Let's look at the output of a convolutional layer after a ReLU activation function is applied. ReLU activationA ReLU function turns all negative pixel values into 0's (black). See the equation pictured below for input pixel values, `x`.
###Code
# plot original image
plt.imshow(gray_img, cmap='gray')
# visualize all filters
fig = plt.figure(figsize=(12, 6))
fig.subplots_adjust(left=0, right=1.5, bottom=0.8, top=1, hspace=0.05, wspace=0.05)
for i in range(4):
ax = fig.add_subplot(1, 4, i+1, xticks=[], yticks=[])
ax.imshow(filters[i], cmap='gray')
ax.set_title('Filter %s' % str(i+1))
# convert the image into an input Tensor
gray_img_tensor = torch.from_numpy(gray_img).unsqueeze(0).unsqueeze(1)
# get all the layers
conv_layer, activated_layer, pooled_layer = model(gray_img_tensor)
# visualize the output of the activated conv layer
viz_layer(activated_layer)
###Output
_____no_output_____
###Markdown
Visualize the output of the pooling layerThen, take a look at the output of a pooling layer. The pooling layer takes as input the feature maps pictured above and reduces the dimensionality of those maps, by some pooling factor, by constructing a new, smaller image of only the maximum (brightest) values in a given kernel area.Take a look at the values on the x, y axes to see how the image has changed size.
###Code
# visualize the output of the pooling layer
viz_layer(pooled_layer)
###Output
_____no_output_____
###Markdown
Maxpooling LayerIn this notebook, we add and visualize the output of a maxpooling layer in a CNN. A convolutional layer + activation function, followed by a pooling layer, and a linear layer (to create a desired output size) make up the basic layers of a CNN. Import the image
###Code
import cv2
import matplotlib.pyplot as plt
%matplotlib inline
# TODO: Feel free to try out your own images here by changing img_path
# to a file path to another image on your computer!
img_path = 'data/udacity_sdc.png'
# load color image
bgr_img = cv2.imread(img_path)
# convert to grayscale
gray_img = cv2.cvtColor(bgr_img, cv2.COLOR_BGR2GRAY)
# normalize, rescale entries to lie in [0,1]
gray_img = gray_img.astype("float32")/255
# plot image
plt.imshow(gray_img, cmap='gray')
plt.show()
###Output
_____no_output_____
###Markdown
Define and visualize the filters
###Code
import numpy as np
## TODO: Feel free to modify the numbers here, to try out another filter!
filter_vals = np.array([[-1, -1, 1, 1], [-1, -1, 1, 1], [-1, -1, 1, 1], [-1, -1, 1, 1]])
print('Filter shape: ', filter_vals.shape)
# Defining four different filters,
# all of which are linear combinations of the `filter_vals` defined above
# define four filters
filter_1 = filter_vals
filter_2 = -filter_1
filter_3 = filter_1.T
filter_4 = -filter_3
filters = np.array([filter_1, filter_2, filter_3, filter_4])
# For an example, print out the values of filter 1
print('Filter 1: \n', filter_1)
###Output
Filter 1:
[[-1 -1 1 1]
[-1 -1 1 1]
[-1 -1 1 1]
[-1 -1 1 1]]
###Markdown
Define convolutional and pooling layersYou've seen how to define a convolutional layer, next is a:* Pooling layerIn the next cell, we initialize a convolutional layer so that it contains all the created filters. Then add a maxpooling layer, [documented here](http://pytorch.org/docs/stable/_modules/torch/nn/modules/pooling.html), with a kernel size of (2x2) so you can see that the image resolution has been reduced after this step!A maxpooling layer reduces the x-y size of an input and only keeps the most *active* pixel values. Below is an example of a 2x2 pooling kernel, with a stride of 2, applied to a small patch of grayscale pixel values; reducing the x-y size of the patch by a factor of 2. Only the maximum pixel values in 2x2 remain in the new, pooled output.
###Code
import torch
import torch.nn as nn
import torch.nn.functional as F
# define a neural network with a convolutional layer with four filters
# AND a pooling layer of size (2, 2)
class Net(nn.Module):
def __init__(self, weight):
super(Net, self).__init__()
# initializes the weights of the convolutional layer to be the weights of the 4 defined filters
k_height, k_width = weight.shape[2:]
# assumes there are 4 grayscale filters
self.conv = nn.Conv2d(1, 4, kernel_size=(k_height, k_width), bias=False)
self.conv.weight = torch.nn.Parameter(weight)
# define a pooling layer
self.pool = nn.MaxPool2d(2, 2)
def forward(self, x):
# calculates the output of a convolutional layer
# pre- and post-activation
conv_x = self.conv(x)
activated_x = F.relu(conv_x)
# applies pooling layer
pooled_x = self.pool(activated_x)
# returns all layers
return conv_x, activated_x, pooled_x
# instantiate the model and set the weights
weight = torch.from_numpy(filters).unsqueeze(1).type(torch.FloatTensor)
model = Net(weight)
# print out the layer in the network
print(model)
###Output
Net(
(conv): Conv2d(1, 4, kernel_size=(4, 4), stride=(1, 1), bias=False)
(pool): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
)
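###Markdown
A quick check that the assignment above really installed our hand-made filters as the layer's weights:
###Code
import torch
# Sketch: the conv layer's weights should match the NumPy filters exactly.
print(torch.allclose(model.conv.weight.squeeze(1),
                     torch.from_numpy(filters).float()))  # True
###Output
_____no_output_____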
###Markdown
Visualize the output of each filterFirst, we'll define a helper function, `viz_layer`, that takes in a specific layer and number of filters (optional argument), and displays the output of that layer once an image has been passed through.
###Code
# helper function for visualizing the output of a given layer
# default number of filters is 4
def viz_layer(layer, n_filters= 4):
fig = plt.figure(figsize=(20, 20))
for i in range(n_filters):
ax = fig.add_subplot(1, n_filters, i+1)
# grab layer outputs
ax.imshow(np.squeeze(layer[0,i].data.numpy()), cmap='gray')
ax.set_title('Output %s' % str(i+1))
###Output
_____no_output_____
###Markdown
Let's look at the output of a convolutional layer after a ReLU activation function is applied. ReLU activationA ReLU function turns all negative pixel values into 0's (black). See the equation pictured below for input pixel values, `x`.
###Code
# plot original image
plt.imshow(gray_img, cmap='gray')
# visualize all filters
fig = plt.figure(figsize=(12, 6))
fig.subplots_adjust(left=0, right=1.5, bottom=0.8, top=1, hspace=0.05, wspace=0.05)
for i in range(4):
ax = fig.add_subplot(1, 4, i+1, xticks=[], yticks=[])
ax.imshow(filters[i], cmap='gray')
ax.set_title('Filter %s' % str(i+1))
# convert the image into an input Tensor
gray_img_tensor = torch.from_numpy(gray_img).unsqueeze(0).unsqueeze(1)
# get all the layers
conv_layer, activated_layer, pooled_layer = model(gray_img_tensor)
# visualize the output of the activated conv layer
viz_layer(activated_layer)
###Output
_____no_output_____
###Markdown
Visualize the output of the pooling layerThen, take a look at the output of a pooling layer. The pooling layer takes as input the feature maps pictured above and reduces the dimensionality of those maps, by some pooling factor, by constructing a new, smaller image of only the maximum (brightest) values in a given kernel area.Take a look at the values on the x, y axes to see how the image has changed size.
###Code
# visualize the output of the pooling layer
viz_layer(pooled_layer)
###Output
_____no_output_____
###Markdown
Maxpooling LayerIn this notebook, we add and visualize the output of a maxpooling layer in a CNN. A convolutional layer + activation function, followed by a pooling layer, and a linear layer (to create a desired output size) make up the basic layers of a CNN. Import the image
###Code
import cv2
import matplotlib.pyplot as plt
%matplotlib inline
# TODO: Feel free to try out your own images here by changing img_path
# to a file path to another image on your computer!
img_path = 'data/udacity_sdc.png'
# load color image
bgr_img = cv2.imread(img_path)
# convert to grayscale
gray_img = cv2.cvtColor(bgr_img, cv2.COLOR_BGR2GRAY)
# normalize, rescale entries to lie in [0,1]
gray_img = gray_img.astype("float32")/255
# plot image
plt.imshow(gray_img, cmap='gray')
plt.show()
###Output
_____no_output_____
###Markdown
Define and visualize the filters
###Code
import numpy as np
## TODO: Feel free to modify the numbers here, to try out another filter!
filter_vals = np.array([[-1, -1, 1, 1], [-1, -1, 1, 1], [-1, -1, 1, 1], [-1, -1, 1, 1]])
print('Filter shape: ', filter_vals.shape)
# Defining four different filters,
# all of which are linear combinations of the `filter_vals` defined above
# define four filters
filter_1 = filter_vals
filter_2 = -filter_1
filter_3 = filter_1.T
filter_4 = -filter_3
filters = np.array([filter_1, filter_2, filter_3, filter_4])
# For an example, print out the values of filter 1
print('Filter 1: \n', filter_1)
###Output
Filter 1:
[[-1 -1 1 1]
[-1 -1 1 1]
[-1 -1 1 1]
[-1 -1 1 1]]
###Markdown
Define convolutional and pooling layersYou've seen how to define a convolutional layer, next is a:* Pooling layerIn the next cell, we initialize a convolutional layer so that it contains all the created filters. Then add a maxpooling layer, [documented here](http://pytorch.org/docs/stable/_modules/torch/nn/modules/pooling.html), with a kernel size of (2x2) so you can see that the image resolution has been reduced after this step!A maxpooling layer reduces the x-y size of an input and only keeps the most *active* pixel values. Below is an example of a 2x2 pooling kernel, with a stride of 2, applied to a small patch of grayscale pixel values; reducing the x-y size of the patch by a factor of 2. Only the maximum pixel values in 2x2 remain in the new, pooled output.
###Code
import torch
import torch.nn as nn
import torch.nn.functional as F
# define a neural network with a convolutional layer with four filters
# AND a pooling layer of size (2, 2)
class Net(nn.Module):
def __init__(self, weight):
super(Net, self).__init__()
# initializes the weights of the convolutional layer to be the weights of the 4 defined filters
k_height, k_width = weight.shape[2:]
# assumes there are 4 grayscale filters
self.conv = nn.Conv2d(1, 4, kernel_size=(k_height, k_width), bias=False)
self.conv.weight = torch.nn.Parameter(weight)
# define a pooling layer
self.pool = nn.MaxPool2d(2, 2)
def forward(self, x):
# calculates the output of a convolutional layer
# pre- and post-activation
conv_x = self.conv(x)
activated_x = F.relu(conv_x)
# applies pooling layer
pooled_x = self.pool(activated_x)
# returns all layers
return conv_x, activated_x, pooled_x
# instantiate the model and set the weights
weight = torch.from_numpy(filters).unsqueeze(1).type(torch.FloatTensor)
model = Net(weight)
# print out the layer in the network
print(model)
###Output
Net(
(conv): Conv2d(1, 4, kernel_size=(4, 4), stride=(1, 1), bias=False)
(pool): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
)
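###Markdown
Since `bias=False`, the network's only learnable tensor is the filter stack itself; a small sketch to confirm:
###Code
# Sketch: list the model's parameters (just the conv weights here).
for name, p in model.named_parameters():
    print(name, tuple(p.shape), 'requires_grad =', p.requires_grad)
###Output
_____no_output_____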
###Markdown
Visualize the output of each filterFirst, we'll define a helper function, `viz_layer`, that takes in a specific layer and number of filters (optional argument), and displays the output of that layer once an image has been passed through.
###Code
# helper function for visualizing the output of a given layer
# default number of filters is 4
def viz_layer(layer, n_filters= 4):
fig = plt.figure(figsize=(20, 20))
for i in range(n_filters):
ax = fig.add_subplot(1, n_filters, i+1)
# grab layer outputs
ax.imshow(np.squeeze(layer[0,i].data.numpy()), cmap='gray')
ax.set_title('Output %s' % str(i+1))
###Output
_____no_output_____
###Markdown
Let's look at the output of a convolutional layer after a ReLU activation function is applied. ReLU activationA ReLU function turns all negative pixel values into 0's (black). See the equation pictured below for input pixel values, `x`.
###Code
# plot original image
plt.imshow(gray_img, cmap='gray')
# visualize all filters
fig = plt.figure(figsize=(12, 6))
fig.subplots_adjust(left=0, right=1.5, bottom=0.8, top=1, hspace=0.05, wspace=0.05)
for i in range(4):
ax = fig.add_subplot(1, 4, i+1, xticks=[], yticks=[])
ax.imshow(filters[i], cmap='gray')
ax.set_title('Filter %s' % str(i+1))
# convert the image into an input Tensor
gray_img_tensor = torch.from_numpy(gray_img).unsqueeze(0).unsqueeze(1)
# get all the layers
conv_layer, activated_layer, pooled_layer = model(gray_img_tensor)
# visualize the output of the activated conv layer
viz_layer(activated_layer)
###Output
_____no_output_____
###Markdown
Visualize the output of the pooling layerThen, take a look at the output of a pooling layer. The pooling layer takes as input the feature maps pictured above and reduces the dimensionality of those maps, by some pooling factor, by constructing a new, smaller image of only the maximum (brightest) values in a given kernel area.Take a look at the values on the x, y axes to see how the image has changed size.
###Code
# visualize the output of the pooling layer
viz_layer(pooled_layer)
###Output
_____no_output_____
###Markdown
Maxpooling LayerIn this notebook, we add and visualize the output of a maxpooling layer in a CNN. A convolutional layer + activation function, followed by a pooling layer, and a linear layer (to create a desired output size) make up the basic layers of a CNN. Import the image
###Code
import cv2
import matplotlib.pyplot as plt
%matplotlib inline
# TODO: Feel free to try out your own images here by changing img_path
# to a file path to another image on your computer!
img_path = 'data/udacity_sdc.png'
# load color image
bgr_img = cv2.imread(img_path)
# convert to grayscale
gray_img = cv2.cvtColor(bgr_img, cv2.COLOR_BGR2GRAY)
# normalize, rescale entries to lie in [0,1]
gray_img = gray_img.astype("float32")/255
# plot image
plt.imshow(gray_img, cmap='gray')
plt.show()
###Output
_____no_output_____
###Markdown
Define and visualize the filters
###Code
import numpy as np
## TODO: Feel free to modify the numbers here, to try out another filter!
filter_vals = np.array([[-1, -1, 1, 1], [-1, -1, 1, 1], [-1, -1, 1, 1], [-1, -1, 1, 1]])
print('Filter shape: ', filter_vals.shape)
# Defining four different filters,
# all of which are linear combinations of the `filter_vals` defined above
# define four filters
filter_1 = filter_vals
filter_2 = -filter_1
filter_3 = filter_1.T
filter_4 = -filter_3
filters = np.array([filter_1, filter_2, filter_3, filter_4])
# For an example, print out the values of filter 1
print('Filter 1: \n', filter_1)
###Output
Filter 1:
[[-1 -1 1 1]
[-1 -1 1 1]
[-1 -1 1 1]
[-1 -1 1 1]]
###Markdown
Define convolutional and pooling layersYou've seen how to define a convolutional layer, next is a:* Pooling layerIn the next cell, we initialize a convolutional layer so that it contains all the created filters. Then add a maxpooling layer, [documented here](http://pytorch.org/docs/stable/_modules/torch/nn/modules/pooling.html), with a kernel size of (2x2) so you can see that the image resolution has been reduced after this step!A maxpooling layer reduces the x-y size of an input and only keeps the most *active* pixel values. Below is an example of a 2x2 pooling kernel, with a stride of 2, applied to a small patch of grayscale pixel values; reducing the x-y size of the patch by a factor of 2. Only the maximum pixel values in 2x2 remain in the new, pooled output.
###Code
import torch
import torch.nn as nn
import torch.nn.functional as F
# define a neural network with a convolutional layer with four filters
# AND a pooling layer of size (2, 2)
class Net(nn.Module):
def __init__(self, weight):
super(Net, self).__init__()
# initializes the weights of the convolutional layer to be the weights of the 4 defined filters
k_height, k_width = weight.shape[2:]
# assumes there are 4 grayscale filters
self.conv = nn.Conv2d(1, 4, kernel_size=(k_height, k_width), bias=False)
self.conv.weight = torch.nn.Parameter(weight)
# define a pooling layer
self.pool = nn.MaxPool2d(2, 2)
def forward(self, x):
# calculates the output of a convolutional layer
# pre- and post-activation
conv_x = self.conv(x)
activated_x = F.relu(conv_x)
# applies pooling layer
pooled_x = self.pool(activated_x)
# returns all layers
return conv_x, activated_x, pooled_x
# instantiate the model and set the weights
weight = torch.from_numpy(filters).unsqueeze(1).type(torch.FloatTensor)
model = Net(weight)
# print out the layer in the network
print(model)
###Output
Net(
(conv): Conv2d(1, 4, kernel_size=(4, 4), stride=(1, 1), bias=False)
(pool): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
)
###Markdown
Visualize the output of each filterFirst, we'll define a helper function, `viz_layer`, that takes in a specific layer and number of filters (optional argument), and displays the output of that layer once an image has been passed through.
###Code
# helper function for visualizing the output of a given layer
# default number of filters is 4
def viz_layer(layer, n_filters= 4):
fig = plt.figure(figsize=(20, 20))
for i in range(n_filters):
ax = fig.add_subplot(1, n_filters, i+1)
# grab layer outputs
ax.imshow(np.squeeze(layer[0,i].data.numpy()), cmap='gray')
ax.set_title('Output %s' % str(i+1))
###Output
_____no_output_____
###Markdown
Let's look at the output of a convolutional layer after a ReLU activation function is applied. ReLU activationA ReLU function turns all negative pixel values into 0's (black). See the equation pictured below for input pixel values, `x`.
###Code
# plot original image
plt.imshow(gray_img, cmap='gray')
# visualize all filters
fig = plt.figure(figsize=(12, 6))
fig.subplots_adjust(left=0, right=1.5, bottom=0.8, top=1, hspace=0.05, wspace=0.05)
for i in range(4):
ax = fig.add_subplot(1, 4, i+1, xticks=[], yticks=[])
ax.imshow(filters[i], cmap='gray')
ax.set_title('Filter %s' % str(i+1))
# convert the image into an input Tensor
gray_img_tensor = torch.from_numpy(gray_img).unsqueeze(0).unsqueeze(1)
# get all the layers
conv_layer, activated_layer, pooled_layer = model(gray_img_tensor)
# visualize the output of the activated conv layer
viz_layer(activated_layer)
###Output
_____no_output_____
###Markdown
Visualize the output of the pooling layerThen, take a look at the output of a pooling layer. The pooling layer takes as input the feature maps pictured above and reduces the dimensionality of those maps, by some pooling factor, by constructing a new, smaller image of only the maximum (brightest) values in a given kernel area.Take a look at the values on the x, y axes to see how the image has changed size.
###Code
# visualize the output of the pooling layer
viz_layer(pooled_layer)
###Output
_____no_output_____
###Markdown
Maxpooling LayerIn this notebook, we add and visualize the output of a maxpooling layer in a CNN. A convolutional layer + activation function, followed by a pooling layer, and a linear layer (to create a desired output size) make up the basic layers of a CNN. Import the image
###Code
import cv2
import matplotlib.pyplot as plt
%matplotlib inline
# TODO: Feel free to try out your own images here by changing img_path
# to a file path to another image on your computer!
img_path = 'data/udacity_sdc.png'
# load color image
bgr_img = cv2.imread(img_path)
# convert to grayscale
gray_img = cv2.cvtColor(bgr_img, cv2.COLOR_BGR2GRAY)
# normalize, rescale entries to lie in [0,1]
gray_img = gray_img.astype("float32")/255
# plot image
plt.imshow(gray_img, cmap='gray')
plt.show()
###Output
_____no_output_____
###Markdown
Define and visualize the filters
###Code
import numpy as np
## TODO: Feel free to modify the numbers here, to try out another filter!
filter_vals = np.array([[-1, -1, 1, 1], [-1, -1, 1, 1], [-1, -1, 1, 1], [-1, -1, 1, 1]])
print('Filter shape: ', filter_vals.shape)
# Defining four different filters,
# all of which are linear combinations of the `filter_vals` defined above
# define four filters
filter_1 = filter_vals
filter_2 = -filter_1
filter_3 = filter_1.T
filter_4 = -filter_3
filters = np.array([filter_1, filter_2, filter_3, filter_4])
# For an example, print out the values of filter 1
print('Filter 1: \n', filter_1)
###Output
_____no_output_____
###Markdown
Define convolutional and pooling layersYou've seen how to define a convolutional layer, next is a:* Pooling layerIn the next cell, we initialize a convolutional layer so that it contains all the created filters. Then add a maxpooling layer, [documented here](http://pytorch.org/docs/stable/_modules/torch/nn/modules/pooling.html), with a kernel size of (2x2) so you can see that the image resolution has been reduced after this step!A maxpooling layer reduces the x-y size of an input and only keeps the most *active* pixel values. Below is an example of a 2x2 pooling kernel, with a stride of 2, applied to a small patch of grayscale pixel values; reducing the x-y size of the patch by a factor of 2. Only the maximum pixel values in 2x2 remain in the new, pooled output.
###Code
import torch
import torch.nn as nn
import torch.nn.functional as F
# define a neural network with a convolutional layer with four filters
# AND a pooling layer of size (2, 2)
class Net(nn.Module):
def __init__(self, weight):
super(Net, self).__init__()
# initializes the weights of the convolutional layer to be the weights of the 4 defined filters
k_height, k_width = weight.shape[2:]
# assumes there are 4 grayscale filters
self.conv = nn.Conv2d(1, 4, kernel_size=(k_height, k_width), bias=False)
self.conv.weight = torch.nn.Parameter(weight)
# define a pooling layer
self.pool = nn.MaxPool2d(2, 2)
def forward(self, x):
# calculates the output of a convolutional layer
# pre- and post-activation
conv_x = self.conv(x)
activated_x = F.relu(conv_x)
# applies pooling layer
pooled_x = self.pool(activated_x)
# returns all layers
return conv_x, activated_x, pooled_x
# instantiate the model and set the weights
weight = torch.from_numpy(filters).unsqueeze(1).type(torch.FloatTensor)
model = Net(weight)
# print out the layer in the network
print(model)
###Output
_____no_output_____
###Markdown
Visualize the output of each filterFirst, we'll define a helper function, `viz_layer`, that takes in a specific layer and number of filters (optional argument), and displays the output of that layer once an image has been passed through.
###Code
# helper function for visualizing the output of a given layer
# default number of filters is 4
def viz_layer(layer, n_filters= 4):
fig = plt.figure(figsize=(20, 20))
for i in range(n_filters):
ax = fig.add_subplot(1, n_filters, i+1)
# grab layer outputs
ax.imshow(np.squeeze(layer[0,i].data.numpy()), cmap='gray')
ax.set_title('Output %s' % str(i+1))
###Output
_____no_output_____
###Markdown
Let's look at the output of a convolutional layer after a ReLU activation function is applied. ReLU activationA ReLU function turns all negative pixel values into 0's (black). See the equation pictured below for input pixel values, `x`.
###Code
# plot original image
plt.imshow(gray_img, cmap='gray')
# visualize all filters
fig = plt.figure(figsize=(12, 6))
fig.subplots_adjust(left=0, right=1.5, bottom=0.8, top=1, hspace=0.05, wspace=0.05)
for i in range(4):
ax = fig.add_subplot(1, 4, i+1, xticks=[], yticks=[])
ax.imshow(filters[i], cmap='gray')
ax.set_title('Filter %s' % str(i+1))
# convert the image into an input Tensor
gray_img_tensor = torch.from_numpy(gray_img).unsqueeze(0).unsqueeze(1)
# get all the layers
conv_layer, activated_layer, pooled_layer = model(gray_img_tensor)
# visualize the output of the activated conv layer
viz_layer(activated_layer)
###Output
_____no_output_____
###Markdown
Visualize the output of the pooling layerThen, take a look at the output of a pooling layer. The pooling layer takes as input the feature maps pictured above and reduces the dimensionality of those maps, by some pooling factor, by constructing a new, smaller image of only the maximum (brightest) values in a given kernel area.Take a look at the values on the x, y axes to see how the image has changed size.
###Code
# visualize the output of the pooling layer
viz_layer(pooled_layer)
###Output
_____no_output_____
###Markdown
Maxpooling LayerIn this notebook, we add and visualize the output of a maxpooling layer in a CNN. A convolutional layer + activation function, followed by a pooling layer, and a linear layer (to create a desired output size) make up the basic layers of a CNN. Import the image
###Code
import cv2
import matplotlib.pyplot as plt
%matplotlib inline
# TODO: Feel free to try out your own images here by changing img_path
# to a file path to another image on your computer!
img_path = 'data/udacity_sdc.png'
# load color image
bgr_img = cv2.imread(img_path)
# convert to grayscale
gray_img = cv2.cvtColor(bgr_img, cv2.COLOR_BGR2GRAY)
# normalize, rescale entries to lie in [0,1]
gray_img = gray_img.astype("float32")/255
# plot image
plt.imshow(gray_img, cmap='gray')
plt.show()
###Output
_____no_output_____
###Markdown
Define and visualize the filters
###Code
import numpy as np
## TODO: Feel free to modify the numbers here, to try out another filter!
filter_vals = np.array([[-1, -1, 1, 1], [-1, -1, 1, 1], [-1, -1, 1, 1], [-1, -1, 1, 1]])
print('Filter shape: ', filter_vals.shape)
# Defining four different filters,
# all of which are linear combinations of the `filter_vals` defined above
# define four filters
filter_1 = filter_vals
filter_2 = -filter_1
filter_3 = filter_1.T
filter_4 = -filter_3
filters = np.array([filter_1, filter_2, filter_3, filter_4])
# For an example, print out the values of filter 1
print('Filter 1: \n', filter_1)
###Output
Filter 1:
[[-1 -1 1 1]
[-1 -1 1 1]
[-1 -1 1 1]
[-1 -1 1 1]]
###Markdown
Define convolutional and pooling layersYou've seen how to define a convolutional layer, next is a:* Pooling layerIn the next cell, we initialize a convolutional layer so that it contains all the created filters. Then add a maxpooling layer, [documented here](http://pytorch.org/docs/stable/_modules/torch/nn/modules/pooling.html), with a kernel size of (2x2) so you can see that the image resolution has been reduced after this step!A maxpooling layer reduces the x-y size of an input and only keeps the most *active* pixel values. Below is an example of a 2x2 pooling kernel, with a stride of 2, applied to a small patch of grayscale pixel values; reducing the x-y size of the patch by a factor of 2. Only the maximum pixel values in 2x2 remain in the new, pooled output.
###Code
import torch
import torch.nn as nn
import torch.nn.functional as F
# define a neural network with a convolutional layer with four filters
# AND a pooling layer of size (2, 2)
class Net(nn.Module):
def __init__(self, weight):
super(Net, self).__init__()
# initializes the weights of the convolutional layer to be the weights of the 4 defined filters
k_height, k_width = weight.shape[2:]
# assumes there are 4 grayscale filters
self.conv = nn.Conv2d(1, 4, kernel_size=(k_height, k_width), bias=False)
self.conv.weight = torch.nn.Parameter(weight)
# define a pooling layer
self.pool = nn.MaxPool2d(2, 2)
def forward(self, x):
# calculates the output of a convolutional layer
# pre- and post-activation
conv_x = self.conv(x)
activated_x = F.relu(conv_x)
# applies pooling layer
pooled_x = self.pool(activated_x)
# returns all layers
return conv_x, activated_x, pooled_x
# instantiate the model and set the weights
weight = torch.from_numpy(filters).unsqueeze(1).type(torch.FloatTensor)
model = Net(weight)
# print out the layer in the network
print(model)
###Output
Net(
(conv): Conv2d(1, 4, kernel_size=(4, 4), stride=(1, 1), bias=False)
(pool): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
)
###Markdown
Visualize the output of each filterFirst, we'll define a helper function, `viz_layer`, that takes in a specific layer and number of filters (optional argument), and displays the output of that layer once an image has been passed through.
###Code
# helper function for visualizing the output of a given layer
# default number of filters is 4
def viz_layer(layer, n_filters=4):
fig = plt.figure(figsize=(20, 20))
for i in range(n_filters):
ax = fig.add_subplot(1, n_filters, i+1)
# grab layer outputs
ax.imshow(np.squeeze(layer[0,i].data.numpy()), cmap='gray')
ax.set_title('Output %s' % str(i+1))
###Output
_____no_output_____
###Markdown
Let's look at the output of a convolutional layer after a ReLU activation function is applied. ReLU activationA ReLU function turns all negative pixel values into 0s (black): for input pixel values `x`, the output is `max(0, x)`.
###Code
# plot original image
plt.imshow(gray_img, cmap='gray')
# visualize all filters
fig = plt.figure(figsize=(12, 6))
fig.subplots_adjust(left=0, right=1.5, bottom=0.8, top=1, hspace=0.05, wspace=0.05)
for i in range(4):
ax = fig.add_subplot(1, 4, i+1, xticks=[], yticks=[])
ax.imshow(filters[i], cmap='gray')
ax.set_title('Filter %s' % str(i+1))
# convert the image into an input Tensor with shape (batch=1, channels=1, H, W)
gray_img_tensor = torch.from_numpy(gray_img).unsqueeze(0).unsqueeze(1)
# get all the layers
conv_layer, activated_layer, pooled_layer = model(gray_img_tensor)
# visualize the output of the activated conv layer
viz_layer(activated_layer)
###Output
_____no_output_____
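###Markdown
As a quick numeric check of the ReLU behavior described above (a minimal sketch on a made-up tensor), negative values are zeroed and non-negative values pass through unchanged:
###Code
sample = torch.tensor([-2.0, -0.5, 0.0, 0.5, 2.0])
# expected: the two negatives become 0, the rest are unchanged
print(F.relu(sample))
###Output
_____no_output_____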
###Markdown
Visualize the output of the pooling layerThen, take a look at the output of a pooling layer. The pooling layer takes as input the feature maps pictured above and reduces the dimensionality of those maps, by some pooling factor, by constructing a new, smaller image of only the maximum (brightest) values in a given kernel area.Take a look at the values on the x, y axes to see how the image has changed size.
###Code
# visualize the output of the pooling layer
viz_layer(pooled_layer)
###Output
_____no_output_____
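###Markdown
To quantify the size change described above (assuming the `activated_layer` and `pooled_layer` tensors from the cells above), comparing shapes confirms that the (2, 2) maxpool halves each spatial dimension (rounded down):
###Code
print('Activated conv output shape:', activated_layer.shape)
print('Pooled output shape:        ', pooled_layer.shape)
###Output
_____no_output_____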
###Markdown
The maxpool layer has (2, 2) sized filters. As I've shown in `conv_visualization.ipynb`, with an image of this size (~ 300x200 pixels), such a filter is small enough not to make the features too blurry. Out of interest, let's compare the pooled image with a non-pooled image:
###Code
viz_layer(F.relu(conv_layer))
###Output
_____no_output_____
###Markdown
Maxpooling LayerIn this notebook, we add and visualize the output of a maxpooling layer in a CNN. A convolutional layer + activation function, followed by a pooling layer, and a linear layer (to create a desired output size) make up the basic layers of a CNN. Import the image
###Code
import cv2
import matplotlib.pyplot as plt
%matplotlib inline
# TODO: Feel free to try out your own images here by changing img_path
# to a file path to another image on your computer!
img_path = 'data/udacity_sdc.png'
# load color image
bgr_img = cv2.imread(img_path)
# convert to grayscale
gray_img = cv2.cvtColor(bgr_img, cv2.COLOR_BGR2GRAY)
# normalize, rescale entries to lie in [0,1]
gray_img = gray_img.astype("float32")/255
# plot image
plt.imshow(gray_img, cmap='gray')
plt.show()
###Output
_____no_output_____
###Markdown
Define and visualize the filters
###Code
import numpy as np
## TODO: Feel free to modify the numbers here, to try out another filter!
filter_vals = np.array([[-1, -1, 1, 1], [-1, -1, 1, 1], [-1, -1, 1, 1], [-1, -1, 1, 1]])
print('Filter shape: ', filter_vals.shape)
# Defining four different filters,
# all of which are linear combinations of the `filter_vals` defined above
# define four filters
filter_1 = filter_vals
filter_2 = -filter_1
filter_3 = filter_1.T
filter_4 = -filter_3
filters = np.array([filter_1, filter_2, filter_3, filter_4])
# For an example, print out the values of filter 1
print('Filter 1: \n', filter_1)
###Output
Filter 1:
[[-1 -1 1 1]
[-1 -1 1 1]
[-1 -1 1 1]
[-1 -1 1 1]]
###Markdown
Define convolutional and pooling layersYou've seen how to define a convolutional layer, next is a:* Pooling layerIn the next cell, we initialize a convolutional layer so that it contains all the created filters. Then add a maxpooling layer, [documented here](http://pytorch.org/docs/stable/_modules/torch/nn/modules/pooling.html), with a kernel size of (2x2) so you can see that the image resolution has been reduced after this step!A maxpooling layer reduces the x-y size of an input and only keeps the most *active* pixel values. Below is an example of a 2x2 pooling kernel, with a stride of 2, applied to a small patch of grayscale pixel values; reducing the x-y size of the patch by a factor of 2. Only the maximum pixel values in 2x2 remain in the new, pooled output.
###Code
import torch
import torch.nn as nn
import torch.nn.functional as F
# define a neural network with a convolutional layer with four filters
# AND a pooling layer of size (2, 2)
class Net(nn.Module):
def __init__(self, weight):
super(Net, self).__init__()
# initializes the weights of the convolutional layer to be the weights of the 4 defined filters
k_height, k_width = weight.shape[2:]
# assumes there are 4 grayscale filters
self.conv = nn.Conv2d(1, 4, kernel_size=(k_height, k_width), bias=False)
self.conv.weight = torch.nn.Parameter(weight)
# define a pooling layer
self.pool = nn.MaxPool2d(2, 2)
def forward(self, x):
# calculates the output of a convolutional layer
# pre- and post-activation
conv_x = self.conv(x)
activated_x = F.relu(conv_x)
# applies pooling layer
pooled_x = self.pool(activated_x)
# returns all layers
return conv_x, activated_x, pooled_x
# instantiate the model and set the weights
weight = torch.from_numpy(filters).unsqueeze(1).type(torch.FloatTensor)
model = Net(weight)
# print out the layer in the network
print(model)
###Output
Net(
(conv): Conv2d(1, 4, kernel_size=(4, 4), stride=(1, 1), bias=False)
(pool): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
)
###Markdown
Visualize the output of each filterFirst, we'll define a helper function, `viz_layer` that takes in a specific layer and number of filters (optional argument), and displays the output of that layer once an image has been passed through.
###Code
# helper function for visualizing the output of a given layer
# default number of filters is 4
def viz_layer(layer, n_filters= 4):
fig = plt.figure(figsize=(20, 20))
for i in range(n_filters):
ax = fig.add_subplot(1, n_filters, i+1)
# grab layer outputs
ax.imshow(np.squeeze(layer[0,i].data.numpy()), cmap='gray')
ax.set_title('Output %s' % str(i+1))
###Output
_____no_output_____
###Markdown
Let's look at the output of a convolutional layer after a ReLu activation function is applied. ReLu activationA ReLu function turns all negative pixel values in 0's (black). See the equation pictured below for input pixel values, `x`.
###Code
# plot original image
plt.imshow(gray_img, cmap='gray')
# visualize all filters
fig = plt.figure(figsize=(12, 6))
fig.subplots_adjust(left=0, right=1.5, bottom=0.8, top=1, hspace=0.05, wspace=0.05)
for i in range(4):
ax = fig.add_subplot(1, 4, i+1, xticks=[], yticks=[])
ax.imshow(filters[i], cmap='gray')
ax.set_title('Filter %s' % str(i+1))
# convert the image into an input Tensor
gray_img_tensor = torch.from_numpy(gray_img).unsqueeze(0).unsqueeze(1)
# get all the layers
conv_layer, activated_layer, pooled_layer = model(gray_img_tensor)
# visualize the output of the activated conv layer
viz_layer(activated_layer)
###Output
_____no_output_____
###Markdown
Visualize the output of the pooling layerThen, take a look at the output of a pooling layer. The pooling layer takes as input the feature maps pictured above and reduces the dimensionality of those maps, by some pooling factor, by constructing a new, smaller image of only the maximum (brightest) values in a given kernel area.Take a look at the values on the x, y axes to see how the image has changed size.
###Code
# visualize the output of the pooling layer
viz_layer(pooled_layer)
###Output
_____no_output_____
###Markdown
Maxpooling LayerIn this notebook, we add and visualize the output of a maxpooling layer in a CNN. A convolutional layer + activation function, followed by a pooling layer, and a linear layer (to create a desired output size) make up the basic layers of a CNN. Import the image
###Code
import cv2
import matplotlib.pyplot as plt
%matplotlib inline
# TODO: Feel free to try out your own images here by changing img_path
# to a file path to another image on your computer!
img_path = 'data/udacity_sdc.png'
# load color image
bgr_img = cv2.imread(img_path)
# convert to grayscale
gray_img = cv2.cvtColor(bgr_img, cv2.COLOR_BGR2GRAY)
# normalize, rescale entries to lie in [0,1]
gray_img = gray_img.astype("float32")/255
# plot image
plt.imshow(gray_img, cmap='gray')
plt.show()
###Output
_____no_output_____
###Markdown
Define and visualize the filters
###Code
import numpy as np
## TODO: Feel free to modify the numbers here, to try out another filter!
filter_vals = np.array([[-1, -1, 1, 1], [-1, -1, 1, 1], [-1, -1, 1, 1], [-1, -1, 1, 1]])
print('Filter shape: ', filter_vals.shape)
# Defining four different filters,
# all of which are linear combinations of the `filter_vals` defined above
# define four filters
filter_1 = filter_vals
filter_2 = -filter_1
filter_3 = filter_1.T
filter_4 = -filter_3
filters = np.array([filter_1, filter_2, filter_3, filter_4])
# For an example, print out the values of filter 1
print('Filter 1: \n', filter_3)
###Output
Filter 1:
[[-1 -1 -1 -1]
[-1 -1 -1 -1]
[ 1 1 1 1]
[ 1 1 1 1]]
###Markdown
Define convolutional and pooling layersYou've seen how to define a convolutional layer, next is a:* Pooling layerIn the next cell, we initialize a convolutional layer so that it contains all the created filters. Then add a maxpooling layer, [documented here](http://pytorch.org/docs/stable/_modules/torch/nn/modules/pooling.html), with a kernel size of (2x2) so you can see that the image resolution has been reduced after this step!A maxpooling layer reduces the x-y size of an input and only keeps the most *active* pixel values. Below is an example of a 2x2 pooling kernel, with a stride of 2, appied to a small patch of grayscale pixel values; reducing the x-y size of the patch by a factor of 2. Only the maximum pixel values in 2x2 remain in the new, pooled output.
###Code
import torch
import torch.nn as nn
import torch.nn.functional as F
# define a neural network with a convolutional layer with four filters
# AND a pooling layer of size (2, 2)
class Net(nn.Module):
def __init__(self, weight):
super(Net, self).__init__()
# initializes the weights of the convolutional layer to be the weights of the 4 defined filters
k_height, k_width = weight.shape[2:]
# assumes there are 4 grayscale filters
self.conv = nn.Conv2d(1, 4, kernel_size=(k_height, k_width), bias=False)
self.conv.weight = torch.nn.Parameter(weight)
# define a pooling layer
self.pool = nn.MaxPool2d(2, 2)
def forward(self, x):
# calculates the output of a convolutional layer
# pre- and post-activation
conv_x = self.conv(x)
activated_x = F.relu(conv_x)
# applies pooling layer
pooled_x = self.pool(activated_x)
# returns all layers
return conv_x, activated_x, pooled_x
# instantiate the model and set the weights
weight = torch.from_numpy(filters).unsqueeze(1).type(torch.FloatTensor)
model = Net(weight)
# print out the layer in the network
print(model)
###Output
Net(
(conv): Conv2d(1, 4, kernel_size=(4, 4), stride=(1, 1), bias=False)
(pool): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
)
###Markdown
Visualize the output of each filterFirst, we'll define a helper function, `viz_layer` that takes in a specific layer and number of filters (optional argument), and displays the output of that layer once an image has been passed through.
###Code
# helper function for visualizing the output of a given layer
# default number of filters is 4
def viz_layer(layer, n_filters= 4):
fig = plt.figure(figsize=(20, 20))
for i in range(n_filters):
ax = fig.add_subplot(1, n_filters, i+1)
# grab layer outputs
ax.imshow(np.squeeze(layer[0,i].data.numpy()), cmap='gray')
ax.set_title('Output %s' % str(i+1))
###Output
_____no_output_____
###Markdown
Let's look at the output of a convolutional layer after a ReLu activation function is applied. ReLu activationA ReLu function turns all negative pixel values in 0's (black). See the equation pictured below for input pixel values, `x`.
###Code
# plot original image
plt.imshow(gray_img, cmap='gray')
# visualize all filters
fig = plt.figure(figsize=(12, 6))
fig.subplots_adjust(left=0, right=1.5, bottom=0.8, top=1, hspace=0.05, wspace=0.05)
for i in range(4):
ax = fig.add_subplot(1, 4, i+1, xticks=[], yticks=[])
ax.imshow(filters[i], cmap='gray')
ax.set_title('Filter %s' % str(i+1))
# convert the image into an input Tensor
gray_img_tensor = torch.from_numpy(gray_img).unsqueeze(0).unsqueeze(1)
# get all the layers
conv_layer, activated_layer, pooled_layer = model(gray_img_tensor)
# visualize the output of the activated conv layer
viz_layer(activated_layer)
###Output
3
###Markdown
Visualize the output of the pooling layerThen, take a look at the output of a pooling layer. The pooling layer takes as input the feature maps pictured above and reduces the dimensionality of those maps, by some pooling factor, by constructing a new, smaller image of only the maximum (brightest) values in a given kernel area.Take a look at the values on the x, y axes to see how the image has changed size.
###Code
# visualize the output of the pooling layer
viz_layer(pooled_layer)
###Output
_____no_output_____
###Markdown
Maxpooling LayerIn this notebook, we add and visualize the output of a maxpooling layer in a CNN. A convolutional layer + activation function, followed by a pooling layer, and a linear layer (to create a desired output size) make up the basic layers of a CNN. Import the image
###Code
import cv2
import matplotlib.pyplot as plt
%matplotlib inline
# TODO: Feel free to try out your own images here by changing img_path
# to a file path to another image on your computer!
img_path = 'data/udacity_sdc.png'
# load color image
bgr_img = cv2.imread(img_path)
# convert to grayscale
gray_img = cv2.cvtColor(bgr_img, cv2.COLOR_BGR2GRAY)
# normalize, rescale entries to lie in [0,1]
gray_img = gray_img.astype("float32")/255
# plot image
plt.imshow(gray_img, cmap='gray')
plt.show()
###Output
_____no_output_____
###Markdown
Define and visualize the filters
###Code
import numpy as np
## TODO: Feel free to modify the numbers here, to try out another filter!
filter_vals = np.array([[-1, -1, 1, 1], [-1, -1, 1, 1], [-1, -1, 1, 1], [-1, -1, 1, 1]])
print('Filter shape: ', filter_vals.shape)
# Defining four different filters,
# all of which are linear combinations of the `filter_vals` defined above
# define four filters
filter_1 = filter_vals
filter_2 = -filter_1
filter_3 = filter_1.T
filter_4 = -filter_3
filters = np.array([filter_1, filter_2, filter_3, filter_4])
# For an example, print out the values of filter 1
print('Filter 1: \n', filter_1)
###Output
Filter 1:
[[-1 -1 1 1]
[-1 -1 1 1]
[-1 -1 1 1]
[-1 -1 1 1]]
###Markdown
Define convolutional and pooling layersYou've seen how to define a convolutional layer, next is a:* Pooling layerIn the next cell, we initialize a convolutional layer so that it contains all the created filters. Then add a maxpooling layer, [documented here](http://pytorch.org/docs/stable/_modules/torch/nn/modules/pooling.html), with a kernel size of (2x2) so you can see that the image resolution has been reduced after this step!A maxpooling layer reduces the x-y size of an input and only keeps the most *active* pixel values. Below is an example of a 2x2 pooling kernel, with a stride of 2, applied to a small patch of grayscale pixel values; reducing the x-y size of the patch by a factor of 2. Only the maximum pixel values in 2x2 remain in the new, pooled output.
###Code
import torch
import torch.nn as nn
import torch.nn.functional as F
# define a neural network with a convolutional layer with four filters
# AND a pooling layer of size (2, 2)
class Net(nn.Module):
def __init__(self, weight):
super(Net, self).__init__()
# initializes the weights of the convolutional layer to be the weights of the 4 defined filters
k_height, k_width = weight.shape[2:]
# assumes there are 4 grayscale filters
self.conv = nn.Conv2d(1, 4, kernel_size=(k_height, k_width), bias=False)
self.conv.weight = torch.nn.Parameter(weight)
# define a pooling layer
self.pool = nn.MaxPool2d(2, 2)
def forward(self, x):
# calculates the output of a convolutional layer
# pre- and post-activation
conv_x = self.conv(x)
activated_x = F.relu(conv_x)
# applies pooling layer
pooled_x = self.pool(activated_x)
# returns all layers
return conv_x, activated_x, pooled_x
# instantiate the model and set the weights
weight = torch.from_numpy(filters).unsqueeze(1).type(torch.FloatTensor)
model = Net(weight)
# print out the layer in the network
print(model)
###Output
Net(
(conv): Conv2d(1, 4, kernel_size=(4, 4), stride=(1, 1), bias=False)
(pool): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
)
###Markdown
Visualize the output of each filterFirst, we'll define a helper function, `viz_layer` that takes in a specific layer and number of filters (optional argument), and displays the output of that layer once an image has been passed through.
###Code
# helper function for visualizing the output of a given layer
# default number of filters is 4
def viz_layer(layer, n_filters= 4):
fig = plt.figure(figsize=(20, 20))
for i in range(n_filters):
ax = fig.add_subplot(1, n_filters, i+1)
# grab layer outputs
ax.imshow(np.squeeze(layer[0,i].data.numpy()), cmap='gray')
ax.set_title('Output %s' % str(i+1))
###Output
_____no_output_____
###Markdown
Let's look at the output of a convolutional layer after a ReLu activation function is applied. ReLu activationA ReLu function turns all negative pixel values in 0's (black). See the equation pictured below for input pixel values, `x`.
###Code
# plot original image
plt.imshow(gray_img, cmap='gray')
# visualize all filters
fig = plt.figure(figsize=(12, 6))
fig.subplots_adjust(left=0, right=1.5, bottom=0.8, top=1, hspace=0.05, wspace=0.05)
for i in range(4):
ax = fig.add_subplot(1, 4, i+1, xticks=[], yticks=[])
ax.imshow(filters[i], cmap='gray')
ax.set_title('Filter %s' % str(i+1))
# convert the image into an input Tensor
gray_img_tensor = torch.from_numpy(gray_img).unsqueeze(0).unsqueeze(1)
# get all the layers
conv_layer, activated_layer, pooled_layer = model(gray_img_tensor)
# visualize the output of the activated conv layer
viz_layer(activated_layer)
###Output
_____no_output_____
###Markdown
Visualize the output of the pooling layerThen, take a look at the output of a pooling layer. The pooling layer takes as input the feature maps pictured above and reduces the dimensionality of those maps, by some pooling factor, by constructing a new, smaller image of only the maximum (brightest) values in a given kernel area.Take a look at the values on the x, y axes to see how the image has changed size.
###Code
# visualize the output of the pooling layer
viz_layer(pooled_layer)
###Output
_____no_output_____
###Markdown
Maxpooling LayerIn this notebook, we add and visualize the output of a maxpooling layer in a CNN. A convolutional layer + activation function, followed by a pooling layer, and a linear layer (to create a desired output size) make up the basic layers of a CNN. Import the image
###Code
import cv2
import matplotlib.pyplot as plt
%matplotlib inline
# TODO: Feel free to try out your own images here by changing img_path
# to a file path to another image on your computer!
img_path = 'data/udacity_sdc.png'
# load color image
bgr_img = cv2.imread(img_path)
# convert to grayscale
gray_img = cv2.cvtColor(bgr_img, cv2.COLOR_BGR2GRAY)
# normalize, rescale entries to lie in [0,1]
gray_img = gray_img.astype("float32")/255
# plot image
plt.imshow(gray_img, cmap='gray')
plt.show()
###Output
_____no_output_____
###Markdown
Define and visualize the filters
###Code
import numpy as np
## TODO: Feel free to modify the numbers here, to try out another filter!
filter_vals = np.array([[-1, -1, 1, 1], [-1, -1, 1, 1], [-1, -1, 1, 1], [-1, -1, 1, 1]])
print('Filter shape: ', filter_vals.shape)
# Defining four different filters,
# all of which are linear combinations of the `filter_vals` defined above
# define four filters
filter_1 = filter_vals
filter_2 = -filter_1
filter_3 = filter_1.T
filter_4 = -filter_3
filters = np.array([filter_1, filter_2, filter_3, filter_4])
# For an example, print out the values of filter 1
print('Filter 1: \n', filter_1)
###Output
Filter 1:
[[-1 -1 1 1]
[-1 -1 1 1]
[-1 -1 1 1]
[-1 -1 1 1]]
###Markdown
Define convolutional and pooling layersYou've seen how to define a convolutional layer, next is a:* Pooling layerIn the next cell, we initialize a convolutional layer so that it contains all the created filters. Then add a maxpooling layer, [documented here](http://pytorch.org/docs/stable/_modules/torch/nn/modules/pooling.html), with a kernel size of (2x2) so you can see that the image resolution has been reduced after this step!A maxpooling layer reduces the x-y size of an input and only keeps the most *active* pixel values. Below is an example of a 2x2 pooling kernel, with a stride of 2, appied to a small patch of grayscale pixel values; reducing the x-y size of the patch by a factor of 2. Only the maximum pixel values in 2x2 remain in the new, pooled output.
###Code
import torch
import torch.nn as nn
import torch.nn.functional as F
# define a neural network with a convolutional layer with four filters
# AND a pooling layer of size (2, 2)
class Net(nn.Module):
def __init__(self, weight):
super(Net, self).__init__()
# initializes the weights of the convolutional layer to be the weights of the 4 defined filters
k_height, k_width = weight.shape[2:]
# assumes there are 4 grayscale filters
self.conv = nn.Conv2d(1, 4, kernel_size=(k_height, k_width), bias=False)
self.conv.weight = torch.nn.Parameter(weight)
# define a pooling layer
self.pool = nn.MaxPool2d(2, 2)
def forward(self, x):
# calculates the output of a convolutional layer
# pre- and post-activation
conv_x = self.conv(x)
activated_x = F.relu(conv_x)
# applies pooling layer
pooled_x = self.pool(activated_x)
# returns all layers
return conv_x, activated_x, pooled_x
# instantiate the model and set the weights
weight = torch.from_numpy(filters).unsqueeze(1).type(torch.FloatTensor)
model = Net(weight)
# print out the layer in the network
print(model)
###Output
Net(
(conv): Conv2d(1, 4, kernel_size=(4, 4), stride=(1, 1), bias=False)
(pool): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
)
###Markdown
Visualize the output of each filterFirst, we'll define a helper function, `viz_layer` that takes in a specific layer and number of filters (optional argument), and displays the output of that layer once an image has been passed through.
###Code
# helper function for visualizing the output of a given layer
# default number of filters is 4
def viz_layer(layer, n_filters= 4):
fig = plt.figure(figsize=(20, 20))
for i in range(n_filters):
ax = fig.add_subplot(1, n_filters, i+1)
# grab layer outputs
ax.imshow(np.squeeze(layer[0,i].data.numpy()), cmap='gray')
ax.set_title('Output %s' % str(i+1))
###Output
_____no_output_____
###Markdown
Let's look at the output of a convolutional layer after a ReLu activation function is applied. ReLu activationA ReLu function turns all negative pixel values in 0's (black). See the equation pictured below for input pixel values, `x`.
###Code
# plot original image
plt.imshow(gray_img, cmap='gray')
# visualize all filters
fig = plt.figure(figsize=(12, 6))
fig.subplots_adjust(left=0, right=1.5, bottom=0.8, top=1, hspace=0.05, wspace=0.05)
for i in range(4):
ax = fig.add_subplot(1, 4, i+1, xticks=[], yticks=[])
ax.imshow(filters[i], cmap='gray')
ax.set_title('Filter %s' % str(i+1))
# convert the image into an input Tensor
gray_img_tensor = torch.from_numpy(gray_img).unsqueeze(0).unsqueeze(1)
# get all the layers
conv_layer, activated_layer, pooled_layer = model(gray_img_tensor)
# visualize the output of the activated conv layer
viz_layer(activated_layer)
###Output
_____no_output_____
###Markdown
Visualize the output of the pooling layerThen, take a look at the output of a pooling layer. The pooling layer takes as input the feature maps pictured above and reduces the dimensionality of those maps, by some pooling factor, by constructing a new, smaller image of only the maximum (brightest) values in a given kernel area.Take a look at the values on the x, y axes to see how the image has changed size.
###Code
# visualize the output of the pooling layer
viz_layer(pooled_layer)
###Output
_____no_output_____
###Markdown
Maxpooling LayerIn this notebook, we add and visualize the output of a maxpooling layer in a CNN. A convolutional layer + activation function, followed by a pooling layer, and a linear layer (to create a desired output size) make up the basic layers of a CNN. Import the image
###Code
import cv2
import matplotlib.pyplot as plt
%matplotlib inline
# TODO: Feel free to try out your own images here by changing img_path
# to a file path to another image on your computer!
img_path = 'data/udacity_sdc.png'
# load color image
bgr_img = cv2.imread(img_path)
# convert to grayscale
gray_img = cv2.cvtColor(bgr_img, cv2.COLOR_BGR2GRAY)
# normalize, rescale entries to lie in [0,1]
gray_img = gray_img.astype("float32")/255
# plot image
plt.imshow(gray_img, cmap='gray')
plt.show()
###Output
_____no_output_____
###Markdown
Define and visualize the filters
###Code
import numpy as np
## TODO: Feel free to modify the numbers here, to try out another filter!
filter_vals = np.array([[-1, -1, 1, 1], [-1, -1, 1, 1], [-1, -1, 1, 1], [-1, -1, 1, 1]])
print('Filter shape: ', filter_vals.shape)
# Defining four different filters,
# all of which are linear combinations of the `filter_vals` defined above
# define four filters
filter_1 = filter_vals
filter_2 = -filter_1
filter_3 = filter_1.T
filter_4 = -filter_3
filters = np.array([filter_1, filter_2, filter_3, filter_4])
# For an example, print out the values of filter 1
print('Filter 1: \n', filter_1)
###Output
_____no_output_____
###Markdown
Define convolutional and pooling layersYou've seen how to define a convolutional layer, next is a:* Pooling layerIn the next cell, we initialize a convolutional layer so that it contains all the created filters. Then add a maxpooling layer, [documented here](http://pytorch.org/docs/stable/_modules/torch/nn/modules/pooling.html), with a kernel size of (2x2) so you can see that the image resolution has been reduced after this step!A maxpooling layer reduces the x-y size of an input and only keeps the most *active* pixel values. Below is an example of a 2x2 pooling kernel, with a stride of 2, applied to a small patch of grayscale pixel values; reducing the size of the patch by a factor of 4. Only the maximum pixel values in 2x2 remain in the new, pooled output.
###Code
import torch
import torch.nn as nn
import torch.nn.functional as F
# define a neural network with a convolutional layer with four filters
# AND a pooling layer of size (2, 2)
class Net(nn.Module):
def __init__(self, weight):
super(Net, self).__init__()
# initializes the weights of the convolutional layer to be the weights of the 4 defined filters
k_height, k_width = weight.shape[2:]
# defines the convolutional layer, assumes there are 4 grayscale filters
# torch.nn.Conv2d(in_channels, out_channels, kernel_size, stride=1, padding=0, dilation=1, groups=1, bias=True)
self.conv = nn.Conv2d(1, 4, kernel_size=(k_height, k_width), bias=False)
self.conv.weight = torch.nn.Parameter(weight)
# define a pooling layer
self.pool = nn.MaxPool2d(2, 2)
def forward(self, x):
# calculates the output of a convolutional layer
# pre- and post-activation
conv_x = self.conv(x)
activated_x = F.relu(conv_x)
# applies pooling layer
pooled_x = self.pool(activated_x)
# returns all layers
return conv_x, activated_x, pooled_x
# instantiate the model and set the weights
weight = torch.from_numpy(filters).unsqueeze(1).type(torch.FloatTensor)
model = Net(weight)
# print out the layer in the network
print(model)
###Output
_____no_output_____
###Markdown
Visualize the output of each filterFirst, we'll define a helper function, `viz_layer` that takes in a specific layer and number of filters (optional argument), and displays the output of that layer once an image has been passed through.
###Code
# helper function for visualizing the output of a given layer
# default number of filters is 4
def viz_layer(layer, n_filters= 4):
fig = plt.figure(figsize=(20, 20))
for i in range(n_filters):
ax = fig.add_subplot(1, n_filters, i+1)
# grab layer outputs
ax.imshow(np.squeeze(layer[0,i].data.numpy()), cmap='gray')
ax.set_title('Output %s' % str(i+1))
###Output
_____no_output_____
###Markdown
Let's look at the output of a convolutional layer after a ReLu activation function is applied. ReLU activationA ReLU function turns all negative pixel values in 0's (black). See the equation pictured below for input pixel values, `x`.
###Code
# plot original image
plt.imshow(gray_img, cmap='gray')
# visualize all filters
fig = plt.figure(figsize=(12, 6))
fig.subplots_adjust(left=0, right=1.5, bottom=0.8, top=1, hspace=0.05, wspace=0.05)
for i in range(4):
ax = fig.add_subplot(1, 4, i+1, xticks=[], yticks=[])
ax.imshow(filters[i], cmap='gray')
ax.set_title('Filter %s' % str(i+1))
# convert the image into an input Tensor
gray_img_tensor = torch.from_numpy(gray_img).unsqueeze(0).unsqueeze(1)
# get all the layers
conv_layer, activated_layer, pooled_layer = model(gray_img_tensor)
# visualize the output of the activated conv layer
viz_layer(activated_layer)
###Output
_____no_output_____
###Markdown
Visualize the output of the pooling layerThen, take a look at the output of a pooling layer. The pooling layer takes as input the feature maps pictured above and reduces the dimensionality of those maps, by some pooling factor, by constructing a new, smaller image of only the maximum (brightest) values in a given kernel area.Take a look at the values on the x, y axes to see how the image has changed size.
###Code
# visualize the output of the pooling layer
viz_layer(pooled_layer)
###Output
_____no_output_____
###Markdown
Maxpooling LayerIn this notebook, we add and visualize the output of a maxpooling layer in a CNN. A convolutional layer + activation function, followed by a pooling layer, and a linear layer (to create a desired output size) make up the basic layers of a CNN. Import the image
###Code
import cv2
import matplotlib.pyplot as plt
%matplotlib inline
# TODO: Feel free to try out your own images here by changing img_path
# to a file path to another image on your computer!
img_path = 'data/udacity_sdc.png'
# load color image
bgr_img = cv2.imread(img_path)
# convert to grayscale
gray_img = cv2.cvtColor(bgr_img, cv2.COLOR_BGR2GRAY)
# normalize, rescale entries to lie in [0,1]
gray_img = gray_img.astype("float32")/255
# plot image
plt.imshow(gray_img, cmap='gray')
plt.show()
###Output
_____no_output_____
###Markdown
Define and visualize the filters
###Code
import numpy as np
## TODO: Feel free to modify the numbers here, to try out another filter!
filter_vals = np.array([[-1, -1, 1, 1], [-1, -1, 1, 1], [-1, -1, 1, 1], [-1, -1, 1, 1]])
print('Filter shape: ', filter_vals.shape)
# Defining four different filters,
# all of which are linear combinations of the `filter_vals` defined above
# define four filters
filter_1 = filter_vals
filter_2 = -filter_1
filter_3 = filter_1.T
filter_4 = -filter_3
filters = np.array([filter_1, filter_2, filter_3, filter_4])
# For an example, print out the values of filter 1
print('Filter 1: \n', filter_1)
###Output
Filter 1:
[[-1 -1 1 1]
[-1 -1 1 1]
[-1 -1 1 1]
[-1 -1 1 1]]
###Markdown
Define convolutional and pooling layersYou've seen how to define a convolutional layer, next is a:* Pooling layerIn the next cell, we initialize a convolutional layer so that it contains all the created filters. Then add a maxpooling layer, [documented here](http://pytorch.org/docs/stable/_modules/torch/nn/modules/pooling.html), with a kernel size of (2x2) so you can see that the image resolution has been reduced after this step!A maxpooling layer reduces the x-y size of an input and only keeps the most *active* pixel values. Below is an example of a 2x2 pooling kernel, with a stride of 2, applied to a small patch of grayscale pixel values; reducing the x-y size of the patch by a factor of 2. Only the maximum pixel values in 2x2 remain in the new, pooled output.
###Code
import torch
import torch.nn as nn
import torch.nn.functional as F
# define a neural network with a convolutional layer with four filters
# AND a pooling layer of size (2, 2)
class Net(nn.Module):
def __init__(self, weight):
super(Net, self).__init__()
# initializes the weights of the convolutional layer to be the weights of the 4 defined filters
k_height, k_width = weight.shape[2:]
# assumes there are 4 grayscale filters
self.conv = nn.Conv2d(1, 4, kernel_size=(k_height, k_width), bias=False)
self.conv.weight = torch.nn.Parameter(weight)
# define a pooling layer
self.pool = nn.MaxPool2d(2, 2)
def forward(self, x):
# calculates the output of a convolutional layer
# pre- and post-activation
conv_x = self.conv(x)
activated_x = F.relu(conv_x)
# applies pooling layer
pooled_x = self.pool(activated_x)
# returns all layers
return conv_x, activated_x, pooled_x
# instantiate the model and set the weights
weight = torch.from_numpy(filters).unsqueeze(1).type(torch.FloatTensor)
model = Net(weight)
# print out the layer in the network
print(model)
###Output
Net(
(conv): Conv2d(1, 4, kernel_size=(4, 4), stride=(1, 1), bias=False)
(pool): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
)
###Markdown
Visualize the output of each filterFirst, we'll define a helper function, `viz_layer` that takes in a specific layer and number of filters (optional argument), and displays the output of that layer once an image has been passed through.
###Code
# helper function for visualizing the output of a given layer
# default number of filters is 4
def viz_layer(layer, n_filters= 4):
fig = plt.figure(figsize=(20, 20))
for i in range(n_filters):
ax = fig.add_subplot(1, n_filters, i+1)
# grab layer outputs
ax.imshow(np.squeeze(layer[0,i].data.numpy()), cmap='gray')
ax.set_title('Output %s' % str(i+1))
###Output
_____no_output_____
###Markdown
Let's look at the output of a convolutional layer after a ReLu activation function is applied. ReLu activationA ReLu function turns all negative pixel values in 0's (black). See the equation pictured below for input pixel values, `x`.
###Code
# plot original image
plt.imshow(gray_img, cmap='gray')
# visualize all filters
fig = plt.figure(figsize=(12, 6))
fig.subplots_adjust(left=0, right=1.5, bottom=0.8, top=1, hspace=0.05, wspace=0.05)
for i in range(4):
ax = fig.add_subplot(1, 4, i+1, xticks=[], yticks=[])
ax.imshow(filters[i], cmap='gray')
ax.set_title('Filter %s' % str(i+1))
# convert the image into an input Tensor
gray_img_tensor = torch.from_numpy(gray_img).unsqueeze(0).unsqueeze(1)
# get all the layers
conv_layer, activated_layer, pooled_layer = model(gray_img_tensor)
# visualize the output of the activated conv layer
viz_layer(activated_layer)
###Output
_____no_output_____
###Markdown
Visualize the output of the pooling layerThen, take a look at the output of a pooling layer. The pooling layer takes as input the feature maps pictured above and reduces the dimensionality of those maps, by some pooling factor, by constructing a new, smaller image of only the maximum (brightest) values in a given kernel area.Take a look at the values on the x, y axes to see how the image has changed size.
###Code
# visualize the output of the pooling layer
viz_layer(pooled_layer)
###Output
_____no_output_____
###Markdown
Maxpooling LayerIn this notebook, we add and visualize the output of a maxpooling layer in a CNN. A convolutional layer + activation function, followed by a pooling layer, and a linear layer (to create a desired output size) make up the basic layers of a CNN. Import the image
###Code
import cv2
import matplotlib.pyplot as plt
%matplotlib inline
# TODO: Feel free to try out your own images here by changing img_path
# to a file path to another image on your computer!
img_path = 'data/udacity_sdc.png'
# load color image
bgr_img = cv2.imread(img_path)
# convert to grayscale
gray_img = cv2.cvtColor(bgr_img, cv2.COLOR_BGR2GRAY)
# normalize, rescale entries to lie in [0,1]
gray_img = gray_img.astype("float32")/255
print(gray_img)
# plot image
plt.imshow(gray_img, cmap='gray')
plt.show()
###Output
[[0.13333334 0.13333334 0.11764706 ... 0.24313726 0.24313726 0.28627452]
[0.16078432 0.14117648 0.13725491 ... 0.23921569 0.24705882 0.28627452]
[0.1882353 0.17254902 0.16862746 ... 0.23529412 0.24705882 0.27058825]
...
[0.7176471 0.72156864 0.69411767 ... 0.7921569 0.77254903 0.7411765 ]
[0.7372549 0.74509805 0.6784314 ... 0.77254903 0.76862746 0.7490196 ]
[0.72156864 0.7411765 0.6745098 ... 0.7137255 0.7137255 0.6901961 ]]
###Markdown
Define and visualize the filters
###Code
import numpy as np
## TODO: Feel free to modify the numbers here, to try out another filter!
filter_vals = np.array([[-1, -1, 1, 1], [-1, -1, 1, 1], [-1, -1, 1, 1], [-1, -1, 1, 1]])
print('Filter shape: ', filter_vals.shape)
# Defining four different filters,
# all of which are linear combinations of the `filter_vals` defined above
# define four filters
filter_1 = filter_vals
filter_2 = -filter_1
filter_3 = filter_1.T
filter_4 = -filter_3
filters = np.array([filter_1, filter_2, filter_3, filter_4])
# For an example, print out the values of filter 1
print('Filter 1: \n', filter_1)
###Output
Filter 1:
[[-1 -1 1 1]
[-1 -1 1 1]
[-1 -1 1 1]
[-1 -1 1 1]]
###Markdown
Define convolutional and pooling layersYou've seen how to define a convolutional layer, next is a:* Pooling layerIn the next cell, we initialize a convolutional layer so that it contains all the created filters. Then add a maxpooling layer, [documented here](http://pytorch.org/docs/stable/_modules/torch/nn/modules/pooling.html), with a kernel size of (2x2) so you can see that the image resolution has been reduced after this step!A maxpooling layer reduces the x-y size of an input and only keeps the most *active* pixel values. Below is an example of a 2x2 pooling kernel, with a stride of 2, applied to a small patch of grayscale pixel values; reducing the x-y size of the patch by a factor of 2. Only the maximum pixel values in 2x2 remain in the new, pooled output.
###Code
import torch
import torch.nn as nn
import torch.nn.functional as F
# define a neural network with a convolutional layer with four filters
# AND a pooling layer of size (2, 2)
class Net(nn.Module):
def __init__(self, weight):
super(Net, self).__init__()
# initializes the weights of the convolutional layer to be the weights of the 4 defined filters
k_height, k_width = weight.shape[2:]
# assumes there are 4 grayscale filters
self.conv = nn.Conv2d(1, 4, kernel_size=(k_height, k_width), bias=False)
self.conv.weight = torch.nn.Parameter(weight)
# define a pooling layer
self.pool = nn.MaxPool2d(2, 2)
def forward(self, x):
# calculates the output of a convolutional layer
# pre- and post-activation
conv_x = self.conv(x)
activated_x = F.relu(conv_x)
# applies pooling layer
pooled_x = self.pool(activated_x)
# returns all layers
return conv_x, activated_x, pooled_x
# instantiate the model and set the weights
weight = torch.from_numpy(filters).unsqueeze(1).type(torch.FloatTensor)
model = Net(weight)
# print out the layer in the network
print(model)
###Output
Net(
(conv): Conv2d(1, 4, kernel_size=(4, 4), stride=(1, 1), bias=False)
(pool): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
)
###Markdown
Visualize the output of each filterFirst, we'll define a helper function, `viz_layer` that takes in a specific layer and number of filters (optional argument), and displays the output of that layer once an image has been passed through.
###Code
# helper function for visualizing the output of a given layer
# default number of filters is 4
def viz_layer(layer, n_filters= 4):
fig = plt.figure(figsize=(20, 20))
for i in range(n_filters):
ax = fig.add_subplot(1, n_filters, i+1)
# grab layer outputs
ax.imshow(np.squeeze(layer[0,i].data.numpy()), cmap='gray')
ax.set_title('Output %s' % str(i+1))
###Output
_____no_output_____
###Markdown
Let's look at the output of a convolutional layer after a ReLu activation function is applied. ReLu activationA ReLu function turns all negative pixel values in 0's (black). See the equation pictured below for input pixel values, `x`.
###Code
# plot original image
plt.imshow(gray_img, cmap='gray')
# visualize all filters
fig = plt.figure(figsize=(12, 6))
fig.subplots_adjust(left=0, right=1.5, bottom=0.8, top=1, hspace=0.05, wspace=0.05)
for i in range(4):
ax = fig.add_subplot(1, 4, i+1, xticks=[], yticks=[])
ax.imshow(filters[i], cmap='gray')
ax.set_title('Filter %s' % str(i+1))
# convert the image into an input Tensor
gray_img_tensor = torch.from_numpy(gray_img).unsqueeze(0).unsqueeze(1)
# get all the layers
conv_layer, activated_layer, pooled_layer = model(gray_img_tensor)
# visualize the output of the activated conv layer
viz_layer(activated_layer)
###Output
_____no_output_____
###Markdown
Visualize the output of the pooling layerThen, take a look at the output of a pooling layer. The pooling layer takes as input the feature maps pictured above and reduces the dimensionality of those maps, by some pooling factor, by constructing a new, smaller image of only the maximum (brightest) values in a given kernel area.Take a look at the values on the x, y axes to see how the image has changed size.
###Code
# visualize the output of the pooling layer
viz_layer(pooled_layer)
###Output
_____no_output_____
###Markdown
Maxpooling LayerIn this notebook, we add and visualize the output of a maxpooling layer in a CNN. A convolutional layer + activation function, followed by a pooling layer, and a linear layer (to create a desired output size) make up the basic layers of a CNN. Import the image
###Code
import cv2
import matplotlib.pyplot as plt
%matplotlib inline
# TODO: Feel free to try out your own images here by changing img_path
# to a file path to another image on your computer!
img_path = 'data/udacity_sdc.png'
# load color image
bgr_img = cv2.imread(img_path)
# convert to grayscale
gray_img = cv2.cvtColor(bgr_img, cv2.COLOR_BGR2GRAY)
# normalize, rescale entries to lie in [0,1]
gray_img = gray_img.astype("float32")/255
# plot image
plt.imshow(gray_img, cmap='gray')
plt.show()
###Output
_____no_output_____
###Markdown
Define and visualize the filters
###Code
import numpy as np
## TODO: Feel free to modify the numbers here, to try out another filter!
filter_vals = np.array([[-1, -1, 1, 1], [-1, -1, 1, 1], [-1, -1, 1, 1], [-1, -1, 1, 1]])
print('Filter shape: ', filter_vals.shape)
# Defining four different filters,
# all of which are linear combinations of the `filter_vals` defined above
# define four filters
filter_1 = filter_vals
filter_2 = -filter_1
filter_3 = filter_1.T
filter_4 = -filter_3
filters = np.array([filter_1, filter_2, filter_3, filter_4])
# For an example, print out the values of filter 1
print('Filter 1: \n', filter_1)
###Output
Filter 1:
[[-1 -1 1 1]
[-1 -1 1 1]
[-1 -1 1 1]
[-1 -1 1 1]]
###Markdown
Define convolutional and pooling layersYou've seen how to define a convolutional layer, next is a:* Pooling layerIn the next cell, we initialize a convolutional layer so that it contains all the created filters. Then add a maxpooling layer, [documented here](http://pytorch.org/docs/stable/_modules/torch/nn/modules/pooling.html), with a kernel size of (2x2) so you can see that the image resolution has been reduced after this step!A maxpooling layer reduces the x-y size of an input and only keeps the most *active* pixel values. Below is an example of a 2x2 pooling kernel, with a stride of 2, applied to a small patch of grayscale pixel values; reducing the x-y size of the patch by a factor of 2. Only the maximum pixel values in 2x2 remain in the new, pooled output.
###Code
import torch
import torch.nn as nn
import torch.nn.functional as F
# define a neural network with a convolutional layer with four filters
# AND a pooling layer of size (2, 2)
class Net(nn.Module):
def __init__(self, weight):
super(Net, self).__init__()
# initializes the weights of the convolutional layer to be the weights of the 4 defined filters
k_height, k_width = weight.shape[2:]
# assumes there are 4 grayscale filters
self.conv = nn.Conv2d(1, 4, kernel_size=(k_height, k_width), bias=False)
self.conv.weight = torch.nn.Parameter(weight)
# define a pooling layer
self.pool = nn.MaxPool2d(2, 2)
def forward(self, x):
# calculates the output of a convolutional layer
# pre- and post-activation
conv_x = self.conv(x)
activated_x = F.relu(conv_x)
# applies pooling layer
pooled_x = self.pool(activated_x)
# returns all layers
return conv_x, activated_x, pooled_x
# instantiate the model and set the weights
weight = torch.from_numpy(filters).unsqueeze(1).type(torch.FloatTensor)
model = Net(weight)
# print out the layer in the network
print(model)
###Output
Net(
(conv): Conv2d(1, 4, kernel_size=(4, 4), stride=(1, 1), bias=False)
(pool): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
)
###Markdown
Visualize the output of each filterFirst, we'll define a helper function, `viz_layer` that takes in a specific layer and number of filters (optional argument), and displays the output of that layer once an image has been passed through.
###Code
# helper function for visualizing the output of a given layer
# default number of filters is 4
def viz_layer(layer, n_filters= 4):
fig = plt.figure(figsize=(20, 20))
for i in range(n_filters):
ax = fig.add_subplot(1, n_filters, i+1)
# grab layer outputs
ax.imshow(np.squeeze(layer[0,i].data.numpy()), cmap='gray')
ax.set_title('Output %s' % str(i+1))
###Output
_____no_output_____
###Markdown
Let's look at the output of a convolutional layer after a ReLu activation function is applied. ReLu activationA ReLu function turns all negative pixel values in 0's (black). See the equation pictured below for input pixel values, `x`.
###Code
# plot original image
plt.imshow(gray_img, cmap='gray')
# visualize all filters
fig = plt.figure(figsize=(12, 6))
fig.subplots_adjust(left=0, right=1.5, bottom=0.8, top=1, hspace=0.05, wspace=0.05)
for i in range(4):
ax = fig.add_subplot(1, 4, i+1, xticks=[], yticks=[])
ax.imshow(filters[i], cmap='gray')
ax.set_title('Filter %s' % str(i+1))
# convert the image into an input Tensor
gray_img_tensor = torch.from_numpy(gray_img).unsqueeze(0).unsqueeze(1)
# get all the layers
conv_layer, activated_layer, pooled_layer = model(gray_img_tensor)
# visualize the output of the activated conv layer
viz_layer(activated_layer)
###Output
_____no_output_____
###Markdown
Visualize the output of the pooling layerThen, take a look at the output of a pooling layer. The pooling layer takes as input the feature maps pictured above and reduces the dimensionality of those maps, by some pooling factor, by constructing a new, smaller image of only the maximum (brightest) values in a given kernel area.Take a look at the values on the x, y axes to see how the image has changed size.
###Code
# visualize the output of the pooling layer
viz_layer(pooled_layer)
###Output
_____no_output_____
###Markdown
Maxpooling LayerIn this notebook, we add and visualize the output of a maxpooling layer in a CNN. A convolutional layer + activation function, followed by a pooling layer, and a linear layer (to create a desired output size) make up the basic layers of a CNN. Import the image
###Code
import cv2
import matplotlib.pyplot as plt
%matplotlib inline
# TODO: Feel free to try out your own images here by changing img_path
# to a file path to another image on your computer!
img_path = 'data/udacity_sdc.png'
# load color image
bgr_img = cv2.imread(img_path)
# convert to grayscale
gray_img = cv2.cvtColor(bgr_img, cv2.COLOR_BGR2GRAY)
# normalize, rescale entries to lie in [0,1]
gray_img = gray_img.astype("float32")/255
# plot image
plt.imshow(gray_img, cmap='gray')
plt.show()
###Output
_____no_output_____
###Markdown
Define and visualize the filters
###Code
import numpy as np
## TODO: Feel free to modify the numbers here, to try out another filter!
filter_vals = np.array([[-1, -1, 1, 1], [-1, -1, 1, 1], [-1, -1, 1, 1], [-1, -1, 1, 1]])
print('Filter shape: ', filter_vals.shape)
# Defining four different filters,
# all of which are linear combinations of the `filter_vals` defined above
# define four filters
filter_1 = filter_vals
filter_2 = -filter_1
filter_3 = filter_1.T
filter_4 = -filter_3
filters = np.array([filter_1, filter_2, filter_3, filter_4])
# For an example, print out the values of filter 1
print('Filter 1: \n', filter_1)
###Output
Filter 1:
[[-1 -1 1 1]
[-1 -1 1 1]
[-1 -1 1 1]
[-1 -1 1 1]]
###Markdown
Define convolutional and pooling layersYou've seen how to define a convolutional layer, next is a:* Pooling layerIn the next cell, we initialize a convolutional layer so that it contains all the created filters. Then add a maxpooling layer, [documented here](http://pytorch.org/docs/stable/_modules/torch/nn/modules/pooling.html), with a kernel size of (2x2) so you can see that the image resolution has been reduced after this step!A maxpooling layer reduces the x-y size of an input and only keeps the most *active* pixel values. Below is an example of a 2x2 pooling kernel, with a stride of 2, applied to a small patch of grayscale pixel values; reducing the x-y size of the patch by a factor of 2. Only the maximum pixel values in 2x2 remain in the new, pooled output.
###Code
import torch
import torch.nn as nn
import torch.nn.functional as F
# define a neural network with a convolutional layer with four filters
# AND a pooling layer of size (2, 2)
class Net(nn.Module):
def __init__(self, weight):
super(Net, self).__init__()
# initializes the weights of the convolutional layer to be the weights of the 4 defined filters
k_height, k_width = weight.shape[2:]
# assumes there are 4 grayscale filters
self.conv = nn.Conv2d(1, 4, kernel_size=(k_height, k_width), bias=False)
self.conv.weight = torch.nn.Parameter(weight)
# define a pooling layer
self.pool = nn.MaxPool2d(2, 2)
def forward(self, x):
# calculates the output of a convolutional layer
# pre- and post-activation
conv_x = self.conv(x)
activated_x = F.relu(conv_x)
# applies pooling layer
pooled_x = self.pool(activated_x)
# returns all layers
return conv_x, activated_x, pooled_x
# instantiate the model and set the weights
weight = torch.from_numpy(filters).unsqueeze(1).type(torch.FloatTensor)
model = Net(weight)
# print out the layer in the network
print(model)
###Output
Net(
(conv): Conv2d(1, 4, kernel_size=(4, 4), stride=(1, 1), bias=False)
(pool): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
)
###Markdown
Visualize the output of each filterFirst, we'll define a helper function, `viz_layer` that takes in a specific layer and number of filters (optional argument), and displays the output of that layer once an image has been passed through.
###Code
# helper function for visualizing the output of a given layer
# default number of filters is 4
def viz_layer(layer, n_filters= 4):
fig = plt.figure(figsize=(20, 20))
for i in range(n_filters):
ax = fig.add_subplot(1, n_filters, i+1)
# grab layer outputs
ax.imshow(np.squeeze(layer[0,i].data.numpy()), cmap='gray')
ax.set_title('Output %s' % str(i+1))
###Output
_____no_output_____
###Markdown
Let's look at the output of a convolutional layer after a ReLu activation function is applied. ReLu activationA ReLu function turns all negative pixel values in 0's (black). See the equation pictured below for input pixel values, `x`.
###Code
# plot original image
plt.imshow(gray_img, cmap='gray')
# visualize all filters
fig = plt.figure(figsize=(12, 6))
fig.subplots_adjust(left=0, right=1.5, bottom=0.8, top=1, hspace=0.05, wspace=0.05)
for i in range(4):
ax = fig.add_subplot(1, 4, i+1, xticks=[], yticks=[])
ax.imshow(filters[i], cmap='gray')
ax.set_title('Filter %s' % str(i+1))
# convert the image into an input Tensor
gray_img_tensor = torch.from_numpy(gray_img).unsqueeze(0).unsqueeze(1)
# get all the layers
conv_layer, activated_layer, pooled_layer = model(gray_img_tensor)
# visualize the output of the activated conv layer
viz_layer(activated_layer)
###Output
_____no_output_____
###Markdown
Visualize the output of the pooling layerThen, take a look at the output of a pooling layer. The pooling layer takes as input the feature maps pictured above and reduces the dimensionality of those maps, by some pooling factor, by constructing a new, smaller image of only the maximum (brightest) values in a given kernel area.Take a look at the values on the x, y axes to see how the image has changed size.
###Code
# visualize the output of the pooling layer
viz_layer(pooled_layer)
###Output
_____no_output_____
###Markdown
Maxpooling LayerIn this notebook, we add and visualize the output of a maxpooling layer in a CNN. A convolutional layer + activation function, followed by a pooling layer, and a linear layer (to create a desired output size) make up the basic layers of a CNN. Import the image
###Code
import cv2
import matplotlib.pyplot as plt
%matplotlib inline
# TODO: Feel free to try out your own images here by changing img_path
# to a file path to another image on your computer!
img_path = 'data/udacity_sdc.png'
# load color image
bgr_img = cv2.imread(img_path)
# convert to grayscale
gray_img = cv2.cvtColor(bgr_img, cv2.COLOR_BGR2GRAY)
# normalize, rescale entries to lie in [0,1]
gray_img = gray_img.astype("float32")/255
# plot image
plt.imshow(gray_img, cmap='gray')
plt.show()
###Output
_____no_output_____
###Markdown
Define and visualize the filters
###Code
import numpy as np
## TODO: Feel free to modify the numbers here, to try out another filter!
filter_vals = np.array([[-1, -1, 1, 1], [-1, -1, 1, 1], [-1, -1, 1, 1], [-1, -1, 1, 1]])
print('Filter shape: ', filter_vals.shape)
# Defining four different filters,
# all of which are linear combinations of the `filter_vals` defined above
# define four filters
filter_1 = filter_vals
filter_2 = -filter_1
filter_3 = filter_1.T
filter_4 = -filter_3
filters = np.array([filter_1, filter_2, filter_3, filter_4])
# For an example, print out the values of filter 1
print('Filter 1: \n', filter_1)
###Output
Filter 1:
[[-1 -1 1 1]
[-1 -1 1 1]
[-1 -1 1 1]
[-1 -1 1 1]]
###Markdown
Define convolutional and pooling layersYou've seen how to define a convolutional layer, next is a:* Pooling layerIn the next cell, we initialize a convolutional layer so that it contains all the created filters. Then add a maxpooling layer, [documented here](http://pytorch.org/docs/stable/_modules/torch/nn/modules/pooling.html), with a kernel size of (2x2) so you can see that the image resolution has been reduced after this step!A maxpooling layer reduces the x-y size of an input and only keeps the most *active* pixel values. Below is an example of a 2x2 pooling kernel, with a stride of 2, applied to a small patch of grayscale pixel values; reducing the x-y size of the patch by a factor of 2. Only the maximum pixel values in 2x2 remain in the new, pooled output.
###Code
import torch
import torch.nn as nn
import torch.nn.functional as F
# define a neural network with a convolutional layer with four filters
# AND a pooling layer of size (2, 2)
class Net(nn.Module):
def __init__(self, weight):
super(Net, self).__init__()
# initializes the weights of the convolutional layer to be the weights of the 4 defined filters
k_height, k_width = weight.shape[2:]
# assumes there are 4 grayscale filters
self.conv = nn.Conv2d(1, 4, kernel_size=(k_height, k_width), bias=False)
self.conv.weight = torch.nn.Parameter(weight)
# define a pooling layer
self.pool = nn.MaxPool2d(2, 2)
def forward(self, x):
# calculates the output of a convolutional layer
# pre- and post-activation
conv_x = self.conv(x)
activated_x = F.relu(conv_x)
# applies pooling layer
pooled_x = self.pool(activated_x)
# returns all layers
return conv_x, activated_x, pooled_x
# instantiate the model and set the weights
weight = torch.from_numpy(filters).unsqueeze(1).type(torch.FloatTensor)
model = Net(weight)
# print out the layer in the network
print(model)
###Output
Net(
(conv): Conv2d(1, 4, kernel_size=(4, 4), stride=(1, 1), bias=False)
(pool): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
)
###Markdown
Visualize the output of each filterFirst, we'll define a helper function, `viz_layer` that takes in a specific layer and number of filters (optional argument), and displays the output of that layer once an image has been passed through.
###Code
# helper function for visualizing the output of a given layer
# default number of filters is 4
def viz_layer(layer, n_filters=4):
fig = plt.figure(figsize=(20, 20))
for i in range(n_filters):
ax = fig.add_subplot(1, n_filters, i+1)
# grab layer outputs
ax.imshow(np.squeeze(layer[0,i].data.numpy()), cmap='gray')
ax.set_title('Output %s' % str(i+1))
###Output
_____no_output_____
###Markdown
Let's look at the output of a convolutional layer after a ReLU activation function is applied.

ReLU activation

A ReLU function turns all negative pixel values into 0's (black): `relu(x) = max(0, x)` for input pixel values `x`, as in the equation pictured below.
###Code
# plot original image
plt.imshow(gray_img, cmap='gray')
# visualize all filters
fig = plt.figure(figsize=(12, 6))
fig.subplots_adjust(left=0, right=1.5, bottom=0.8, top=1, hspace=0.05, wspace=0.05)
for i in range(4):
ax = fig.add_subplot(1, 4, i+1, xticks=[], yticks=[])
ax.imshow(filters[i], cmap='gray')
ax.set_title('Filter %s' % str(i+1))
# convert the image into an input Tensor
gray_img_tensor = torch.from_numpy(gray_img).unsqueeze(0).unsqueeze(1)
# get all the layers
conv_layer, activated_layer, pooled_layer = model(gray_img_tensor)
# visualize the output of the activated conv layer
viz_layer(activated_layer)
###Output
_____no_output_____
###Markdown
Visualize the output of the pooling layer

Then, take a look at the output of a pooling layer. The pooling layer takes in the feature maps pictured above and reduces their dimensionality, by some pooling factor, by constructing a new, smaller image of only the maximum (brightest) values in a given kernel area.

Take a look at the values on the x and y axes to see how the image has changed size.
###Code
# visualize the output of the pooling layer
viz_layer(pooled_layer)
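# sanity check: the pooled feature maps should be half the height and width
# of the activated maps that went into the pool
print(activated_layer.shape)
print(pooled_layer.shape)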
###Output
_____no_output_____
###Markdown
Maxpooling LayerIn this notebook, we add and visualize the output of a maxpooling layer in a CNN. A convolutional layer + activation function, followed by a pooling layer, and a linear layer (to create a desired output size) make up the basic layers of a CNN. Import the image
###Code
import cv2
import matplotlib.pyplot as plt
%matplotlib inline
# TODO: Feel free to try out your own images here by changing img_path
# to a file path to another image on your computer!
img_path = 'data/udacity_sdc.png'
# load color image
bgr_img = cv2.imread(img_path)
# convert to grayscale
gray_img = cv2.cvtColor(bgr_img, cv2.COLOR_BGR2GRAY)
# normalize, rescale entries to lie in [0,1]
gray_img = gray_img.astype("float32")/255
# plot image
plt.imshow(gray_img, cmap='gray')
plt.show()
###Output
_____no_output_____
###Markdown
Define and visualize the filters
###Code
import numpy as np
## TODO: Feel free to modify the numbers here, to try out another filter!
filter_vals = np.array([[-1, -1, 1, 1], [-1, -1, 1, 1], [-1, -1, 1, 1], [-1, -1, 1, 1]])
print('Filter shape: ', filter_vals.shape)
# Defining four different filters,
# all of which are linear combinations of the `filter_vals` defined above
# define four filters
filter_1 = filter_vals
filter_2 = -filter_1
filter_3 = filter_1.T
filter_4 = -filter_3
filters = np.array([filter_1, filter_2, filter_3, filter_4])
# For an example, print out the values of filter 1
print('Filter 1: \n', filter_1)
###Output
_____no_output_____
###Markdown
Define convolutional and pooling layersYou've seen how to define a convolutional layer, next is a:* Pooling layerIn the next cell, we initialize a convolutional layer so that it contains all the created filters. Then add a maxpooling layer, [documented here](http://pytorch.org/docs/stable/_modules/torch/nn/modules/pooling.html), with a kernel size of (2x2) so you can see that the image resolution has been reduced after this step!A maxpooling layer reduces the x-y size of an input and only keeps the most *active* pixel values. Below is an example of a 2x2 pooling kernel, with a stride of 2, applied to a small patch of grayscale pixel values; reducing the x-y size of the patch by a factor of 2. Only the maximum pixel values in 2x2 remain in the new, pooled output.
###Code
import torch
import torch.nn as nn
import torch.nn.functional as F
# define a neural network with a convolutional layer with four filters
# AND a pooling layer of size (2, 2)
class Net(nn.Module):
def __init__(self, weight):
super(Net, self).__init__()
# initializes the weights of the convolutional layer to be the weights of the 4 defined filters
k_height, k_width = weight.shape[2:]
# assumes there are 4 grayscale filters
self.conv = nn.Conv2d(1, 4, kernel_size=(k_height, k_width), bias=False)
self.conv.weight = torch.nn.Parameter(weight)
# define a pooling layer
self.pool = nn.MaxPool2d(2, 2)
def forward(self, x):
# calculates the output of a convolutional layer
# pre- and post-activation
conv_x = self.conv(x)
activated_x = F.relu(conv_x)
# applies pooling layer
pooled_x = self.pool(activated_x)
# returns all layers
return conv_x, activated_x, pooled_x
# instantiate the model and set the weights
weight = torch.from_numpy(filters).unsqueeze(1).type(torch.FloatTensor)
model = Net(weight)
# print out the layer in the network
print(model)
###Output
_____no_output_____
###Markdown
Visualize the output of each filterFirst, we'll define a helper function, `viz_layer` that takes in a specific layer and number of filters (optional argument), and displays the output of that layer once an image has been passed through.
###Code
# helper function for visualizing the output of a given layer
# default number of filters is 4
def viz_layer(layer, n_filters= 4):
fig = plt.figure(figsize=(20, 20))
for i in range(n_filters):
ax = fig.add_subplot(1, n_filters, i+1)
# grab layer outputs
ax.imshow(np.squeeze(layer[0,i].data.numpy()), cmap='gray')
ax.set_title('Output %s' % str(i+1))
###Output
_____no_output_____
###Markdown
Let's look at the output of a convolutional layer after a ReLu activation function is applied. ReLu activationA ReLu function turns all negative pixel values in 0's (black). See the equation pictured below for input pixel values, `x`.
###Code
# plot original image
plt.imshow(gray_img, cmap='gray')
# visualize all filters
fig = plt.figure(figsize=(12, 6))
fig.subplots_adjust(left=0, right=1.5, bottom=0.8, top=1, hspace=0.05, wspace=0.05)
for i in range(4):
ax = fig.add_subplot(1, 4, i+1, xticks=[], yticks=[])
ax.imshow(filters[i], cmap='gray')
ax.set_title('Filter %s' % str(i+1))
# convert the image into an input Tensor
gray_img_tensor = torch.from_numpy(gray_img).unsqueeze(0).unsqueeze(1)
# get all the layers
conv_layer, activated_layer, pooled_layer = model(gray_img_tensor)
# visualize the output of the activated conv layer
viz_layer(activated_layer)
###Output
_____no_output_____
###Markdown
Visualize the output of the pooling layerThen, take a look at the output of a pooling layer. The pooling layer takes as input the feature maps pictured above and reduces the dimensionality of those maps, by some pooling factor, by constructing a new, smaller image of only the maximum (brightest) values in a given kernel area.Take a look at the values on the x, y axes to see how the image has changed size.
###Code
# visualize the output of the pooling layer
viz_layer(pooled_layer)
###Output
_____no_output_____
###Markdown
Maxpooling LayerIn this notebook, we add and visualize the output of a maxpooling layer in a CNN. A convolutional layer + activation function, followed by a pooling layer, and a linear layer (to create a desired output size) make up the basic layers of a CNN. Import the image
###Code
import cv2
import matplotlib.pyplot as plt
%matplotlib inline
# TODO: Feel free to try out your own images here by changing img_path
# to a file path to another image on your computer!
img_path = 'data/udacity_sdc.png'
# load color image
bgr_img = cv2.imread(img_path)
# convert to grayscale
gray_img = cv2.cvtColor(bgr_img, cv2.COLOR_BGR2GRAY)
# normalize, rescale entries to lie in [0,1]
gray_img = gray_img.astype("float32")/255
# plot image
plt.imshow(gray_img, cmap='gray')
plt.show()
###Output
_____no_output_____
###Markdown
Define and visualize the filters
###Code
import numpy as np
## TODO: Feel free to modify the numbers here, to try out another filter!
filter_vals = np.array([[-1, -1, 1, 1], [-1, -1, 1, 1], [-1, -1, 1, 1], [-1, -1, 1, 1]])
print('Filter shape: ', filter_vals.shape)
# Defining four different filters,
# all of which are linear combinations of the `filter_vals` defined above
# define four filters
filter_1 = filter_vals
filter_2 = -filter_1
filter_3 = filter_1.T
filter_4 = -filter_3
filters = np.array([filter_1, filter_2, filter_3, filter_4])
# For an example, print out the values of filter 1
print('Filter 1: \n', filter_1)
###Output
Filter 1:
[[-1 -1 1 1]
[-1 -1 1 1]
[-1 -1 1 1]
[-1 -1 1 1]]
###Markdown
Define convolutional and pooling layersYou've seen how to define a convolutional layer, next is a:* Pooling layerIn the next cell, we initialize a convolutional layer so that it contains all the created filters. Then add a maxpooling layer, [documented here](http://pytorch.org/docs/stable/_modules/torch/nn/modules/pooling.html), with a kernel size of (2x2) so you can see that the image resolution has been reduced after this step!A maxpooling layer reduces the x-y size of an input and only keeps the most *active* pixel values. Below is an example of a 2x2 pooling kernel, with a stride of 2, appied to a small patch of grayscale pixel values; reducing the x-y size of the patch by a factor of 2. Only the maximum pixel values in 2x2 remain in the new, pooled output.
###Code
import torch
import torch.nn as nn
import torch.nn.functional as F
# define a neural network with a convolutional layer with four filters
# AND a pooling layer of size (2, 2)
class Net(nn.Module):
def __init__(self, weight):
super(Net, self).__init__()
# initializes the weights of the convolutional layer to be the weights of the 4 defined filters
k_height, k_width = weight.shape[2:]
# assumes there are 4 grayscale filters
self.conv = nn.Conv2d(1, 4, kernel_size=(k_height, k_width), bias=False)
self.conv.weight = torch.nn.Parameter(weight)
# define a pooling layer
self.pool = nn.MaxPool2d(2, 2)
def forward(self, x):
# calculates the output of a convolutional layer
# pre- and post-activation
conv_x = self.conv(x)
activated_x = F.relu(conv_x)
# applies pooling layer
pooled_x = self.pool(activated_x)
# returns all layers
return conv_x, activated_x, pooled_x
# instantiate the model and set the weights
weight = torch.from_numpy(filters).unsqueeze(1).type(torch.FloatTensor)
model = Net(weight)
# print out the layer in the network
print(model)
###Output
Net(
(conv): Conv2d(1, 4, kernel_size=(4, 4), stride=(1, 1), bias=False)
(pool): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
)
###Markdown
Visualize the output of each filterFirst, we'll define a helper function, `viz_layer` that takes in a specific layer and number of filters (optional argument), and displays the output of that layer once an image has been passed through.
###Code
# helper function for visualizing the output of a given layer
# default number of filters is 4
def viz_layer(layer, n_filters= 4):
fig = plt.figure(figsize=(20, 20))
for i in range(n_filters):
ax = fig.add_subplot(1, n_filters, i+1)
# grab layer outputs
ax.imshow(np.squeeze(layer[0,i].data.numpy()), cmap='gray')
ax.set_title('Output %s' % str(i+1))
###Output
_____no_output_____
###Markdown
Let's look at the output of a convolutional layer after a ReLu activation function is applied. ReLu activationA ReLu function turns all negative pixel values in 0's (black). See the equation pictured below for input pixel values, `x`.
###Code
# plot original image
plt.imshow(gray_img, cmap='gray')
# visualize all filters
fig = plt.figure(figsize=(12, 6))
fig.subplots_adjust(left=0, right=1.5, bottom=0.8, top=1, hspace=0.05, wspace=0.05)
for i in range(4):
ax = fig.add_subplot(1, 4, i+1, xticks=[], yticks=[])
ax.imshow(filters[i], cmap='gray')
ax.set_title('Filter %s' % str(i+1))
# convert the image into an input Tensor
gray_img_tensor = torch.from_numpy(gray_img).unsqueeze(0).unsqueeze(1)
# get all the layers
conv_layer, activated_layer, pooled_layer = model(gray_img_tensor)
# visualize the output of the activated conv layer
viz_layer(activated_layer)
###Output
_____no_output_____
###Markdown
Visualize the output of the pooling layerThen, take a look at the output of a pooling layer. The pooling layer takes as input the feature maps pictured above and reduces the dimensionality of those maps, by some pooling factor, by constructing a new, smaller image of only the maximum (brightest) values in a given kernel area.Take a look at the values on the x, y axes to see how the image has changed size.
###Code
# visualize the output of the pooling layer
viz_layer(pooled_layer)
###Output
_____no_output_____
###Markdown
Maxpooling LayerIn this notebook, we add and visualize the output of a maxpooling layer in a CNN. A convolutional layer + activation function, followed by a pooling layer, and a linear layer (to create a desired output size) make up the basic layers of a CNN. Import the image
###Code
import cv2
import matplotlib.pyplot as plt
%matplotlib inline
# TODO: Feel free to try out your own images here by changing img_path
# to a file path to another image on your computer!
img_path = 'data/udacity_sdc.png'
# load color image
bgr_img = cv2.imread(img_path)
# convert to grayscale
gray_img = cv2.cvtColor(bgr_img, cv2.COLOR_BGR2GRAY)
# normalize, rescale entries to lie in [0,1]
gray_img = gray_img.astype("float32")/255
# plot image
plt.imshow(gray_img, cmap='gray')
plt.show()
###Output
_____no_output_____
###Markdown
Define and visualize the filters
###Code
import numpy as np
## TODO: Feel free to modify the numbers here, to try out another filter!
filter_vals = np.array([[-1, -1, 1, 1], [-1, -1, 1, 1], [-1, -1, 1, 1], [-1, -1, 1, 1]])
print('Filter shape: ', filter_vals.shape)
# Defining four different filters,
# all of which are linear combinations of the `filter_vals` defined above
# define four filters
filter_1 = filter_vals
filter_2 = -filter_1
filter_3 = filter_1.T
filter_4 = -filter_3
filters = np.array([filter_1, filter_2, filter_3, filter_4])
# For an example, print out the values of filter 1
print('Filter 1: \n', filter_1)
###Output
Filter 1:
[[-1 -1 1 1]
[-1 -1 1 1]
[-1 -1 1 1]
[-1 -1 1 1]]
###Markdown
Define convolutional and pooling layersYou've seen how to define a convolutional layer, next is a:* Pooling layerIn the next cell, we initialize a convolutional layer so that it contains all the created filters. Then add a maxpooling layer, [documented here](http://pytorch.org/docs/stable/_modules/torch/nn/modules/pooling.html), with a kernel size of (2x2) so you can see that the image resolution has been reduced after this step!A maxpooling layer reduces the x-y size of an input and only keeps the most *active* pixel values. Below is an example of a 2x2 pooling kernel, with a stride of 2, applied to a small patch of grayscale pixel values; reducing the x-y size of the patch by a factor of 2. Only the maximum pixel values in 2x2 remain in the new, pooled output.
###Code
import torch
import torch.nn as nn
import torch.nn.functional as F
# define a neural network with a convolutional layer with four filters
# AND a pooling layer of size (2, 2)
class Net(nn.Module):
def __init__(self, weight):
super(Net, self).__init__()
# initializes the weights of the convolutional layer to be the weights of the 4 defined filters
k_height, k_width = weight.shape[2:]
# assumes there are 4 grayscale filters
self.conv = nn.Conv2d(1, 4, kernel_size=(k_height, k_width), bias=False)
self.conv.weight = torch.nn.Parameter(weight)
# define a pooling layer
self.pool = nn.MaxPool2d(2, 2)
def forward(self, x):
# calculates the output of a convolutional layer
# pre- and post-activation
conv_x = self.conv(x)
activated_x = F.relu(conv_x)
# applies pooling layer
pooled_x = self.pool(activated_x)
# returns all layers
return conv_x, activated_x, pooled_x
# instantiate the model and set the weights
weight = torch.from_numpy(filters).unsqueeze(1).type(torch.FloatTensor)
model = Net(weight)
# print out the layer in the network
print(model)
###Output
Net(
(conv): Conv2d(1, 4, kernel_size=(4, 4), stride=(1, 1), bias=False)
(pool): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
)
###Markdown
Visualize the output of each filterFirst, we'll define a helper function, `viz_layer` that takes in a specific layer and number of filters (optional argument), and displays the output of that layer once an image has been passed through.
###Code
# helper function for visualizing the output of a given layer
# default number of filters is 4
def viz_layer(layer, n_filters= 4):
fig = plt.figure(figsize=(20, 20))
for i in range(n_filters):
ax = fig.add_subplot(1, n_filters, i+1)
# grab layer outputs
ax.imshow(np.squeeze(layer[0,i].data.numpy()), cmap='gray')
ax.set_title('Output %s' % str(i+1))
###Output
_____no_output_____
###Markdown
Let's look at the output of a convolutional layer after a ReLu activation function is applied. ReLu activationA ReLu function turns all negative pixel values in 0's (black). See the equation pictured below for input pixel values, `x`.
###Code
# plot original image
plt.imshow(gray_img, cmap='gray')
# visualize all filters
fig = plt.figure(figsize=(12, 6))
fig.subplots_adjust(left=0, right=1.5, bottom=0.8, top=1, hspace=0.05, wspace=0.05)
for i in range(4):
ax = fig.add_subplot(1, 4, i+1, xticks=[], yticks=[])
ax.imshow(filters[i], cmap='gray')
ax.set_title('Filter %s' % str(i+1))
# convert the image into an input Tensor
gray_img_tensor = torch.from_numpy(gray_img).unsqueeze(0).unsqueeze(1)
# get all the layers
conv_layer, activated_layer, pooled_layer = model(gray_img_tensor)
# visualize the output of the activated conv layer
viz_layer(activated_layer)
###Output
_____no_output_____
###Markdown
Visualize the output of the pooling layerThen, take a look at the output of a pooling layer. The pooling layer takes as input the feature maps pictured above and reduces the dimensionality of those maps, by some pooling factor, by constructing a new, smaller image of only the maximum (brightest) values in a given kernel area.Take a look at the values on the x, y axes to see how the image has changed size.
###Code
# visualize the output of the pooling layer
viz_layer(pooled_layer)
###Output
_____no_output_____
###Markdown
Maxpooling LayerIn this notebook, we add and visualize the output of a maxpooling layer in a CNN. A convolutional layer + activation function, followed by a pooling layer, and a linear layer (to create a desired output size) make up the basic layers of a CNN. Import the image
###Code
import cv2
import matplotlib.pyplot as plt
%matplotlib inline
# TODO: Feel free to try out your own images here by changing img_path
# to a file path to another image on your computer!
img_path = 'data/udacity_sdc.png'
# load color image
bgr_img = cv2.imread(img_path)
# convert to grayscale
gray_img = cv2.cvtColor(bgr_img, cv2.COLOR_BGR2GRAY)
# normalize, rescale entries to lie in [0,1]
gray_img = gray_img.astype("float32")/255
# plot image
plt.imshow(gray_img, cmap='gray')
plt.show()
###Output
_____no_output_____
###Markdown
Define and visualize the filters
###Code
import numpy as np
## TODO: Feel free to modify the numbers here, to try out another filter!
filter_vals = np.array([[-1, -1, 1, 1], [-1, -1, 1, 1], [-1, -1, 1, 1], [-1, -1, 1, 1]])
print('Filter shape: ', filter_vals.shape)
# Defining four different filters,
# all of which are linear combinations of the `filter_vals` defined above
# define four filters
filter_1 = filter_vals
filter_2 = -filter_1
filter_3 = filter_1.T
filter_4 = -filter_3
filters = np.array([filter_1, filter_2, filter_3, filter_4])
# For an example, print out the values of filter 1
print('Filter 1: \n', filter_1)
###Output
Filter 1:
[[-1 -1 1 1]
[-1 -1 1 1]
[-1 -1 1 1]
[-1 -1 1 1]]
###Markdown
Define convolutional and pooling layersYou've seen how to define a convolutional layer, next is a:* Pooling layerIn the next cell, we initialize a convolutional layer so that it contains all the created filters. Then add a maxpooling layer, [documented here](http://pytorch.org/docs/stable/_modules/torch/nn/modules/pooling.html), with a kernel size of (2x2) so you can see that the image resolution has been reduced after this step!A maxpooling layer reduces the x-y size of an input and only keeps the most *active* pixel values. Below is an example of a 2x2 pooling kernel, with a stride of 2, applied to a small patch of grayscale pixel values; reducing the size of the patch by a factor of 4. Only the maximum pixel values in 2x2 remain in the new, pooled output.
###Code
import torch
import torch.nn as nn
import torch.nn.functional as F
# define a neural network with a convolutional layer with four filters
# AND a pooling layer of size (2, 2)
class Net(nn.Module):
def __init__(self, weight):
super(Net, self).__init__()
# initializes the weights of the convolutional layer to be the weights of the 4 defined filters
k_height, k_width = weight.shape[2:]
# defines the convolutional layer, assumes there are 4 grayscale filters
# torch.nn.Conv2d(in_channels, out_channels, kernel_size, stride=1, padding=0, dilation=1, groups=1, bias=True)
self.conv = nn.Conv2d(1, 4, kernel_size=(k_height, k_width), bias=False)
self.conv.weight = torch.nn.Parameter(weight)
# define a pooling layer
self.pool = nn.MaxPool2d(2, 2)
def forward(self, x):
# calculates the output of a convolutional layer
# pre- and post-activation
conv_x = self.conv(x)
activated_x = F.relu(conv_x)
# applies pooling layer
pooled_x = self.pool(activated_x)
# returns all layers
return conv_x, activated_x, pooled_x
# instantiate the model and set the weights
weight = torch.from_numpy(filters).unsqueeze(1).type(torch.FloatTensor)
model = Net(weight)
# print out the layer in the network
print(model)
###Output
Net(
(conv): Conv2d(1, 4, kernel_size=(4, 4), stride=(1, 1), bias=False)
(pool): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
)
###Markdown
Visualize the output of each filterFirst, we'll define a helper function, `viz_layer` that takes in a specific layer and number of filters (optional argument), and displays the output of that layer once an image has been passed through.
###Code
# helper function for visualizing the output of a given layer
# default number of filters is 4
def viz_layer(layer, n_filters= 4):
fig = plt.figure(figsize=(20, 20))
for i in range(n_filters):
ax = fig.add_subplot(1, n_filters, i+1)
# grab layer outputs
ax.imshow(np.squeeze(layer[0,i].data.numpy()), cmap='gray')
ax.set_title('Output %s' % str(i+1))
###Output
_____no_output_____
###Markdown
Let's look at the output of a convolutional layer after a ReLu activation function is applied. ReLU activationA ReLU function turns all negative pixel values in 0's (black). See the equation pictured below for input pixel values, `x`.
###Code
# plot original image
plt.imshow(gray_img, cmap='gray')
# visualize all filters
fig = plt.figure(figsize=(12, 6))
fig.subplots_adjust(left=0, right=1.5, bottom=0.8, top=1, hspace=0.05, wspace=0.05)
for i in range(4):
ax = fig.add_subplot(1, 4, i+1, xticks=[], yticks=[])
ax.imshow(filters[i], cmap='gray')
ax.set_title('Filter %s' % str(i+1))
# convert the image into an input Tensor
gray_img_tensor = torch.from_numpy(gray_img).unsqueeze(0).unsqueeze(1)
# get all the layers
conv_layer, activated_layer, pooled_layer = model(gray_img_tensor)
# visualize the output of the activated conv layer
viz_layer(activated_layer)
###Output
_____no_output_____
###Markdown
Visualize the output of the pooling layerThen, take a look at the output of a pooling layer. The pooling layer takes as input the feature maps pictured above and reduces the dimensionality of those maps, by some pooling factor, by constructing a new, smaller image of only the maximum (brightest) values in a given kernel area.Take a look at the values on the x, y axes to see how the image has changed size.
###Code
# visualize the output of the pooling layer
viz_layer(pooled_layer)
###Output
_____no_output_____
###Markdown
Maxpooling LayerIn this notebook, we add and visualize the output of a maxpooling layer in a CNN. A convolutional layer + activation function, followed by a pooling layer, and a linear layer (to create a desired output size) make up the basic layers of a CNN. Import the image
###Code
import cv2
import matplotlib.pyplot as plt
%matplotlib inline
# TODO: Feel free to try out your own images here by changing img_path
# to a file path to another image on your computer!
img_path = 'data/udacity_sdc.png'
# load color image
bgr_img = cv2.imread(img_path)
# convert to grayscale
gray_img = cv2.cvtColor(bgr_img, cv2.COLOR_BGR2GRAY)
# normalize, rescale entries to lie in [0,1]
gray_img = gray_img.astype("float32")/255
# plot image
plt.imshow(gray_img, cmap='gray')
plt.show()
###Output
_____no_output_____
###Markdown
Define and visualize the filters
###Code
import numpy as np
## TODO: Feel free to modify the numbers here, to try out another filter!
filter_vals = np.array([[-1, -1, 1, 1], [-1, -1, 1, 1], [-1, -1, 1, 1], [-1, -1, 1, 1]])
print('Filter shape: ', filter_vals.shape)
# Defining four different filters,
# all of which are linear combinations of the `filter_vals` defined above
# define four filters
filter_1 = filter_vals
filter_2 = -filter_1
filter_3 = filter_1.T
filter_4 = -filter_3
filters = np.array([filter_1, filter_2, filter_3, filter_4])
# For an example, print out the values of filter 1
print('Filter 1: \n', filter_1)
###Output
Filter 1:
[[-1 -1 1 1]
[-1 -1 1 1]
[-1 -1 1 1]
[-1 -1 1 1]]
###Markdown
Define convolutional and pooling layersYou've seen how to define a convolutional layer, next is a:* Pooling layerIn the next cell, we initialize a convolutional layer so that it contains all the created filters. Then add a maxpooling layer, [documented here](http://pytorch.org/docs/stable/_modules/torch/nn/modules/pooling.html), with a kernel size of (2x2) so you can see that the image resolution has been reduced after this step!A maxpooling layer reduces the x-y size of an input and only keeps the most *active* pixel values. Below is an example of a 2x2 pooling kernel, with a stride of 2, appied to a small patch of grayscale pixel values; reducing the x-y size of the patch by a factor of 2. Only the maximum pixel values in 2x2 remain in the new, pooled output.
###Code
import torch
import torch.nn as nn
import torch.nn.functional as F
# define a neural network with a convolutional layer with four filters
# AND a pooling layer of size (2, 2)
class Net(nn.Module):
def __init__(self, weight):
super(Net, self).__init__()
# initializes the weights of the convolutional layer to be the weights of the 4 defined filters
k_height, k_width = weight.shape[2:]
# assumes there are 4 grayscale filters
self.conv = nn.Conv2d(1, 4, kernel_size=(k_height, k_width), bias=False)
self.conv.weight = torch.nn.Parameter(weight)
# define a pooling layer
self.pool = nn.MaxPool2d(2, 2)
def forward(self, x):
# calculates the output of a convolutional layer
# pre- and post-activation
conv_x = self.conv(x)
activated_x = F.relu(conv_x)
# applies pooling layer
pooled_x = self.pool(activated_x)
# returns all layers
return conv_x, activated_x, pooled_x
# instantiate the model and set the weights
weight = torch.from_numpy(filters).unsqueeze(1).type(torch.FloatTensor)
model = Net(weight)
# print out the layer in the network
print(model)
###Output
Net(
(conv): Conv2d(1, 4, kernel_size=(4, 4), stride=(1, 1), bias=False)
(pool): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
)
###Markdown
Visualize the output of each filterFirst, we'll define a helper function, `viz_layer` that takes in a specific layer and number of filters (optional argument), and displays the output of that layer once an image has been passed through.
###Code
# helper function for visualizing the output of a given layer
# default number of filters is 4
def viz_layer(layer, n_filters= 4):
fig = plt.figure(figsize=(20, 20))
for i in range(n_filters):
ax = fig.add_subplot(1, n_filters, i+1)
# grab layer outputs
ax.imshow(np.squeeze(layer[0,i].data.numpy()), cmap='gray')
ax.set_title('Output %s' % str(i+1))
###Output
_____no_output_____
###Markdown
Let's look at the output of a convolutional layer after a ReLu activation function is applied. ReLu activationA ReLu function turns all negative pixel values in 0's (black). See the equation pictured below for input pixel values, `x`.
###Code
# plot original image
plt.imshow(gray_img, cmap='gray')
# visualize all filters
fig = plt.figure(figsize=(12, 6))
fig.subplots_adjust(left=0, right=1.5, bottom=0.8, top=1, hspace=0.05, wspace=0.05)
for i in range(4):
ax = fig.add_subplot(1, 4, i+1, xticks=[], yticks=[])
ax.imshow(filters[i], cmap='gray')
ax.set_title('Filter %s' % str(i+1))
# convert the image into an input Tensor
gray_img_tensor = torch.from_numpy(gray_img).unsqueeze(0).unsqueeze(1)
# get all the layers
conv_layer, activated_layer, pooled_layer = model(gray_img_tensor)
# visualize the output of the activated conv layer
viz_layer(activated_layer)
###Output
_____no_output_____
###Markdown
Visualize the output of the pooling layerThen, take a look at the output of a pooling layer. The pooling layer takes as input the feature maps pictured above and reduces the dimensionality of those maps, by some pooling factor, by constructing a new, smaller image of only the maximum (brightest) values in a given kernel area.Take a look at the values on the x, y axes to see how the image has changed size.
###Code
# visualize the output of the pooling layer
viz_layer(pooled_layer)
###Output
_____no_output_____
###Markdown
Maxpooling LayerIn this notebook, we add and visualize the output of a maxpooling layer in a CNN. A convolutional layer + activation function, followed by a pooling layer, and a linear layer (to create a desired output size) make up the basic layers of a CNN. Import the image
###Code
import cv2
import matplotlib.pyplot as plt
%matplotlib inline
# TODO: Feel free to try out your own images here by changing img_path
# to a file path to another image on your computer!
img_path = 'data/udacity_sdc.png'
# load color image
bgr_img = cv2.imread(img_path)
# convert to grayscale
gray_img = cv2.cvtColor(bgr_img, cv2.COLOR_BGR2GRAY)
# normalize, rescale entries to lie in [0,1]
gray_img = gray_img.astype("float32")/255
# plot image
plt.imshow(gray_img, cmap='gray')
plt.show()
###Output
_____no_output_____
###Markdown
Define and visualize the filters
###Code
import numpy as np
## TODO: Feel free to modify the numbers here, to try out another filter!
filter_vals = np.array([[-1, -1, 1, 1], [-1, -1, 1, 1], [-1, -1, 1, 1], [-1, -1, 1, 1]])
print('Filter shape: ', filter_vals.shape)
# Defining four different filters,
# all of which are linear combinations of the `filter_vals` defined above
# define four filters
filter_1 = filter_vals
filter_2 = -filter_1
filter_3 = filter_1.T
filter_4 = -filter_3
filters = np.array([filter_1, filter_2, filter_3, filter_4])
# For an example, print out the values of filter 1
print('Filter 1: \n', filter_1)
###Output
_____no_output_____
###Markdown
Define convolutional and pooling layersYou've seen how to define a convolutional layer, next is a:* Pooling layerIn the next cell, we initialize a convolutional layer so that it contains all the created filters. Then add a maxpooling layer, [documented here](http://pytorch.org/docs/stable/_modules/torch/nn/modules/pooling.html), with a kernel size of (2x2) so you can see that the image resolution has been reduced after this step!A maxpooling layer reduces the x-y size of an input and only keeps the most *active* pixel values. Below is an example of a 2x2 pooling kernel, with a stride of 2, applied to a small patch of grayscale pixel values; reducing the x-y size of the patch by a factor of 2. Only the maximum pixel values in 2x2 remain in the new, pooled output.
###Code
import torch
import torch.nn as nn
import torch.nn.functional as F
# define a neural network with a convolutional layer with four filters
# AND a pooling layer of size (2, 2)
class Net(nn.Module):
def __init__(self, weight):
super(Net, self).__init__()
# initializes the weights of the convolutional layer to be the weights of the 4 defined filters
k_height, k_width = weight.shape[2:]
# assumes there are 4 grayscale filters
self.conv = nn.Conv2d(1, 4, kernel_size=(k_height, k_width), bias=False)
self.conv.weight = torch.nn.Parameter(weight)
# define a pooling layer
self.pool = nn.MaxPool2d(2, 2)
def forward(self, x):
# calculates the output of a convolutional layer
# pre- and post-activation
conv_x = self.conv(x)
activated_x = F.relu(conv_x)
# applies pooling layer
pooled_x = self.pool(activated_x)
# returns all layers
return conv_x, activated_x, pooled_x
# instantiate the model and set the weights
weight = torch.from_numpy(filters).unsqueeze(1).type(torch.FloatTensor)
model = Net(weight)
# print out the layer in the network
print(model)
###Output
_____no_output_____
###Markdown
Visualize the output of each filterFirst, we'll define a helper function, `viz_layer` that takes in a specific layer and number of filters (optional argument), and displays the output of that layer once an image has been passed through.
###Code
# helper function for visualizing the output of a given layer
# default number of filters is 4
def viz_layer(layer, n_filters= 4):
fig = plt.figure(figsize=(20, 20))
for i in range(n_filters):
ax = fig.add_subplot(1, n_filters, i+1)
# grab layer outputs
ax.imshow(np.squeeze(layer[0,i].data.numpy()), cmap='gray')
ax.set_title('Output %s' % str(i+1))
###Output
_____no_output_____
###Markdown
Let's look at the output of a convolutional layer after a ReLu activation function is applied. ReLu activationA ReLu function turns all negative pixel values in 0's (black). See the equation pictured below for input pixel values, `x`.
###Code
# plot original image
plt.imshow(gray_img, cmap='gray')
# visualize all filters
fig = plt.figure(figsize=(12, 6))
fig.subplots_adjust(left=0, right=1.5, bottom=0.8, top=1, hspace=0.05, wspace=0.05)
for i in range(4):
ax = fig.add_subplot(1, 4, i+1, xticks=[], yticks=[])
ax.imshow(filters[i], cmap='gray')
ax.set_title('Filter %s' % str(i+1))
# convert the image into an input Tensor
gray_img_tensor = torch.from_numpy(gray_img).unsqueeze(0).unsqueeze(1)
# get all the layers
conv_layer, activated_layer, pooled_layer = model(gray_img_tensor)
# visualize the output of the activated conv layer
viz_layer(activated_layer)
###Output
_____no_output_____
###Markdown
Visualize the output of the pooling layerThen, take a look at the output of a pooling layer. The pooling layer takes as input the feature maps pictured above and reduces the dimensionality of those maps, by some pooling factor, by constructing a new, smaller image of only the maximum (brightest) values in a given kernel area.Take a look at the values on the x, y axes to see how the image has changed size.
###Code
# visualize the output of the pooling layer
viz_layer(pooled_layer)
###Output
_____no_output_____
###Markdown
Maxpooling LayerIn this notebook, we add and visualize the output of a maxpooling layer in a CNN. A convolutional layer + activation function, followed by a pooling layer, and a linear layer (to create a desired output size) make up the basic layers of a CNN. Import the image
###Code
import cv2
import matplotlib.pyplot as plt
%matplotlib inline
# TODO: Feel free to try out your own images here by changing img_path
# to a file path to another image on your computer!
img_path = 'data/udacity_sdc.png'
# load color image
bgr_img = cv2.imread(img_path)
# convert to grayscale
gray_img = cv2.cvtColor(bgr_img, cv2.COLOR_BGR2GRAY)
# normalize, rescale entries to lie in [0,1]
gray_img = gray_img.astype("float32")/255
# plot image
plt.imshow(gray_img, cmap='gray')
plt.show()
###Output
_____no_output_____
###Markdown
Define and visualize the filters
###Code
import numpy as np
## TODO: Feel free to modify the numbers here, to try out another filter!
filter_vals = np.array([[-1, -1, 1, 1], [-1, -1, 1, 1], [-1, -1, 1, 1], [-1, -1, 1, 1]])
print('Filter shape: ', filter_vals.shape)
# Defining four different filters,
# all of which are linear combinations of the `filter_vals` defined above
# define four filters
filter_1 = filter_vals
filter_2 = -filter_1
filter_3 = filter_1.T
filter_4 = -filter_3
filters = np.array([filter_1, filter_2, filter_3, filter_4])
# For an example, print out the values of filter 1
print('Filter 1: \n', filter_1)
###Output
Filter 1:
[[-1 -1 1 1]
[-1 -1 1 1]
[-1 -1 1 1]
[-1 -1 1 1]]
###Markdown
Define convolutional and pooling layersYou've seen how to define a convolutional layer, next is a:* Pooling layerIn the next cell, we initialize a convolutional layer so that it contains all the created filters. Then add a maxpooling layer, [documented here](http://pytorch.org/docs/stable/_modules/torch/nn/modules/pooling.html), with a kernel size of (2x2) so you can see that the image resolution has been reduced after this step!A maxpooling layer reduces the x-y size of an input and only keeps the most *active* pixel values. Below is an example of a 2x2 pooling kernel, with a stride of 2, appied to a small patch of grayscale pixel values; reducing the x-y size of the patch by a factor of 2. Only the maximum pixel values in 2x2 remain in the new, pooled output.
###Code
import torch
import torch.nn as nn
import torch.nn.functional as F
# define a neural network with a convolutional layer with four filters
# AND a pooling layer of size (2, 2)
class Net(nn.Module):
def __init__(self, weight):
super(Net, self).__init__()
# initializes the weights of the convolutional layer to be the weights of the 4 defined filters
k_height, k_width = weight.shape[2:]
# assumes there are 4 grayscale filters
self.conv = nn.Conv2d(1, 4, kernel_size=(k_height, k_width), bias=False)
self.conv.weight = torch.nn.Parameter(weight)
# define a pooling layer
self.pool = nn.MaxPool2d(2, 2)
def forward(self, x):
# calculates the output of a convolutional layer
# pre- and post-activation
conv_x = self.conv(x)
activated_x = F.relu(conv_x)
# applies pooling layer
pooled_x = self.pool(activated_x)
# returns all layers
return conv_x, activated_x, pooled_x
# instantiate the model and set the weights
weight = torch.from_numpy(filters).unsqueeze(1).type(torch.FloatTensor)
model = Net(weight)
# print out the layer in the network
print(model)
###Output
Net(
(conv): Conv2d(1, 4, kernel_size=(4, 4), stride=(1, 1), bias=False)
(pool): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
)
###Markdown
Visualize the output of each filterFirst, we'll define a helper function, `viz_layer` that takes in a specific layer and number of filters (optional argument), and displays the output of that layer once an image has been passed through.
###Code
# helper function for visualizing the output of a given layer
# default number of filters is 4
def viz_layer(layer, n_filters= 4):
fig = plt.figure(figsize=(20, 20))
for i in range(n_filters):
ax = fig.add_subplot(1, n_filters, i+1)
# grab layer outputs
ax.imshow(np.squeeze(layer[0,i].data.numpy()), cmap='gray')
ax.set_title('Output %s' % str(i+1))
###Output
_____no_output_____
###Markdown
Let's look at the output of a convolutional layer after a ReLu activation function is applied. ReLu activationA ReLu function turns all negative pixel values in 0's (black). See the equation pictured below for input pixel values, `x`.
###Code
# plot original image
plt.imshow(gray_img, cmap='gray')
# visualize all filters
fig = plt.figure(figsize=(12, 6))
fig.subplots_adjust(left=0, right=1.5, bottom=0.8, top=1, hspace=0.05, wspace=0.05)
for i in range(4):
ax = fig.add_subplot(1, 4, i+1, xticks=[], yticks=[])
ax.imshow(filters[i], cmap='gray')
ax.set_title('Filter %s' % str(i+1))
# convert the image into an input Tensor
gray_img_tensor = torch.from_numpy(gray_img).unsqueeze(0).unsqueeze(1)
# get all the layers
conv_layer, activated_layer, pooled_layer = model(gray_img_tensor)
# visualize the output of the activated conv layer
viz_layer(activated_layer)
###Output
_____no_output_____
###Markdown
Visualize the output of the pooling layerThen, take a look at the output of a pooling layer. The pooling layer takes as input the feature maps pictured above and reduces the dimensionality of those maps, by some pooling factor, by constructing a new, smaller image of only the maximum (brightest) values in a given kernel area.Take a look at the values on the x, y axes to see how the image has changed size.
###Code
# visualize the output of the pooling layer
viz_layer(pooled_layer)
###Output
_____no_output_____
###Markdown
Maxpooling LayerIn this notebook, we add and visualize the output of a maxpooling layer in a CNN. A convolutional layer + activation function, followed by a pooling layer, and a linear layer (to create a desired output size) make up the basic layers of a CNN. Import the image
###Code
import cv2
import matplotlib.pyplot as plt
%matplotlib inline
# TODO: Feel free to try out your own images here by changing img_path
# to a file path to another image on your computer!
img_path = 'data/udacity_sdc.png'
# load color image
bgr_img = cv2.imread(img_path)
# convert to grayscale
gray_img = cv2.cvtColor(bgr_img, cv2.COLOR_BGR2GRAY)
# normalize, rescale entries to lie in [0,1]
gray_img = gray_img.astype("float32")/255
# plot image
plt.imshow(gray_img, cmap='gray')
plt.show()
###Output
_____no_output_____
###Markdown
Define and visualize the filters
###Code
import numpy as np
## TODO: Feel free to modify the numbers here, to try out another filter!
filter_vals = np.array([[-1, -1, 1, 1], [-1, -1, 1, 1], [-1, -1, 1, 1], [-1, -1, 1, 1]])
print('Filter shape: ', filter_vals.shape)
# Defining four different filters,
# all of which are linear combinations of the `filter_vals` defined above
# define four filters
filter_1 = filter_vals
filter_2 = -filter_1
filter_3 = filter_1.T
filter_4 = -filter_3
filters = np.array([filter_1, filter_2, filter_3, filter_4])
# For an example, print out the values of filter 1
print('Filter 1: \n', filter_1)
###Output
Filter 1:
[[-1 -1 1 1]
[-1 -1 1 1]
[-1 -1 1 1]
[-1 -1 1 1]]
###Markdown
Define convolutional and pooling layersYou've seen how to define a convolutional layer, next is a:* Pooling layerIn the next cell, we initialize a convolutional layer so that it contains all the created filters. Then add a maxpooling layer, [documented here](http://pytorch.org/docs/stable/_modules/torch/nn/modules/pooling.html), with a kernel size of (2x2) so you can see that the image resolution has been reduced after this step!A maxpooling layer reduces the x-y size of an input and only keeps the most *active* pixel values. Below is an example of a 2x2 pooling kernel, with a stride of 2, appied to a small patch of grayscale pixel values; reducing the x-y size of the patch by a factor of 2. Only the maximum pixel values in 2x2 remain in the new, pooled output.
###Code
import torch
import torch.nn as nn
import torch.nn.functional as F
# define a neural network with a convolutional layer with four filters
# AND a pooling layer of size (2, 2)
class Net(nn.Module):
def __init__(self, weight):
super(Net, self).__init__()
# initializes the weights of the convolutional layer to be the weights of the 4 defined filters
k_height, k_width = weight.shape[2:]
# assumes there are 4 grayscale filters
self.conv = nn.Conv2d(1, 4, kernel_size=(k_height, k_width), bias=False)
self.conv.weight = torch.nn.Parameter(weight)
# define a pooling layer
self.pool = nn.MaxPool2d(2, 2)
def forward(self, x):
# calculates the output of a convolutional layer
# pre- and post-activation
conv_x = self.conv(x)
activated_x = F.relu(conv_x)
# applies pooling layer
pooled_x = self.pool(activated_x)
# returns all layers
return conv_x, activated_x, pooled_x
# instantiate the model and set the weights
weight = torch.from_numpy(filters).unsqueeze(1).type(torch.FloatTensor)
model = Net(weight)
# print out the layer in the network
print(model)
###Output
Net(
(conv): Conv2d(1, 4, kernel_size=(4, 4), stride=(1, 1), bias=False)
(pool): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
)
###Markdown
Visualize the output of each filterFirst, we'll define a helper function, `viz_layer` that takes in a specific layer and number of filters (optional argument), and displays the output of that layer once an image has been passed through.
###Code
# helper function for visualizing the output of a given layer
# default number of filters is 4
def viz_layer(layer, n_filters= 4):
fig = plt.figure(figsize=(20, 20))
for i in range(n_filters):
ax = fig.add_subplot(1, n_filters, i+1)
# grab layer outputs
ax.imshow(np.squeeze(layer[0,i].data.numpy()), cmap='gray')
ax.set_title('Output %s' % str(i+1))
###Output
_____no_output_____
###Markdown
Let's look at the output of a convolutional layer after a ReLu activation function is applied. ReLu activationA ReLu function turns all negative pixel values in 0's (black). See the equation pictured below for input pixel values, `x`.
###Code
# plot original image
plt.imshow(gray_img, cmap='gray')
# visualize all filters
fig = plt.figure(figsize=(12, 6))
fig.subplots_adjust(left=0, right=1.5, bottom=0.8, top=1, hspace=0.05, wspace=0.05)
for i in range(4):
ax = fig.add_subplot(1, 4, i+1, xticks=[], yticks=[])
ax.imshow(filters[i], cmap='gray')
ax.set_title('Filter %s' % str(i+1))
# convert the image into an input Tensor
gray_img_tensor = torch.from_numpy(gray_img).unsqueeze(0).unsqueeze(1)
# get all the layers
conv_layer, activated_layer, pooled_layer = model(gray_img_tensor)
# visualize the output of the activated conv layer
viz_layer(activated_layer)
###Output
_____no_output_____
###Markdown
Visualize the output of the pooling layerThen, take a look at the output of a pooling layer. The pooling layer takes as input the feature maps pictured above and reduces the dimensionality of those maps, by some pooling factor, by constructing a new, smaller image of only the maximum (brightest) values in a given kernel area.Take a look at the values on the x, y axes to see how the image has changed size.
###Code
# visualize the output of the pooling layer
viz_layer(pooled_layer)
activated_layer.shape
pooled_layer.shape
###Output
_____no_output_____
###Markdown
Maxpooling LayerIn this notebook, we add and visualize the output of a maxpooling layer in a CNN. A convolutional layer + activation function, followed by a pooling layer, and a linear layer (to create a desired output size) make up the basic layers of a CNN. Import the image
###Code
import cv2
import matplotlib.pyplot as plt
%matplotlib inline
# TODO: Feel free to try out your own images here by changing img_path
# to a file path to another image on your computer!
img_path = 'data/udacity_sdc.png'
# load color image
bgr_img = cv2.imread(img_path)
# convert to grayscale
gray_img = cv2.cvtColor(bgr_img, cv2.COLOR_BGR2GRAY)
# normalize, rescale entries to lie in [0,1]
gray_img = gray_img.astype("float32")/255
# plot image
plt.imshow(gray_img, cmap='gray')
plt.show()
###Output
_____no_output_____
###Markdown
Define and visualize the filters
###Code
import numpy as np
## TODO: Feel free to modify the numbers here, to try out another filter!
filter_vals = np.array([[-1, -1, 1, 1], [-1, -1, 1, 1], [-1, -1, 1, 1], [-1, -1, 1, 1]])
print('Filter shape: ', filter_vals.shape)
# Defining four different filters,
# all of which are linear combinations of the `filter_vals` defined above
# define four filters
filter_1 = filter_vals
filter_2 = -filter_1
filter_3 = filter_1.T
filter_4 = -filter_3
filters = np.array([filter_1, filter_2, filter_3, filter_4])
# For an example, print out the values of filter 1
print('Filter 1: \n', filter_1)
###Output
Filter 1:
[[-1 -1 1 1]
[-1 -1 1 1]
[-1 -1 1 1]
[-1 -1 1 1]]
###Markdown
Define convolutional and pooling layersYou've seen how to define a convolutional layer, next is a:* Pooling layerIn the next cell, we initialize a convolutional layer so that it contains all the created filters. Then add a maxpooling layer, [documented here](http://pytorch.org/docs/stable/_modules/torch/nn/modules/pooling.html), with a kernel size of (2x2) so you can see that the image resolution has been reduced after this step!A maxpooling layer reduces the x-y size of an input and only keeps the most *active* pixel values. Below is an example of a 2x2 pooling kernel, with a stride of 2, applied to a small patch of grayscale pixel values; reducing the x-y size of the patch by a factor of 2. Only the maximum pixel values in 2x2 remain in the new, pooled output.
###Code
import torch
import torch.nn as nn
import torch.nn.functional as F
# define a neural network with a convolutional layer with four filters
# AND a pooling layer of size (2, 2)
class Net(nn.Module):
def __init__(self, weight):
super(Net, self).__init__()
# initializes the weights of the convolutional layer to be the weights of the 4 defined filters
k_height, k_width = weight.shape[2:]
# assumes there are 4 grayscale filters
self.conv = nn.Conv2d(1, 4, kernel_size=(k_height, k_width), bias=False)
self.conv.weight = torch.nn.Parameter(weight)
# define a pooling layer
self.pool = nn.MaxPool2d(2, 2)
def forward(self, x):
# calculates the output of a convolutional layer
# pre- and post-activation
conv_x = self.conv(x)
activated_x = F.relu(conv_x)
# applies pooling layer
pooled_x = self.pool(activated_x)
# returns all layers
return conv_x, activated_x, pooled_x
# instantiate the model and set the weights
weight = torch.from_numpy(filters).unsqueeze(1).type(torch.FloatTensor)
model = Net(weight)
# print out the layer in the network
print(model)
###Output
Net(
(conv): Conv2d(1, 4, kernel_size=(4, 4), stride=(1, 1), bias=False)
(pool): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
)
###Markdown
Visualize the output of each filterFirst, we'll define a helper function, `viz_layer` that takes in a specific layer and number of filters (optional argument), and displays the output of that layer once an image has been passed through.
###Code
# helper function for visualizing the output of a given layer
# default number of filters is 4
def viz_layer(layer, n_filters= 4):
fig = plt.figure(figsize=(20, 20))
for i in range(n_filters):
ax = fig.add_subplot(1, n_filters, i+1)
# grab layer outputs
ax.imshow(np.squeeze(layer[0,i].data.numpy()), cmap='gray')
ax.set_title('Output %s' % str(i+1))
###Output
_____no_output_____
###Markdown
Let's look at the output of a convolutional layer after a ReLu activation function is applied. ReLu activationA ReLu function turns all negative pixel values in 0's (black). See the equation pictured below for input pixel values, `x`.
###Code
# plot original image
plt.imshow(gray_img, cmap='gray')
# visualize all filters
fig = plt.figure(figsize=(12, 6))
fig.subplots_adjust(left=0, right=1.5, bottom=0.8, top=1, hspace=0.05, wspace=0.05)
for i in range(4):
ax = fig.add_subplot(1, 4, i+1, xticks=[], yticks=[])
ax.imshow(filters[i], cmap='gray')
ax.set_title('Filter %s' % str(i+1))
# convert the image into an input Tensor
gray_img_tensor = torch.from_numpy(gray_img).unsqueeze(0).unsqueeze(1)
# get all the layers
conv_layer, activated_layer, pooled_layer = model(gray_img_tensor)
# visualize the output of the activated conv layer
viz_layer(activated_layer)
###Output
_____no_output_____
###Markdown
Visualize the output of the pooling layerThen, take a look at the output of a pooling layer. The pooling layer takes as input the feature maps pictured above and reduces the dimensionality of those maps, by some pooling factor, by constructing a new, smaller image of only the maximum (brightest) values in a given kernel area.Take a look at the values on the x, y axes to see how the image has changed size.
###Code
# visualize the output of the pooling layer
viz_layer(pooled_layer)
###Output
_____no_output_____
###Markdown
Maxpooling LayerIn this notebook, we add and visualize the output of a maxpooling layer in a CNN. A convolutional layer + activation function, followed by a pooling layer, and a linear layer (to create a desired output size) make up the basic layers of a CNN. Import the image
###Code
import cv2
import matplotlib.pyplot as plt
%matplotlib inline
# TODO: Feel free to try out your own images here by changing img_path
# to a file path to another image on your computer!
img_path = 'data/udacity_sdc.png'
# load color image
bgr_img = cv2.imread(img_path)
# convert to grayscale
gray_img = cv2.cvtColor(bgr_img, cv2.COLOR_BGR2GRAY)
# normalize, rescale entries to lie in [0,1]
gray_img = gray_img.astype("float32")/255
# plot image
plt.imshow(gray_img, cmap='gray')
plt.show()
###Output
_____no_output_____
###Markdown
Define and visualize the filters
###Code
import numpy as np
## TODO: Feel free to modify the numbers here, to try out another filter!
filter_vals = np.array([[-1, -1, 1, 1], [-1, -1, 1, 1], [-1, -1, 1, 1], [-1, -1, 1, 1]])
print('Filter shape: ', filter_vals.shape)
# Defining four different filters,
# all of which are linear combinations of the `filter_vals` defined above
# define four filters
filter_1 = filter_vals
filter_2 = -filter_1
filter_3 = filter_1.T
filter_4 = -filter_3
filters = np.array([filter_1, filter_2, filter_3, filter_4])
# For an example, print out the values of filter 1
print('Filter 1: \n', filter_1)
###Output
Filter 1:
[[-1 -1 1 1]
[-1 -1 1 1]
[-1 -1 1 1]
[-1 -1 1 1]]
###Markdown
Define convolutional and pooling layersYou've seen how to define a convolutional layer, next is a:* Pooling layerIn the next cell, we initialize a convolutional layer so that it contains all the created filters. Then add a maxpooling layer, [documented here](http://pytorch.org/docs/stable/_modules/torch/nn/modules/pooling.html), with a kernel size of (2x2) so you can see that the image resolution has been reduced after this step!A maxpooling layer reduces the x-y size of an input and only keeps the most *active* pixel values. Below is an example of a 2x2 pooling kernel, with a stride of 2, applied to a small patch of grayscale pixel values; reducing the x-y size of the patch by a factor of 2. Only the maximum pixel values in 2x2 remain in the new, pooled output.
###Code
import torch
import torch.nn as nn
import torch.nn.functional as F
# define a neural network with a convolutional layer with four filters
# AND a pooling layer of size (2, 2)
class Net(nn.Module):
def __init__(self, weight):
super(Net, self).__init__()
# initializes the weights of the convolutional layer to be the weights of the 4 defined filters
k_height, k_width = weight.shape[2:]
# assumes there are 4 grayscale filters
self.conv = nn.Conv2d(1, 4, kernel_size=(k_height, k_width), bias=False)
self.conv.weight = torch.nn.Parameter(weight)
# define a pooling layer
self.pool = nn.MaxPool2d(2, 2)
def forward(self, x):
# calculates the output of a convolutional layer
# pre- and post-activation
conv_x = self.conv(x)
activated_x = F.relu(conv_x)
# applies pooling layer
pooled_x = self.pool(activated_x)
# returns all layers
return conv_x, activated_x, pooled_x
# instantiate the model and set the weights
weight = torch.from_numpy(filters).unsqueeze(1).type(torch.FloatTensor)
model = Net(weight)
# print out the layer in the network
print(model)
###Output
Net(
(conv): Conv2d(1, 4, kernel_size=(4, 4), stride=(1, 1), bias=False)
(pool): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
)
###Markdown
Visualize the output of each filterFirst, we'll define a helper function, `viz_layer` that takes in a specific layer and number of filters (optional argument), and displays the output of that layer once an image has been passed through.
###Code
# helper function for visualizing the output of a given layer
# default number of filters is 4
def viz_layer(layer, n_filters= 4):
fig = plt.figure(figsize=(20, 20))
for i in range(n_filters):
ax = fig.add_subplot(1, n_filters, i+1)
# grab layer outputs
ax.imshow(np.squeeze(layer[0,i].data.numpy()), cmap='gray')
ax.set_title('Output %s' % str(i+1))
###Output
_____no_output_____
###Markdown
Let's look at the output of a convolutional layer after a ReLu activation function is applied. ReLu activationA ReLu function turns all negative pixel values in 0's (black). See the equation pictured below for input pixel values, `x`.
###Code
# plot original image
plt.imshow(gray_img, cmap='gray')
# visualize all filters
fig = plt.figure(figsize=(12, 6))
fig.subplots_adjust(left=0, right=1.5, bottom=0.8, top=1, hspace=0.05, wspace=0.05)
for i in range(4):
ax = fig.add_subplot(1, 4, i+1, xticks=[], yticks=[])
ax.imshow(filters[i], cmap='gray')
ax.set_title('Filter %s' % str(i+1))
# convert the image into an input Tensor
gray_img_tensor = torch.from_numpy(gray_img).unsqueeze(0).unsqueeze(1)
# get all the layers
conv_layer, activated_layer, pooled_layer = model(gray_img_tensor)
# visualize the output of the activated conv layer
viz_layer(activated_layer)
###Output
_____no_output_____
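###Markdown
As a quick added check of the activation described above, this short sketch shows that `F.relu` computes max(0, x) element-wise: negative values become 0 (black) and positive values pass through unchanged.
###Code
import torch
import torch.nn.functional as F
x = torch.tensor([-3.0, -0.5, 0.0, 0.5, 3.0])
# negatives are clamped to zero, positives are unchanged
print(F.relu(x))  # tensor([0.0000, 0.0000, 0.0000, 0.5000, 3.0000])
###Output
_____no_output_____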
###Markdown
Visualize the output of the pooling layer. Then, take a look at the output of a pooling layer. The pooling layer takes as input the feature maps pictured above and reduces the dimensionality of those maps, by some pooling factor, by constructing a new, smaller image of only the maximum (brightest) values in a given kernel area. Take a look at the values on the x, y axes to see how the image has changed size.
###Code
# visualize the output of the pooling layer
viz_layer(pooled_layer)
###Output
_____no_output_____
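###Markdown
One added way to confirm the size reduction without reading the plot axes is to print the tensor shapes directly; this assumes `model` and `gray_img_tensor` from the cells above. With no padding and stride 1, the conv layer shrinks each side by kernel_size - 1 = 3, and the pooling layer then halves the x-y size.
###Code
# assumes `model` and `gray_img_tensor` defined in the cells above
conv_layer, activated_layer, pooled_layer = model(gray_img_tensor)
print('input: ', gray_img_tensor.shape)   # (1, 1, H, W)
print('conv:  ', conv_layer.shape)        # (1, 4, H-3, W-3), one channel per filter
print('pooled:', pooled_layer.shape)      # roughly half the conv height and width
###Output
_____no_output_____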
###Markdown
Maxpooling Layer. In this notebook, we add and visualize the output of a maxpooling layer in a CNN. A convolutional layer + activation function, followed by a pooling layer, and a linear layer (to create a desired output size) make up the basic layers of a CNN. Import the image.
###Code
import cv2
import matplotlib.pyplot as plt
%matplotlib inline
# TODO: Feel free to try out your own images here by changing img_path
# to a file path to another image on your computer!
img_path = 'data/udacity_sdc.png'
# load color image
bgr_img = cv2.imread(img_path)
# convert to grayscale
gray_img = cv2.cvtColor(bgr_img, cv2.COLOR_BGR2GRAY)
# normalize, rescale entries to lie in [0,1]
gray_img = gray_img.astype("float32")/255
# plot image
plt.imshow(gray_img, cmap='gray')
plt.show()
###Output
_____no_output_____
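###Markdown
A brief added aside on the loading steps above: OpenCV's `imread` returns channels in BGR order (hence the `bgr_img` name and the `COLOR_BGR2GRAY` flag), and dividing by 255 maps the 8-bit intensities into [0, 1]. A minimal check of those ranges, assuming the variables from the cell above:
###Code
# assumes `bgr_img` and `gray_img` defined in the cell above
print(bgr_img.dtype, bgr_img.min(), bgr_img.max())    # uint8, values in [0, 255]
print(gray_img.dtype, gray_img.min(), gray_img.max()) # float32, values in [0, 1]
###Output
_____no_output_____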
###Markdown
Define and visualize the filters
###Code
import numpy as np
## TODO: Feel free to modify the numbers here, to try out another filter!
filter_vals = np.array([[-1, -1, 1, 1], [-1, -1, 1, 1], [-1, -1, 1, 1], [-1, -1, 1, 1]])
print('Filter shape: ', filter_vals.shape)
# Define four different filters,
# all derived from the `filter_vals` defined above
filter_1 = filter_vals
filter_2 = -filter_1
filter_3 = filter_1.T
filter_4 = -filter_3
filters = np.array([filter_1, filter_2, filter_3, filter_4])
# As an example, print out the values of filter 1
print('Filter 1: \n', filter_1)
###Output
Filter 1:
[[-1 -1 1 1]
[-1 -1 1 1]
[-1 -1 1 1]
[-1 -1 1 1]]
###Markdown
Define convolutional and pooling layers. You've seen how to define a convolutional layer; next is a pooling layer. In the next cell, we initialize a convolutional layer so that it contains all the created filters. Then we add a maxpooling layer, [documented here](http://pytorch.org/docs/stable/_modules/torch/nn/modules/pooling.html), with a kernel size of (2x2) so you can see that the image resolution has been reduced after this step! A maxpooling layer reduces the x-y size of an input and only keeps the most *active* pixel values. Below is an example of a 2x2 pooling kernel, with a stride of 2, applied to a small patch of grayscale pixel values, reducing the x-y size of the patch by a factor of 2. Only the maximum pixel value in each 2x2 area remains in the new, pooled output.
###Code
import torch
import torch.nn as nn
import torch.nn.functional as F
# define a neural network with a convolutional layer with four filters
# AND a pooling layer of size (2, 2)
class Net(nn.Module):
def __init__(self, weight):
super(Net, self).__init__()
# initializes the weights of the convolutional layer to be the weights of the 4 defined filters
k_height, k_width = weight.shape[2:]
# assumes there are 4 grayscale filters
self.conv = nn.Conv2d(1, 4, kernel_size=(k_height, k_width), bias=False)
self.conv.weight = torch.nn.Parameter(weight)
# define a pooling layer
self.pool = nn.MaxPool2d(2, 2)
def forward(self, x):
# calculates the output of a convolutional layer
# pre- and post-activation
conv_x = self.conv(x)
activated_x = F.relu(conv_x)
# applies pooling layer
pooled_x = self.pool(activated_x)
# returns all layers
return conv_x, activated_x, pooled_x
# instantiate the model and set the weights
weight = torch.from_numpy(filters).unsqueeze(1).type(torch.FloatTensor)
print(weight)
print(weight.shape[2:])
model = Net(weight)
# print out the layer in the network
print(model)
###Output
tensor([[[[-1., -1., 1., 1.],
[-1., -1., 1., 1.],
[-1., -1., 1., 1.],
[-1., -1., 1., 1.]]],
[[[ 1., 1., -1., -1.],
[ 1., 1., -1., -1.],
[ 1., 1., -1., -1.],
[ 1., 1., -1., -1.]]],
[[[-1., -1., -1., -1.],
[-1., -1., -1., -1.],
[ 1., 1., 1., 1.],
[ 1., 1., 1., 1.]]],
[[[ 1., 1., 1., 1.],
[ 1., 1., 1., 1.],
[-1., -1., -1., -1.],
[-1., -1., -1., -1.]]]])
torch.Size([4, 4])
Net(
(conv): Conv2d(1, 4, kernel_size=(4, 4), stride=(1, 1), bias=False)
(pool): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
)
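###Markdown
The printed tensor above makes the shape bookkeeping visible: `filters` has shape (4, 4, 4), i.e. four 4x4 filters, and `unsqueeze(1)` inserts a channel axis to give (4, 1, 4, 4), which is the (out_channels, in_channels, kernel_height, kernel_width) layout `nn.Conv2d` expects for its weights. A small added check, assuming the variables from the cell above:
###Code
# assumes `filters`, `weight`, and `model` defined in the cell above
print(filters.shape)            # (4, 4, 4): four 4x4 filters
print(weight.shape)             # torch.Size([4, 1, 4, 4]): channel axis added by unsqueeze(1)
print(model.conv.weight.shape)  # matches, since we assigned it as the conv weights
###Output
_____no_output_____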
###Markdown
Visualize the output of each filter. First, we'll define a helper function, `viz_layer`, that takes in a specific layer and a number of filters (an optional argument) and displays the output of that layer once an image has been passed through.
###Code
# helper function for visualizing the output of a given layer
# default number of filters is 4
def viz_layer(layer, n_filters=4):
fig = plt.figure(figsize=(20, 20))
for i in range(n_filters):
ax = fig.add_subplot(1, n_filters, i+1)
# grab layer outputs
ax.imshow(np.squeeze(layer[0,i].data.numpy()), cmap='gray')
ax.set_title('Output %s' % str(i+1))
###Output
_____no_output_____
###Markdown
Let's look at the output of a convolutional layer after a ReLU activation function is applied. ReLU activation: a ReLU function turns all negative pixel values into 0's (black), computing ReLU(x) = max(0, x) for each input pixel value `x`.
###Code
# plot original image
plt.imshow(gray_img, cmap='gray')
# visualize all filters
fig = plt.figure(figsize=(12, 6))
fig.subplots_adjust(left=0, right=1.5, bottom=0.8, top=1, hspace=0.05, wspace=0.05)
for i in range(4):
ax = fig.add_subplot(1, 4, i+1, xticks=[], yticks=[])
ax.imshow(filters[i], cmap='gray')
ax.set_title('Filter %s' % str(i+1))
# convert the image into an input Tensor
gray_img_tensor = torch.from_numpy(gray_img).unsqueeze(0).unsqueeze(1)
print(gray_img_tensor)
# get all the layers
conv_layer, activated_layer, pooled_layer = model(gray_img_tensor)
# visualize the output of the activated conv layer
viz_layer(activated_layer)
###Output
tensor([[[[0.1333, 0.1333, 0.1176, ..., 0.2431, 0.2431, 0.2863],
[0.1608, 0.1412, 0.1373, ..., 0.2392, 0.2471, 0.2863],
[0.1882, 0.1725, 0.1686, ..., 0.2353, 0.2471, 0.2706],
...,
[0.7176, 0.7216, 0.6941, ..., 0.7922, 0.7725, 0.7412],
[0.7373, 0.7451, 0.6784, ..., 0.7725, 0.7686, 0.7490],
[0.7216, 0.7412, 0.6745, ..., 0.7137, 0.7137, 0.6902]]]])
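###Markdown
Note the two `unsqueeze` calls above: `gray_img` starts as a 2-D (height, width) array, while the model expects a 4-D (batch, channels, height, width) input, so `unsqueeze(0).unsqueeze(1)` adds a batch axis and a channel axis of size 1. A quick added check, assuming `gray_img` and `gray_img_tensor` from the cells above:
###Code
# assumes `gray_img` and `gray_img_tensor` defined in the cells above
print(gray_img.shape)         # (H, W), a plain 2-D grayscale image
print(gray_img_tensor.shape)  # torch.Size([1, 1, H, W]) after the two unsqueezes
###Output
_____no_output_____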
###Markdown
Visualize the output of the pooling layer. Then, take a look at the output of a pooling layer. The pooling layer takes as input the feature maps pictured above and reduces the dimensionality of those maps, by some pooling factor, by constructing a new, smaller image of only the maximum (brightest) values in a given kernel area. Take a look at the values on the x, y axes to see how the image has changed size.
###Code
# visualize the output of the pooling layer
viz_layer(pooled_layer)
###Output
_____no_output_____
###Markdown
Maxpooling LayerIn this notebook, we add and visualize the output of a maxpooling layer in a CNN. A convolutional layer + activation function, followed by a pooling layer, and a linear layer (to create a desired output size) make up the basic layers of a CNN. Import the image
###Code
import cv2
import matplotlib.pyplot as plt
%matplotlib inline
# TODO: Feel free to try out your own images here by changing img_path
# to a file path to another image on your computer!
img_path = 'data/udacity_sdc.png'
# load color image
bgr_img = cv2.imread(img_path)
# convert to grayscale
gray_img = cv2.cvtColor(bgr_img, cv2.COLOR_BGR2GRAY)
# normalize, rescale entries to lie in [0,1]
gray_img = gray_img.astype("float32")/255
# plot image
plt.imshow(gray_img, cmap='gray')
plt.show()
###Output
_____no_output_____
###Markdown
Define and visualize the filters
###Code
import numpy as np
## TODO: Feel free to modify the numbers here, to try out another filter!
filter_vals = np.array([[-1, -1, 1, 1], [-1, -1, 1, 1], [-1, -1, 1, 1], [-1, -1, 1, 1]])
print('Filter shape: ', filter_vals.shape)
# Defining four different filters,
# all of which are linear combinations of the `filter_vals` defined above
# define four filters
filter_1 = filter_vals
filter_2 = -filter_1
filter_3 = filter_1.T
filter_4 = -filter_3
filters = np.array([filter_1, filter_2, filter_3, filter_4])
# For an example, print out the values of filter 1
print('Filter 1: \n', filter_1)
###Output
Filter 1:
[[-1 -1 1 1]
[-1 -1 1 1]
[-1 -1 1 1]
[-1 -1 1 1]]
###Markdown
Define convolutional and pooling layersYou've seen how to define a convolutional layer, next is a:* Pooling layerIn the next cell, we initialize a convolutional layer so that it contains all the created filters. Then add a maxpooling layer, [documented here](http://pytorch.org/docs/stable/_modules/torch/nn/modules/pooling.html), with a kernel size of (2x2) so you can see that the image resolution has been reduced after this step!A maxpooling layer reduces the x-y size of an input and only keeps the most *active* pixel values. Below is an example of a 2x2 pooling kernel, with a stride of 2, applied to a small patch of grayscale pixel values; reducing the x-y size of the patch by a factor of 2. Only the maximum pixel values in 2x2 remain in the new, pooled output.
###Code
import torch
import torch.nn as nn
import torch.nn.functional as F
# define a neural network with a convolutional layer with four filters
# AND a pooling layer of size (2, 2)
class Net(nn.Module):
def __init__(self, weight):
super(Net, self).__init__()
# initializes the weights of the convolutional layer to be the weights of the 4 defined filters
k_height, k_width = weight.shape[2:]
# assumes there are 4 grayscale filters
self.conv = nn.Conv2d(1, 4, kernel_size=(k_height, k_width), bias=False)
self.conv.weight = torch.nn.Parameter(weight)
# define a pooling layer
self.pool = nn.MaxPool2d(2, 2)
def forward(self, x):
# calculates the output of a convolutional layer
# pre- and post-activation
conv_x = self.conv(x)
activated_x = F.relu(conv_x)
# applies pooling layer
pooled_x = self.pool(activated_x)
# returns all layers
return conv_x, activated_x, pooled_x
# instantiate the model and set the weights
weight = torch.from_numpy(filters).unsqueeze(1).type(torch.FloatTensor)
model = Net(weight)
# print out the layer in the network
print(model)
###Output
Net(
(conv): Conv2d(1, 4, kernel_size=(4, 4), stride=(1, 1), bias=False)
(pool): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
)
###Markdown
Visualize the output of each filterFirst, we'll define a helper function, `viz_layer` that takes in a specific layer and number of filters (optional argument), and displays the output of that layer once an image has been passed through.
###Code
# helper function for visualizing the output of a given layer
# default number of filters is 4
def viz_layer(layer, n_filters= 4):
fig = plt.figure(figsize=(20, 20))
for i in range(n_filters):
ax = fig.add_subplot(1, n_filters, i+1)
# grab layer outputs
ax.imshow(np.squeeze(layer[0,i].data.numpy()), cmap='gray')
ax.set_title('Output %s' % str(i+1))
###Output
_____no_output_____
###Markdown
Let's look at the output of a convolutional layer after a ReLu activation function is applied. ReLu activationA ReLu function turns all negative pixel values in 0's (black). See the equation pictured below for input pixel values, `x`.
###Code
# plot original image
plt.imshow(gray_img, cmap='gray')
# visualize all filters
fig = plt.figure(figsize=(12, 6))
fig.subplots_adjust(left=0, right=1.5, bottom=0.8, top=1, hspace=0.05, wspace=0.05)
for i in range(4):
ax = fig.add_subplot(1, 4, i+1, xticks=[], yticks=[])
ax.imshow(filters[i], cmap='gray')
ax.set_title('Filter %s' % str(i+1))
# convert the image into an input Tensor
gray_img_tensor = torch.from_numpy(gray_img).unsqueeze(0).unsqueeze(1)
# get all the layers
conv_layer, activated_layer, pooled_layer = model(gray_img_tensor)
# visualize the output of the activated conv layer
viz_layer(activated_layer)
###Output
_____no_output_____
###Markdown
Visualize the output of the pooling layerThen, take a look at the output of a pooling layer. The pooling layer takes as input the feature maps pictured above and reduces the dimensionality of those maps, by some pooling factor, by constructing a new, smaller image of only the maximum (brightest) values in a given kernel area.Take a look at the values on the x, y axes to see how the image has changed size.
###Code
# visualize the output of the pooling layer
viz_layer(pooled_layer)
###Output
_____no_output_____
###Markdown
Maxpooling LayerIn this notebook, we add and visualize the output of a maxpooling layer in a CNN. A convolutional layer + activation function, followed by a pooling layer, and a linear layer (to create a desired output size) make up the basic layers of a CNN. Import the image
###Code
import cv2
import matplotlib.pyplot as plt
%matplotlib inline
# TODO: Feel free to try out your own images here by changing img_path
# to a file path to another image on your computer!
img_path = 'data/udacity_sdc.png'
# load color image
bgr_img = cv2.imread(img_path)
# convert to grayscale
gray_img = cv2.cvtColor(bgr_img, cv2.COLOR_BGR2GRAY)
# normalize, rescale entries to lie in [0,1]
gray_img = gray_img.astype("float32")/255
# plot image
plt.imshow(gray_img, cmap='gray')
plt.show()
###Output
_____no_output_____
###Markdown
Define and visualize the filters
###Code
import numpy as np
## TODO: Feel free to modify the numbers here, to try out another filter!
filter_vals = np.array([[-1, -1, 1, 1], [-1, -1, 1, 1], [-1, -1, 1, 1], [-1, -1, 1, 1]])
print('Filter shape: ', filter_vals.shape)
# Defining four different filters,
# all of which are linear combinations of the `filter_vals` defined above
# define four filters
filter_1 = filter_vals
filter_2 = -filter_1
filter_3 = filter_1.T
filter_4 = -filter_3
filters = np.array([filter_1, filter_2, filter_3, filter_4])
# For an example, print out the values of filter 1
print('Filter 1: \n', filter_1)
###Output
Filter 1:
[[-1 -1 1 1]
[-1 -1 1 1]
[-1 -1 1 1]
[-1 -1 1 1]]
###Markdown
Define convolutional and pooling layersYou've seen how to define a convolutional layer, next is a:* Pooling layerIn the next cell, we initialize a convolutional layer so that it contains all the created filters. Then add a maxpooling layer, [documented here](http://pytorch.org/docs/stable/_modules/torch/nn/modules/pooling.html), with a kernel size of (2x2) so you can see that the image resolution has been reduced after this step!A maxpooling layer reduces the x-y size of an input and only keeps the most *active* pixel values. Below is an example of a 2x2 pooling kernel, with a stride of 2, applied to a small patch of grayscale pixel values; reducing the x-y size of the patch by a factor of 2. Only the maximum pixel values in 2x2 remain in the new, pooled output.
###Code
import torch
import torch.nn as nn
import torch.nn.functional as F
# define a neural network with a convolutional layer with four filters
# AND a pooling layer of size (2, 2)
class Net(nn.Module):
def __init__(self, weight):
super(Net, self).__init__()
# initializes the weights of the convolutional layer to be the weights of the 4 defined filters
k_height, k_width = weight.shape[2:]
# assumes there are 4 grayscale filters
self.conv = nn.Conv2d(1, 4, kernel_size=(k_height, k_width), bias=False)
self.conv.weight = torch.nn.Parameter(weight)
# define a pooling layer
self.pool = nn.MaxPool2d(2, 2)
def forward(self, x):
# calculates the output of a convolutional layer
# pre- and post-activation
conv_x = self.conv(x)
activated_x = F.relu(conv_x)
# applies pooling layer
pooled_x = self.pool(activated_x)
# returns all layers
return conv_x, activated_x, pooled_x
# instantiate the model and set the weights
weight = torch.from_numpy(filters).unsqueeze(1).type(torch.FloatTensor)
model = Net(weight)
# print out the layer in the network
print(model)
###Output
Net(
(conv): Conv2d(1, 4, kernel_size=(4, 4), stride=(1, 1), bias=False)
(pool): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
)
###Markdown
Visualize the output of each filterFirst, we'll define a helper function, `viz_layer` that takes in a specific layer and number of filters (optional argument), and displays the output of that layer once an image has been passed through.
###Code
# helper function for visualizing the output of a given layer
# default number of filters is 4
def viz_layer(layer, n_filters = 4):
fig = plt.figure(figsize = (20, 20))
for i in range(n_filters):
ax = fig.add_subplot(1, n_filters, i+1)
# grab layer outputs
ax.imshow(np.squeeze(layer[0,i].data.numpy()), cmap='gray')
ax.set_title('Output %s' % str(i+1))
###Output
_____no_output_____
###Markdown
Let's look at the output of a convolutional layer after a ReLu activation function is applied. ReLu activationA ReLu function turns all negative pixel values in 0's (black). See the equation pictured below for input pixel values, `x`.
###Code
# plot original image
plt.imshow(gray_img, cmap='gray')
# visualize all filters
fig = plt.figure(figsize=(12, 6))
fig.subplots_adjust(left=0, right=1.5, bottom=0.8, top=1, hspace=0.05, wspace=0.05)
for i in range(4):
ax = fig.add_subplot(1, 4, i+1, xticks=[], yticks=[])
ax.imshow(filters[i], cmap='gray')
ax.set_title('Filter %s' % str(i+1))
# convert the image into an input Tensor
gray_img_tensor = torch.from_numpy(gray_img).unsqueeze(0).unsqueeze(1)
# get all the layers
conv_layer, activated_layer, pooled_layer = model(gray_img_tensor)
# visualize the output of the activated conv layer
viz_layer(activated_layer)
###Output
_____no_output_____
###Markdown
Visualize the output of the pooling layerThen, take a look at the output of a pooling layer. The pooling layer takes as input the feature maps pictured above and reduces the dimensionality of those maps, by some pooling factor, by constructing a new, smaller image of only the maximum (brightest) values in a given kernel area.Take a look at the values on the x, y axes to see how the image has changed size.
###Code
# visualize the output of the pooling layer
viz_layer(pooled_layer)
###Output
_____no_output_____
###Markdown
Maxpooling LayerIn this notebook, we add and visualize the output of a maxpooling layer in a CNN. A convolutional layer + activation function, followed by a pooling layer, and a linear layer (to create a desired output size) make up the basic layers of a CNN. Import the image
###Code
import cv2
import matplotlib.pyplot as plt
%matplotlib inline
# TODO: Feel free to try out your own images here by changing img_path
# to a file path to another image on your computer!
img_path = 'data/udacity_sdc.png'
# load color image
bgr_img = cv2.imread(img_path)
# convert to grayscale
gray_img = cv2.cvtColor(bgr_img, cv2.COLOR_BGR2GRAY)
# normalize, rescale entries to lie in [0,1]
gray_img = gray_img.astype("float32")/255
# plot image
plt.imshow(gray_img, cmap='gray')
plt.show()
###Output
_____no_output_____
###Markdown
Define and visualize the filters
###Code
import numpy as np
## TODO: Feel free to modify the numbers here, to try out another filter!
filter_vals = np.array([[-1, -1, 1, 1], [-1, -1, 1, 1], [-1, -1, 1, 1], [-1, -1, 1, 1]])
print('Filter shape: ', filter_vals.shape)
# Defining four different filters,
# all of which are linear combinations of the `filter_vals` defined above
# define four filters
filter_1 = filter_vals
filter_2 = -filter_1
filter_3 = filter_1.T
filter_4 = -filter_3
filters = np.array([filter_1, filter_2, filter_3, filter_4])
# For an example, print out the values of filter 1
print('Filter 1: \n', filter_1)
###Output
Filter 1:
[[-1 -1 1 1]
[-1 -1 1 1]
[-1 -1 1 1]
[-1 -1 1 1]]
###Markdown
Define convolutional and pooling layersYou've seen how to define a convolutional layer, next is a:* Pooling layerIn the next cell, we initialize a convolutional layer so that it contains all the created filters. Then add a maxpooling layer, [documented here](http://pytorch.org/docs/stable/_modules/torch/nn/modules/pooling.html), with a kernel size of (2x2) so you can see that the image resolution has been reduced after this step!A maxpooling layer reduces the x-y size of an input and only keeps the most *active* pixel values. Below is an example of a 2x2 pooling kernel, with a stride of 2, applied to a small patch of grayscale pixel values; reducing the x-y size of the patch by a factor of 2. Only the maximum pixel values in 2x2 remain in the new, pooled output.
###Code
import torch
import torch.nn as nn
import torch.nn.functional as F
# define a neural network with a convolutional layer with four filters
# AND a pooling layer of size (2, 2)
class Net(nn.Module):
def __init__(self, weight):
super(Net, self).__init__()
# initializes the weights of the convolutional layer to be the weights of the 4 defined filters
k_height, k_width = weight.shape[2:]
# assumes there are 4 grayscale filters
self.conv = nn.Conv2d(1, 4, kernel_size=(k_height, k_width), bias=False)
self.conv.weight = torch.nn.Parameter(weight)
# define a pooling layer
self.pool = nn.MaxPool2d(2, 2)
def forward(self, x):
# calculates the output of a convolutional layer
# pre- and post-activation
conv_x = self.conv(x)
activated_x = F.relu(conv_x)
# applies pooling layer
pooled_x = self.pool(activated_x)
# returns all layers
return conv_x, activated_x, pooled_x
# instantiate the model and set the weights
weight = torch.from_numpy(filters).unsqueeze(1).type(torch.FloatTensor)
model = Net(weight)
# print out the layer in the network
print(model)
###Output
Net(
(conv): Conv2d(1, 4, kernel_size=(4, 4), stride=(1, 1), bias=False)
(pool): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
)
###Markdown
Visualize the output of each filterFirst, we'll define a helper function, `viz_layer` that takes in a specific layer and number of filters (optional argument), and displays the output of that layer once an image has been passed through.
###Code
# helper function for visualizing the output of a given layer
# default number of filters is 4
def viz_layer(layer, n_filters= 4):
fig = plt.figure(figsize=(20, 20))
for i in range(n_filters):
ax = fig.add_subplot(1, n_filters, i+1)
# grab layer outputs
ax.imshow(np.squeeze(layer[0,i].data.numpy()), cmap='gray')
ax.set_title('Output %s' % str(i+1))
###Output
_____no_output_____
###Markdown
Let's look at the output of a convolutional layer after a ReLu activation function is applied. ReLu activationA ReLu function turns all negative pixel values in 0's (black). See the equation pictured below for input pixel values, `x`.
###Code
# plot original image
plt.imshow(gray_img, cmap='gray')
# visualize all filters
fig = plt.figure(figsize=(12, 6))
fig.subplots_adjust(left=0, right=1.5, bottom=0.8, top=1, hspace=0.05, wspace=0.05)
for i in range(4):
ax = fig.add_subplot(1, 4, i+1, xticks=[], yticks=[])
ax.imshow(filters[i], cmap='gray')
ax.set_title('Filter %s' % str(i+1))
# convert the image into an input Tensor
gray_img_tensor = torch.from_numpy(gray_img).unsqueeze(0).unsqueeze(1)
# get all the layers
conv_layer, activated_layer, pooled_layer = model(gray_img_tensor)
# visualize the output of the activated conv layer
viz_layer(activated_layer)
###Output
_____no_output_____
###Markdown
Visualize the output of the pooling layerThen, take a look at the output of a pooling layer. The pooling layer takes as input the feature maps pictured above and reduces the dimensionality of those maps, by some pooling factor, by constructing a new, smaller image of only the maximum (brightest) values in a given kernel area.Take a look at the values on the x, y axes to see how the image has changed size.
###Code
# visualize the output of the pooling layer
viz_layer(pooled_layer)
###Output
_____no_output_____
###Markdown
Maxpooling LayerIn this notebook, we add and visualize the output of a maxpooling layer in a CNN. A convolutional layer + activation function, followed by a pooling layer, and a linear layer (to create a desired output size) make up the basic layers of a CNN. Import the image
###Code
import cv2
import matplotlib.pyplot as plt
%matplotlib inline
# TODO: Feel free to try out your own images here by changing img_path
# to a file path to another image on your computer!
img_path = 'data/udacity_sdc.png'
# load color image
bgr_img = cv2.imread(img_path)
# convert to grayscale
gray_img = cv2.cvtColor(bgr_img, cv2.COLOR_BGR2GRAY)
# normalize, rescale entries to lie in [0,1]
gray_img = gray_img.astype("float32")/255
# plot image
plt.imshow(gray_img, cmap='gray')
plt.show()
###Output
Bad key axes.color_cycle in file /Users/mohamedabdelbary/.matplotlib/matplotlibrc, line 240 ('axes.color_cycle : 348ABD, A60628, 7A68A6, 467821,D55E00, CC79A7, 56B4E9, 009E73, F0E442, 0072B2 # color cycle for plot lines')
You probably need to get an updated matplotlibrc file from
https://github.com/matplotlib/matplotlib/blob/v3.3.3/matplotlibrc.template
or from the matplotlib source distribution
###Markdown
Define and visualize the filters
###Code
import numpy as np
## TODO: Feel free to modify the numbers here, to try out another filter!
filter_vals = np.array([[-1, -1, 1, 1], [-1, -1, 1, 1], [-1, -1, 1, 1], [-1, -1, 1, 1]])
print('Filter shape: ', filter_vals.shape)
# Defining four different filters,
# all of which are linear combinations of the `filter_vals` defined above
# define four filters
filter_1 = filter_vals
filter_2 = -filter_1
filter_3 = filter_1.T
filter_4 = -filter_3
filters = np.array([filter_1, filter_2, filter_3, filter_4])
# For an example, print out the values of filter 1
print('Filter 1: \n', filter_1)
###Output
Filter 1:
[[-1 -1 1 1]
[-1 -1 1 1]
[-1 -1 1 1]
[-1 -1 1 1]]
###Markdown
Define convolutional and pooling layersYou've seen how to define a convolutional layer, next is a:* Pooling layerIn the next cell, we initialize a convolutional layer so that it contains all the created filters. Then add a maxpooling layer, [documented here](http://pytorch.org/docs/stable/_modules/torch/nn/modules/pooling.html), with a kernel size of (2x2) so you can see that the image resolution has been reduced after this step!A maxpooling layer reduces the x-y size of an input and only keeps the most *active* pixel values. Below is an example of a 2x2 pooling kernel, with a stride of 2, applied to a small patch of grayscale pixel values; reducing the x-y size of the patch by a factor of 2. Only the maximum pixel values in 2x2 remain in the new, pooled output.
###Code
import torch
import torch.nn as nn
import torch.nn.functional as F
# define a neural network with a convolutional layer with four filters
# AND a pooling layer of size (2, 2)
class Net(nn.Module):
def __init__(self, weight):
super(Net, self).__init__()
# initializes the weights of the convolutional layer to be the weights of the 4 defined filters
k_height, k_width = weight.shape[2:]
# assumes there are 4 grayscale filters
self.conv = nn.Conv2d(1, 4, kernel_size=(k_height, k_width), bias=False)
self.conv.weight = torch.nn.Parameter(weight)
# define a pooling layer
self.pool = nn.MaxPool2d(2, 2)
def forward(self, x):
# calculates the output of a convolutional layer
# pre- and post-activation
conv_x = self.conv(x)
activated_x = F.relu(conv_x)
# applies pooling layer
pooled_x = self.pool(activated_x)
# returns all layers
return conv_x, activated_x, pooled_x
# instantiate the model and set the weights
weight = torch.from_numpy(filters).unsqueeze(1).type(torch.FloatTensor)
model = Net(weight)
# print out the layer in the network
print(model)
###Output
Net(
(conv): Conv2d(1, 4, kernel_size=(4, 4), stride=(1, 1), bias=False)
(pool): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
)
###Markdown
Visualize the output of each filterFirst, we'll define a helper function, `viz_layer` that takes in a specific layer and number of filters (optional argument), and displays the output of that layer once an image has been passed through.
###Code
# helper function for visualizing the output of a given layer
# default number of filters is 4
def viz_layer(layer, n_filters= 4):
fig = plt.figure(figsize=(20, 20))
for i in range(n_filters):
ax = fig.add_subplot(1, n_filters, i+1)
# grab layer outputs
ax.imshow(np.squeeze(layer[0,i].data.numpy()), cmap='gray')
ax.set_title('Output %s' % str(i+1))
###Output
_____no_output_____
###Markdown
Let's look at the output of a convolutional layer after a ReLu activation function is applied. ReLu activationA ReLu function turns all negative pixel values in 0's (black). See the equation pictured below for input pixel values, `x`.
###Code
# plot original image
plt.imshow(gray_img, cmap='gray')
# visualize all filters
fig = plt.figure(figsize=(12, 6))
fig.subplots_adjust(left=0, right=1.5, bottom=0.8, top=1, hspace=0.05, wspace=0.05)
for i in range(4):
ax = fig.add_subplot(1, 4, i+1, xticks=[], yticks=[])
ax.imshow(filters[i], cmap='gray')
ax.set_title('Filter %s' % str(i+1))
# convert the image into an input Tensor
gray_img_tensor = torch.from_numpy(gray_img).unsqueeze(0).unsqueeze(1)
# get all the layers
conv_layer, activated_layer, pooled_layer = model(gray_img_tensor)
# visualize the output of the activated conv layer
viz_layer(activated_layer)
###Output
_____no_output_____
###Markdown
Visualize the output of the pooling layerThen, take a look at the output of a pooling layer. The pooling layer takes as input the feature maps pictured above and reduces the dimensionality of those maps, by some pooling factor, by constructing a new, smaller image of only the maximum (brightest) values in a given kernel area.Take a look at the values on the x, y axes to see how the image has changed size.
###Code
# visualize the output of the pooling layer
viz_layer(pooled_layer)
###Output
_____no_output_____
###Markdown
Maxpooling LayerIn this notebook, we add and visualize the output of a maxpooling layer in a CNN. A convolutional layer + activation function, followed by a pooling layer, and a linear layer (to create a desired output size) make up the basic layers of a CNN. Import the image
###Code
import cv2
import matplotlib.pyplot as plt
%matplotlib inline
# TODO: Feel free to try out your own images here by changing img_path
# to a file path to another image on your computer!
img_path = 'data/udacity_sdc.png'
# load color image
bgr_img = cv2.imread(img_path)
# convert to grayscale
gray_img = cv2.cvtColor(bgr_img, cv2.COLOR_BGR2GRAY)
# normalize, rescale entries to lie in [0,1]
gray_img = gray_img.astype("float32")/255
# plot image
plt.imshow(gray_img, cmap='gray')
plt.show()
###Output
_____no_output_____
###Markdown
Define and visualize the filters
###Code
import numpy as np
## TODO: Feel free to modify the numbers here, to try out another filter!
filter_vals = np.array([[-1, -1, 1, 1], [-1, -1, 1, 1], [-1, -1, 1, 1], [-1, -1, 1, 1]])
print('Filter shape: ', filter_vals.shape)
# Defining four different filters,
# all of which are linear combinations of the `filter_vals` defined above
# define four filters
filter_1 = filter_vals
filter_2 = -filter_1
filter_3 = filter_1.T
filter_4 = -filter_3
filters = np.array([filter_1, filter_2, filter_3, filter_4])
# For an example, print out the values of filter 1
print('Filter 1: \n', filter_1)
###Output
_____no_output_____
###Markdown
Define convolutional and pooling layersYou've seen how to define a convolutional layer, next is a:* Pooling layerIn the next cell, we initialize a convolutional layer so that it contains all the created filters. Then add a maxpooling layer, [documented here](http://pytorch.org/docs/stable/_modules/torch/nn/modules/pooling.html), with a kernel size of (2x2) so you can see that the image resolution has been reduced after this step!A maxpooling layer reduces the x-y size of an input and only keeps the most *active* pixel values. Below is an example of a 2x2 pooling kernel, with a stride of 2, applied to a small patch of grayscale pixel values; reducing the x-y size of the patch by a factor of 2. Only the maximum pixel values in 2x2 remain in the new, pooled output.
###Code
import torch
import torch.nn as nn
import torch.nn.functional as F
# define a neural network with a convolutional layer with four filters
# AND a pooling layer of size (2, 2)
class Net(nn.Module):
def __init__(self, weight):
super(Net, self).__init__()
# initializes the weights of the convolutional layer to be the weights of the 4 defined filters
k_height, k_width = weight.shape[2:]
# assumes there are 4 grayscale filters
self.conv = nn.Conv2d(1, 4, kernel_size=(k_height, k_width), bias=False)
self.conv.weight = torch.nn.Parameter(weight)
# define a pooling layer
self.pool = nn.MaxPool2d(2, 2)
def forward(self, x):
# calculates the output of a convolutional layer
# pre- and post-activation
conv_x = self.conv(x)
activated_x = F.relu(conv_x)
# applies pooling layer
pooled_x = self.pool(activated_x)
# returns all layers
return conv_x, activated_x, pooled_x
# instantiate the model and set the weights
weight = torch.from_numpy(filters).unsqueeze(1).type(torch.FloatTensor)
model = Net(weight)
# print out the layer in the network
print(model)
###Output
_____no_output_____
###Markdown
Visualize the output of each filterFirst, we'll define a helper function, `viz_layer` that takes in a specific layer and number of filters (optional argument), and displays the output of that layer once an image has been passed through.
###Code
# helper function for visualizing the output of a given layer
# default number of filters is 4
def viz_layer(layer, n_filters= 4):
fig = plt.figure(figsize=(20, 20))
for i in range(n_filters):
ax = fig.add_subplot(1, n_filters, i+1)
# grab layer outputs
ax.imshow(np.squeeze(layer[0,i].data.numpy()), cmap='gray')
ax.set_title('Output %s' % str(i+1))
###Output
_____no_output_____
###Markdown
Let's look at the output of a convolutional layer after a ReLu activation function is applied. ReLu activationA ReLu function turns all negative pixel values in 0's (black). See the equation pictured below for input pixel values, `x`.
###Code
# plot original image
plt.imshow(gray_img, cmap='gray')
# visualize all filters
fig = plt.figure(figsize=(12, 6))
fig.subplots_adjust(left=0, right=1.5, bottom=0.8, top=1, hspace=0.05, wspace=0.05)
for i in range(4):
ax = fig.add_subplot(1, 4, i+1, xticks=[], yticks=[])
ax.imshow(filters[i], cmap='gray')
ax.set_title('Filter %s' % str(i+1))
# convert the image into an input Tensor
gray_img_tensor = torch.from_numpy(gray_img).unsqueeze(0).unsqueeze(1)
# get all the layers
conv_layer, activated_layer, pooled_layer = model(gray_img_tensor)
# visualize the output of the activated conv layer
viz_layer(activated_layer)
###Output
_____no_output_____
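###Markdown
The effect of ReLU on its own is easy to see on a toy tensor (a standalone sketch).
###Code
# sketch: ReLU zeroes out negatives and passes positives through unchanged
t = torch.tensor([-2.0, -0.5, 0.0, 0.5, 2.0])
print(F.relu(t))   # tensor([0.0000, 0.0000, 0.0000, 0.5000, 2.0000])
###Output
_____no_output_____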
###Markdown
Visualize the output of the pooling layerThen, take a look at the output of a pooling layer. The pooling layer takes as input the feature maps pictured above and reduces the dimensionality of those maps, by some pooling factor, by constructing a new, smaller image of only the maximum (brightest) values in a given kernel area.Take a look at the values on the x, y axes to see how the image has changed size.
###Code
# visualize the output of the pooling layer
viz_layer(pooled_layer)
###Output
_____no_output_____
###Markdown
Maxpooling LayerIn this notebook, we add and visualize the output of a maxpooling layer in a CNN. A convolutional layer + activation function, followed by a pooling layer, and a linear layer (to create a desired output size) make up the basic layers of a CNN. Import the image
###Code
import cv2
import matplotlib.pyplot as plt
%matplotlib inline
# TODO: Feel free to try out your own images here by changing img_path
# to a file path to another image on your computer!
img_path = 'data/udacity_sdc.png'
# load color image
bgr_img = cv2.imread(img_path)
# convert to grayscale
gray_img = cv2.cvtColor(bgr_img, cv2.COLOR_BGR2GRAY)
# normalize, rescale entries to lie in [0,1]
gray_img = gray_img.astype("float32")/255
# plot image
plt.imshow(gray_img, cmap='gray')
plt.show()
###Output
_____no_output_____
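###Markdown
As an aside (a sketch using a synthetic array, since the image itself isn't reproduced here), the normalization step maps 8-bit pixel values from [0, 255] into [0, 1].
###Code
import numpy as np
# sketch: the same normalization applied to a synthetic uint8 row of pixels
fake_pixels = np.array([[0, 128, 255]], dtype=np.uint8)
print(fake_pixels.astype("float32") / 255)   # [[0.        0.5019608 1.       ]]
###Output
_____no_output_____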
###Markdown
Define and visualize the filters
###Code
import numpy as np
## TODO: Feel free to modify the numbers here, to try out another filter!
filter_vals = np.array([[-1, -1, 1, 1], [-1, -1, 1, 1], [-1, -1, 1, 1], [-1, -1, 1, 1]])
print('Filter shape: ', filter_vals.shape)
# Defining four different filters,
# all of which are linear combinations of the `filter_vals` defined above
# define four filters
filter_1 = filter_vals
filter_2 = -filter_1
filter_3 = filter_1.T
filter_4 = -filter_3
filters = np.array([filter_1, filter_2, filter_3, filter_4])
# For an example, print out the values of filter 1
print('Filter 1: \n', filter_1)
###Output
Filter 1:
[[-1 -1 1 1]
[-1 -1 1 1]
[-1 -1 1 1]
[-1 -1 1 1]]
###Markdown
Define convolutional and pooling layersYou've seen how to define a convolutional layer, next is a:* Pooling layerIn the next cell, we initialize a convolutional layer so that it contains all the created filters. Then add a maxpooling layer, [documented here](http://pytorch.org/docs/stable/_modules/torch/nn/modules/pooling.html), with a kernel size of (2x2) so you can see that the image resolution has been reduced after this step!A maxpooling layer reduces the x-y size of an input and only keeps the most *active* pixel values. Below is an example of a 2x2 pooling kernel, with a stride of 2, applied to a small patch of grayscale pixel values; reducing the x-y size of the patch by a factor of 2. Only the maximum pixel values in 2x2 remain in the new, pooled output.
###Code
import torch
import torch.nn as nn
import torch.nn.functional as F
# define a neural network with a convolutional layer with four filters
# AND a pooling layer of size (2, 2)
class Net(nn.Module):
def __init__(self, weight):
super(Net, self).__init__()
# initializes the weights of the convolutional layer to be the weights of the 4 defined filters
k_height, k_width = weight.shape[2:]
# assumes there are 4 grayscale filters
self.conv = nn.Conv2d(1, 4, kernel_size=(k_height, k_width), bias=False)
self.conv.weight = torch.nn.Parameter(weight)
# define a pooling layer
self.pool = nn.MaxPool2d(2, 2)
def forward(self, x):
# calculates the output of a convolutional layer
# pre- and post-activation
conv_x = self.conv(x)
activated_x = F.relu(conv_x)
# applies pooling layer
pooled_x = self.pool(activated_x)
# returns all layers
return conv_x, activated_x, pooled_x
# instantiate the model and set the weights
weight = torch.from_numpy(filters).unsqueeze(1).type(torch.FloatTensor)
model = Net(weight)
# print out the layer in the network
print(model)
###Output
Net(
(conv): Conv2d(1, 4, kernel_size=(4, 4), stride=(1, 1), bias=False)
(pool): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
)
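###Markdown
For clarity (a minimal sketch), `unsqueeze(1)` inserts the single input-channel dimension that `nn.Conv2d(1, 4, ...)` expects in its weight tensor.
###Code
# sketch: weight shapes before and after adding the channel dimension
w = torch.from_numpy(filters)
print(w.shape)                # torch.Size([4, 4, 4])    -> (out_channels, kH, kW)
print(w.unsqueeze(1).shape)   # torch.Size([4, 1, 4, 4]) -> (out_channels, in_channels, kH, kW)
###Output
_____no_output_____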
###Markdown
Visualize the output of each filterFirst, we'll define a helper function, `viz_layer` that takes in a specific layer and number of filters (optional argument), and displays the output of that layer once an image has been passed through.
###Code
# helper function for visualizing the output of a given layer
# default number of filters is 4
def viz_layer(layer, n_filters= 4):
fig = plt.figure(figsize=(20, 20))
for i in range(n_filters):
ax = fig.add_subplot(1, n_filters, i+1)
# grab layer outputs
ax.imshow(np.squeeze(layer[0,i].data.numpy()), cmap='gray')
ax.set_title('Output %s' % str(i+1))
###Output
_____no_output_____
###Markdown
Let's look at the output of a convolutional layer after a ReLU activation function is applied. ReLU activationA ReLU function turns all negative pixel values into 0's (black). See the equation pictured below for input pixel values, `x`.
###Code
# plot original image
plt.imshow(gray_img, cmap='gray')
# visualize all filters
fig = plt.figure(figsize=(12, 6))
fig.subplots_adjust(left=0, right=1.5, bottom=0.8, top=1, hspace=0.05, wspace=0.05)
for i in range(4):
ax = fig.add_subplot(1, 4, i+1, xticks=[], yticks=[])
ax.imshow(filters[i], cmap='gray')
ax.set_title('Filter %s' % str(i+1))
# convert the image into an input Tensor
gray_img_tensor = torch.from_numpy(gray_img).unsqueeze(0).unsqueeze(1)
# get all the layers
conv_layer, activated_layer, pooled_layer = model(gray_img_tensor)
# visualize the output of the activated conv layer
viz_layer(activated_layer)
###Output
_____no_output_____
###Markdown
Visualize the output of the pooling layerThen, take a look at the output of a pooling layer. The pooling layer takes as input the feature maps pictured above and reduces the dimensionality of those maps, by some pooling factor, by constructing a new, smaller image of only the maximum (brightest) values in a given kernel area.Take a look at the values on the x, y axes to see how the image has changed size.
###Code
# visualize the output of the pooling layer
viz_layer(pooled_layer)
###Output
_____no_output_____
###Markdown
Maxpooling LayerIn this notebook, we add and visualize the output of a maxpooling layer in a CNN. A convolutional layer + activation function, followed by a pooling layer, and a linear layer (to create a desired output size) make up the basic layers of a CNN. Import the image
###Code
import cv2
import matplotlib.pyplot as plt
%matplotlib inline
# TODO: Feel free to try out your own images here by changing img_path
# to a file path to another image on your computer!
img_path = 'data/udacity_sdc.png'
# load color image
bgr_img = cv2.imread(img_path)
# convert to grayscale
gray_img = cv2.cvtColor(bgr_img, cv2.COLOR_BGR2GRAY)
# normalize, rescale entries to lie in [0,1]
gray_img = gray_img.astype("float32")/255
# plot image
plt.imshow(gray_img, cmap='gray')
plt.show()
###Output
_____no_output_____
###Markdown
Define and visualize the filters
###Code
import numpy as np
## TODO: Feel free to modify the numbers here, to try out another filter!
filter_vals = np.array([[-1, -1, 1, 1],
[-1, -1, 1, 1],
[-1, -1, 1, 1],
[-1, -1, 1, 1]])
print('Filter shape:', filter_vals.shape)
# Defining four different filters,
# all of which are linear combinations of the `filter_vals` defined above
# define four filters
filter_1 = filter_vals
filter_2 = -filter_1
filter_3 = filter_1.T
filter_4 = -filter_1.T
filters = np.array([filter_1, filter_2, filter_3, filter_4])
# For an example, print out the values of filter 1
print('Filter 1:\n', filter_1)
###Output
Filter 1:
[[-1 -1 1 1]
[-1 -1 1 1]
[-1 -1 1 1]
[-1 -1 1 1]]
###Markdown
Define convolutional and pooling layersYou've seen how to define a convolutional layer, next is a:* Pooling layerIn the next cell, we initialize a convolutional layer so that it contains all the created filters. Then add a maxpooling layer, [documented here](http://pytorch.org/docs/stable/_modules/torch/nn/modules/pooling.html), with a kernel size of (2x2) so you can see that the image resolution has been reduced after this step!A maxpooling layer reduces the x-y size of an input and only keeps the most *active* pixel values. Below is an example of a 2x2 pooling kernel, with a stride of 2, applied to a small patch of grayscale pixel values; reducing the x-y size of the patch by a factor of 2. Only the maximum pixel values in 2x2 remain in the new, pooled output.
###Code
import torch
import torch.nn as nn
import torch.nn.functional as F
# define a neural network with a convolutional layer with four filters
# AND a pooling layer of size (2, 2)
class Net(nn.Module):
def __init__(self, weight):
super(Net, self).__init__()
k_height, k_width = weight.shape[2:]
self.conv = nn.Conv2d(1, 4, kernel_size=(k_height,k_width), bias=False)
self.conv.weight = torch.nn.Parameter(weight)
# define a 2*2 pooling layer
self.pool = nn.MaxPool2d(2,2)
def forward(self, x):
conv_x = self.conv(x)
activated_x = F.relu(conv_x)
pooled_x = self.pool(activated_x)
return conv_x, activated_x, pooled_x
weight = torch.from_numpy(filters).unsqueeze(1).type(torch.FloatTensor)
model = Net(weight)
print(model)
###Output
Net(
(conv): Conv2d(1, 4, kernel_size=(4, 4), stride=(1, 1), bias=False)
(pool): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
)
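###Markdown
The printed layer sizes follow the usual output-size arithmetic (a worked sketch; the input height `H` below is an arbitrary assumption).
###Code
# sketch: spatial size after a 4x4 conv (stride 1, no padding) then a 2x2 max pool
H = 720                    # assumed input height, for illustration only
conv_h = (H - 4) // 1 + 1  # (H - kernel) / stride + 1
pool_h = conv_h // 2       # a 2x2 pool with stride 2 halves the size (floor division)
print(conv_h, pool_h)      # 717 358
###Output
_____no_output_____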
###Markdown
Visualize the output of each filterFirst, we'll define a helper function, `viz_layer` that takes in a specific layer and number of filters (optional argument), and displays the output of that layer once an image has been passed through.
###Code
# helper function for visualizing the output of a given layer
# default number of filters is 4
def viz_layer(layer, n_filters= 4):
fig = plt.figure(figsize=(20,20))
for i in range(n_filters):
ax = fig.add_subplot(1, n_filters, i+1)
# grab layer outputs
ax.imshow(np.squeeze(layer[0,i].data.numpy()), cmap='gray')
ax.set_title('Output %s' % str(i+1))
###Output
_____no_output_____
###Markdown
Let's look at the output of a convolutional layer after a ReLU activation function is applied. ReLU activationA ReLU function turns all negative pixel values into 0's (black). See the equation pictured below for input pixel values, `x`.
###Code
# plot original image
plt.imshow(gray_img, cmap='gray')
# visualize all filters
fig = plt.figure(figsize=(12, 6))
fig.subplots_adjust(left=0, right=1.5, bottom=0.8, top=1, hspace=0.05, wspace=0.05)
for i in range(4):
ax = fig.add_subplot(1, 4, i+1, xticks=[], yticks=[])
ax.imshow(filters[i], cmap='gray')
ax.set_title('Filter %s' % str(i+1))
# convert the image into an input Tensor
gray_img_tensor = torch.from_numpy(gray_img).unsqueeze(0).unsqueeze(1)
# get all the layers
conv_layer, activated_layer, pooled_layer = model(gray_img_tensor)
# visualize the output of the activated conv layer
viz_layer(activated_layer)
###Output
_____no_output_____
###Markdown
Visualize the output of the pooling layerThen, take a look at the output of a pooling layer. The pooling layer takes as input the feature maps pictured above and reduces the dimensionality of those maps, by some pooling factor, by constructing a new, smaller image of only the maximum (brightest) values in a given kernel area.Take a look at the values on the x, y axes to see how the image has changed size.
###Code
# visualize the output of the pooling layer
viz_layer(pooled_layer)
###Output
_____no_output_____
###Markdown
Maxpooling LayerIn this notebook, we add and visualize the output of a maxpooling layer in a CNN. A convolutional layer + activation function, followed by a pooling layer, and a linear layer (to create a desired output size) make up the basic layers of a CNN. Import the image
###Code
import cv2
import matplotlib.pyplot as plt
%matplotlib inline
# TODO: Feel free to try out your own images here by changing img_path
# to a file path to another image on your computer!
img_path = 'data/udacity_sdc.png'
# load color image
bgr_img = cv2.imread(img_path)
# convert to grayscale
gray_img = cv2.cvtColor(bgr_img, cv2.COLOR_BGR2GRAY)
# normalize, rescale entries to lie in [0,1]
gray_img = gray_img.astype("float32")/255
# plot image
plt.imshow(gray_img, cmap='gray')
plt.show()
###Output
_____no_output_____
###Markdown
Define and visualize the filters
###Code
import numpy as np
## TODO: Feel free to modify the numbers here, to try out another filter!
filter_vals = np.array([[-1, -1, 1, 1], [-1, -1, 1, 1], [-1, -1, 1, 1], [-1, -1, 1, 1]])
print('Filter shape: ', filter_vals.shape)
# Defining four different filters,
# all of which are linear combinations of the `filter_vals` defined above
# define four filters
filter_1 = filter_vals
filter_2 = -filter_1
filter_3 = filter_1.T
filter_4 = -filter_3
filters = np.array([filter_1, filter_2, filter_3, filter_4])
# For an example, print out the values of filter 1
print('Filter 1: \n', filter_1)
###Output
_____no_output_____
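###Markdown
Since `filter_3` is just the transpose of `filter_1` (a quick sketch below), it responds to horizontal rather than vertical edges.
###Code
# sketch: transposing the vertical-edge filter gives a horizontal-edge filter
print(filter_1.T)
# [[-1 -1 -1 -1]
#  [-1 -1 -1 -1]
#  [ 1  1  1  1]
#  [ 1  1  1  1]]
###Output
_____no_output_____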
###Markdown
Define convolutional and pooling layersYou've seen how to define a convolutional layer, next is a:* Pooling layerIn the next cell, we initialize a convolutional layer so that it contains all the created filters. Then add a maxpooling layer, [documented here](http://pytorch.org/docs/stable/_modules/torch/nn/modules/pooling.html), with a kernel size of (2x2) so you can see that the image resolution has been reduced after this step!A maxpooling layer reduces the x-y size of an input and only keeps the most *active* pixel values. Below is an example of a 2x2 pooling kernel, with a stride of 2, applied to a small patch of grayscale pixel values; reducing the x-y size of the patch by a factor of 2. Only the maximum pixel values in 2x2 remain in the new, pooled output.
###Code
import torch
import torch.nn as nn
import torch.nn.functional as F
# define a neural network with a convolutional layer with four filters
# AND a pooling layer of size (2, 2)
class Net(nn.Module):
def __init__(self, weight):
super(Net, self).__init__()
# initializes the weights of the convolutional layer to be the weights of the 4 defined filters
k_height, k_width = weight.shape[2:]
# assumes there are 4 grayscale filters
self.conv = nn.Conv2d(1, 4, kernel_size=(k_height, k_width), bias=False)
self.conv.weight = torch.nn.Parameter(weight)
# define a pooling layer
self.pool = nn.MaxPool2d(2, 2)
def forward(self, x):
# calculates the output of a convolutional layer
# pre- and post-activation
conv_x = self.conv(x)
activated_x = F.relu(conv_x)
# applies pooling layer
pooled_x = self.pool(activated_x)
# returns all layers
return conv_x, activated_x, pooled_x
# instantiate the model and set the weights
weight = torch.from_numpy(filters).unsqueeze(1).type(torch.FloatTensor)
model = Net(weight)
# print out the layer in the network
print(model)
###Output
_____no_output_____
###Markdown
Visualize the output of each filterFirst, we'll define a helper function, `viz_layer` that takes in a specific layer and number of filters (optional argument), and displays the output of that layer once an image has been passed through.
###Code
# helper function for visualizing the output of a given layer
# default number of filters is 4
def viz_layer(layer, n_filters= 4):
fig = plt.figure(figsize=(20, 20))
for i in range(n_filters):
ax = fig.add_subplot(1, n_filters, i+1)
# grab layer outputs
ax.imshow(np.squeeze(layer[0,i].data.numpy()), cmap='gray')
ax.set_title('Output %s' % str(i+1))
###Output
_____no_output_____
###Markdown
Let's look at the output of a convolutional layer after a ReLU activation function is applied. ReLU activationA ReLU function turns all negative pixel values into 0's (black). See the equation pictured below for input pixel values, `x`.
###Code
# plot original image
plt.imshow(gray_img, cmap='gray')
# visualize all filters
fig = plt.figure(figsize=(12, 6))
fig.subplots_adjust(left=0, right=1.5, bottom=0.8, top=1, hspace=0.05, wspace=0.05)
for i in range(4):
ax = fig.add_subplot(1, 4, i+1, xticks=[], yticks=[])
ax.imshow(filters[i], cmap='gray')
ax.set_title('Filter %s' % str(i+1))
# convert the image into an input Tensor
gray_img_tensor = torch.from_numpy(gray_img).unsqueeze(0).unsqueeze(1)
# get all the layers
conv_layer, activated_layer, pooled_layer = model(gray_img_tensor)
# visualize the output of the activated conv layer
viz_layer(activated_layer)
###Output
_____no_output_____
###Markdown
Visualize the output of the pooling layerThen, take a look at the output of a pooling layer. The pooling layer takes as input the feature maps pictured above and reduces the dimensionality of those maps, by some pooling factor, by constructing a new, smaller image of only the maximum (brightest) values in a given kernel area.Take a look at the values on the x, y axes to see how the image has changed size.
###Code
# visualize the output of the pooling layer
viz_layer(pooled_layer)
###Output
_____no_output_____
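###Markdown
For reference (a sketch with a synthetic array), the two `unsqueeze(...)` calls used on `gray_img` turn a 2-D `(H, W)` image into the `(batch, channels, H, W)` layout that `nn.Conv2d` expects.
###Code
import numpy as np
# sketch: building a 4-D input tensor from a 2-D grayscale image
fake_gray = np.zeros((90, 160), dtype=np.float32)
t = torch.from_numpy(fake_gray).unsqueeze(0).unsqueeze(1)
print(t.shape)   # torch.Size([1, 1, 90, 160])
###Output
_____no_output_____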
###Markdown
Maxpooling LayerIn this notebook, we add and visualize the output of a maxpooling layer in a CNN. A convolutional layer + activation function, followed by a pooling layer, and a linear layer (to create a desired output size) make up the basic layers of a CNN. Import the image
###Code
import cv2
import matplotlib.pyplot as plt
%matplotlib inline
from IPython.core.interactiveshell import InteractiveShell
InteractiveShell.ast_node_interactivity = "all"
# TODO: Feel free to try out your own images here by changing img_path
# to a file path to another image on your computer!
img_path = 'data/udacity_sdc.png'
# load color image
bgr_img = cv2.imread(img_path)
# convert to grayscale
gray_img = cv2.cvtColor(bgr_img, cv2.COLOR_BGR2GRAY)
# normalize, rescale entries to lie in [0,1]
gray_img = gray_img.astype("float32")/255
# plot image
plt.imshow(gray_img, cmap='gray')
plt.show()
###Output
_____no_output_____
###Markdown
Define and visualize the filters
###Code
import numpy as np
## TODO: Feel free to modify the numbers here, to try out another filter!
filter_vals = np.array([[-1, -1, 1, 1], [-1, -1, 1, 1], [-1, -1, 1, 1], [-1, -1, 1, 1]])
print('Filter shape: ', filter_vals.shape)
filter_vals
# Defining four different filters,
# all of which are linear combinations of the `filter_vals` defined above
# define four filters
filter_1 = filter_vals
filter_2 = -filter_1
filter_3 = filter_1.T
filter_4 = -filter_3
filters = np.array([filter_1, filter_2, filter_3, filter_4])
# For an example, print out the values of filter 1
print('Filter 1: \n', filter_1)
filters
###Output
Filter 1:
[[-1 -1 1 1]
[-1 -1 1 1]
[-1 -1 1 1]
[-1 -1 1 1]]
###Markdown
Define convolutional and pooling layersYou've seen how to define a convolutional layer, next is a:* Pooling layerIn the next cell, we initialize a convolutional layer so that it contains all the created filters. Then add a maxpooling layer, [documented here](http://pytorch.org/docs/stable/_modules/torch/nn/modules/pooling.html), with a kernel size of (2x2) so you can see that the image resolution has been reduced after this step!A maxpooling layer reduces the x-y size of an input and only keeps the most *active* pixel values. Below is an example of a 2x2 pooling kernel, with a stride of 2, applied to a small patch of grayscale pixel values; reducing the x-y size of the patch by a factor of 2. Only the maximum pixel values in 2x2 remain in the new, pooled output.
###Code
import torch
import torch.nn as nn
import torch.nn.functional as F
# define a neural network with a convolutional layer with four filters
# AND a pooling layer of size (2, 2)
class Net(nn.Module):
def __init__(self, weight):
super(Net, self).__init__()
# initializes the weights of the convolutional layer to be the weights of the 4 defined filters
k_height, k_width = weight.shape[2:]
# assumes there are 4 grayscale filters
self.conv = nn.Conv2d(1, 4, kernel_size=(k_height, k_width), bias=False)
self.conv.weight = torch.nn.Parameter(weight)
# define a pooling layer
self.pool = nn.MaxPool2d(2, 2)
def forward(self, x):
# calculates the output of a convolutional layer
# pre- and post-activation
conv_x = self.conv(x)
activated_x = F.relu(conv_x)
# applies pooling layer
pooled_x = self.pool(activated_x)
# returns all layers
return conv_x, activated_x, pooled_x
# instantiate the model and set the weights
weight = torch.from_numpy(filters).unsqueeze(1).type(torch.FloatTensor)
model = Net(weight)
# print out the layer in the network
print(model)
###Output
Net(
(conv): Conv2d(1, 4, kernel_size=(4, 4), stride=(1, 1), bias=False)
(pool): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
)
###Markdown
Visualize the output of each filterFirst, we'll define a helper function, `viz_layer` that takes in a specific layer and number of filters (optional argument), and displays the output of that layer once an image has been passed through.
###Code
# helper function for visualizing the output of a given layer
# default number of filters is 4
def viz_layer(layer, n_filters= 4):
fig = plt.figure(figsize=(20, 20))
for i in range(n_filters):
ax = fig.add_subplot(1, n_filters, i+1)
# grab layer outputs
ax.imshow(np.squeeze(layer[0,i].data.numpy()), cmap='gray')
ax.set_title('Output %s' % str(i+1))
###Output
_____no_output_____
###Markdown
Let's look at the output of a convolutional layer after a ReLU activation function is applied. ReLU activationA ReLU function turns all negative pixel values into 0's (black). See the equation pictured below for input pixel values, `x`.
###Code
# plot original image
plt.imshow(gray_img, cmap='gray')
# visualize all filters
fig = plt.figure(figsize=(12, 6))
fig.subplots_adjust(left=0, right=1.5, bottom=0.8, top=1, hspace=0.05, wspace=0.05)
for i in range(4):
ax = fig.add_subplot(1, 4, i+1, xticks=[], yticks=[])
ax.imshow(filters[i], cmap='gray')
ax.set_title('Filter %s' % str(i+1))
# convert the image into an input Tensor
gray_img_tensor = torch.from_numpy(gray_img).unsqueeze(0).unsqueeze(1)
# get all the layers
conv_layer, activated_layer, pooled_layer = model(gray_img_tensor)
# visualize the output of the activated conv layer
viz_layer(activated_layer)
###Output
_____no_output_____
###Markdown
Visualize the output of the pooling layerThen, take a look at the output of a pooling layer. The pooling layer takes as input the feature maps pictured above and reduces the dimensionality of those maps, by some pooling factor, by constructing a new, smaller image of only the maximum (brightest) values in a given kernel area.Take a look at the values on the x, y axes to see how the image has changed size.
###Code
# visualize the output of the pooling layer
viz_layer(pooled_layer)
###Output
_____no_output_____
###Markdown
Maxpooling LayerIn this notebook, we add and visualize the output of a maxpooling layer in a CNN. A convolutional layer + activation function, followed by a pooling layer, and a linear layer (to create a desired output size) make up the basic layers of a CNN. Import the image
###Code
import cv2
import matplotlib.pyplot as plt
%matplotlib inline
# TODO: Feel free to try out your own images here by changing img_path
# to a file path to another image on your computer!
img_path = 'data/udacity_sdc.png'
# load color image
bgr_img = cv2.imread(img_path)
# convert to grayscale
gray_img = cv2.cvtColor(bgr_img, cv2.COLOR_BGR2GRAY)
# normalize, rescale entries to lie in [0,1]
gray_img = gray_img.astype("float32")/255
# plot image
plt.imshow(gray_img, cmap='gray')
plt.show()
###Output
_____no_output_____
###Markdown
Define and visualize the filters
###Code
import numpy as np
## TODO: Feel free to modify the numbers here, to try out another filter!
filter_vals = np.array([[-1, -1, 1, 1], [-1, -1, 1, 1], [-1, -1, 1, 1], [-1, -1, 1, 1]])
print('Filter shape: ', filter_vals.shape)
# Defining four different filters,
# all of which are linear combinations of the `filter_vals` defined above
# define four filters
filter_1 = filter_vals
filter_2 = -filter_1
filter_3 = filter_1.T
filter_4 = -filter_3
filters = np.array([filter_1, filter_2, filter_3, filter_4])
# For an example, print out the values of filter 1
print('Filter 1: \n', filter_1)
###Output
Filter 1:
[[-1 -1 1 1]
[-1 -1 1 1]
[-1 -1 1 1]
[-1 -1 1 1]]
###Markdown
Define convolutional and pooling layersYou've seen how to define a convolutional layer, next is a:* Pooling layerIn the next cell, we initialize a convolutional layer so that it contains all the created filters. Then add a maxpooling layer, [documented here](http://pytorch.org/docs/stable/_modules/torch/nn/modules/pooling.html), with a kernel size of (2x2) so you can see that the image resolution has been reduced after this step!A maxpooling layer reduces the x-y size of an input and only keeps the most *active* pixel values. Below is an example of a 2x2 pooling kernel, with a stride of 2, applied to a small patch of grayscale pixel values; reducing the x-y size of the patch by a factor of 2. Only the maximum pixel values in 2x2 remain in the new, pooled output.
###Code
import torch
import torch.nn as nn
import torch.nn.functional as F
# define a neural network with a convolutional layer with four filters
# AND a pooling layer of size (2, 2)
class Net(nn.Module):
def __init__(self, weight):
super(Net, self).__init__()
# initializes the weights of the convolutional layer to be the weights of the 4 defined filters
k_height, k_width = weight.shape[2:]
# assumes there are 4 grayscale filters
self.conv = nn.Conv2d(1, 4, kernel_size=(k_height, k_width), bias=False)
self.conv.weight = torch.nn.Parameter(weight)
# define a pooling layer
self.pool = nn.MaxPool2d(2, 2)
def forward(self, x):
# calculates the output of a convolutional layer
# pre- and post-activation
conv_x = self.conv(x)
activated_x = F.relu(conv_x)
# applies pooling layer
pooled_x = self.pool(activated_x)
# returns all layers
return conv_x, activated_x, pooled_x
# instantiate the model and set the weights
weight = torch.from_numpy(filters).unsqueeze(1).type(torch.FloatTensor)
model = Net(weight)
# print out the layer in the network
print(model)
###Output
Net(
(conv): Conv2d(1, 4, kernel_size=(4, 4), stride=(1, 1), bias=False)
(pool): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
)
###Markdown
Visualize the output of each filterFirst, we'll define a helper function, `viz_layer` that takes in a specific layer and number of filters (optional argument), and displays the output of that layer once an image has been passed through.
###Code
# helper function for visualizing the output of a given layer
# default number of filters is 4
def viz_layer(layer, n_filters= 4):
fig = plt.figure(figsize=(20, 20))
for i in range(n_filters):
ax = fig.add_subplot(1, n_filters, i+1)
# grab layer outputs
ax.imshow(np.squeeze(layer[0,i].data.numpy()), cmap='gray')
ax.set_title('Output %s' % str(i+1))
###Output
_____no_output_____
###Markdown
Let's look at the output of a convolutional layer after a ReLU activation function is applied. ReLU activationA ReLU function turns all negative pixel values into 0's (black). See the equation pictured below for input pixel values, `x`.
###Code
# plot original image
plt.imshow(gray_img, cmap='gray')
# visualize all filters
fig = plt.figure(figsize=(12, 6))
fig.subplots_adjust(left=0, right=1.5, bottom=0.8, top=1, hspace=0.05, wspace=0.05)
for i in range(4):
ax = fig.add_subplot(1, 4, i+1, xticks=[], yticks=[])
ax.imshow(filters[i], cmap='gray')
ax.set_title('Filter %s' % str(i+1))
# convert the image into an input Tensor
gray_img_tensor = torch.from_numpy(gray_img).unsqueeze(0).unsqueeze(1)
# get all the layers
conv_layer, activated_layer, pooled_layer = model(gray_img_tensor)
# visualize the output of the activated conv layer
viz_layer(activated_layer)
###Output
_____no_output_____
###Markdown
Visualize the output of the pooling layerThen, take a look at the output of a pooling layer. The pooling layer takes as input the feature maps pictured above and reduces the dimensionality of those maps, by some pooling factor, by constructing a new, smaller image of only the maximum (brightest) values in a given kernel area.Take a look at the values on the x, y axes to see how the image has changed size.
###Code
# visualize the output of the pooling layer
viz_layer(pooled_layer)
###Output
_____no_output_____
###Markdown
Maxpooling LayerIn this notebook, we add and visualize the output of a maxpooling layer in a CNN. A convolutional layer + activation function, followed by a pooling layer, and a linear layer (to create a desired output size) make up the basic layers of a CNN. Import the image
###Code
import cv2
import matplotlib.pyplot as plt
%matplotlib inline
# TODO: Feel free to try out your own images here by changing img_path
# to a file path to another image on your computer!
img_path = 'data/udacity_sdc.png'
# load color image
bgr_img = cv2.imread(img_path)
# convert to grayscale
gray_img = cv2.cvtColor(bgr_img, cv2.COLOR_BGR2GRAY)
# normalize, rescale entries to lie in [0,1]
gray_img = gray_img.astype("float32")/255
# plot image
plt.imshow(gray_img, cmap='gray')
plt.show()
###Output
_____no_output_____
###Markdown
Define and visualize the filters
###Code
import numpy as np
## TODO: Feel free to modify the numbers here, to try out another filter!
filter_vals = np.array([[-1, -1, 1, 1], [-1, -1, 1, 1], [-1, -1, 1, 1], [-1, -1, 1, 1]])
print('Filter shape: ', filter_vals.shape)
# Defining four different filters,
# all of which are linear combinations of the `filter_vals` defined above
# define four filters
filter_1 = filter_vals
filter_2 = -filter_1
filter_3 = filter_1.T
filter_4 = -filter_3
filters = np.array([filter_1, filter_2, filter_3, filter_4])
# For an example, print out the values of filter 1
print('Filter 1: \n', filter_1)
###Output
Filter 1:
[[-1 -1 1 1]
[-1 -1 1 1]
[-1 -1 1 1]
[-1 -1 1 1]]
###Markdown
Define convolutional and pooling layersYou've seen how to define a convolutional layer, next is a:* Pooling layerIn the next cell, we initialize a convolutional layer so that it contains all the created filters. Then add a maxpooling layer, [documented here](http://pytorch.org/docs/stable/_modules/torch/nn/modules/pooling.html), with a kernel size of (2x2) so you can see that the image resolution has been reduced after this step!A maxpooling layer reduces the x-y size of an input and only keeps the most *active* pixel values. Below is an example of a 2x2 pooling kernel, with a stride of 2, applied to a small patch of grayscale pixel values; reducing the x-y size of the patch by a factor of 2. Only the maximum pixel values in 2x2 remain in the new, pooled output.
###Code
import torch
import torch.nn as nn
import torch.nn.functional as F
# define a neural network with a convolutional layer with four filters
# AND a pooling layer of size (2, 2)
class Net(nn.Module):
def __init__(self, weight):
super(Net, self).__init__()
# initializes the weights of the convolutional layer to be the weights of the 4 defined filters
k_height, k_width = weight.shape[2:]
# assumes there are 4 grayscale filters
self.conv = nn.Conv2d(1, 4, kernel_size=(k_height, k_width), bias=False)
self.conv.weight = torch.nn.Parameter(weight)
# define a pooling layer
self.pool = nn.MaxPool2d(2, 2)
def forward(self, x):
# calculates the output of a convolutional layer
# pre- and post-activation
conv_x = self.conv(x)
activated_x = F.relu(conv_x)
# applies pooling layer
pooled_x = self.pool(activated_x)
# returns all layers
return conv_x, activated_x, pooled_x
# instantiate the model and set the weights
weight = torch.from_numpy(filters).unsqueeze(1).type(torch.FloatTensor)
model = Net(weight)
# print out the layer in the network
print(model)
###Output
Net(
(conv): Conv2d(1, 4, kernel_size=(4, 4), stride=(1, 1), bias=False)
(pool): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
)
###Markdown
Visualize the output of each filterFirst, we'll define a helper function, `viz_layer` that takes in a specific layer and number of filters (optional argument), and displays the output of that layer once an image has been passed through.
###Code
# helper function for visualizing the output of a given layer
# default number of filters is 4
def viz_layer(layer, n_filters= 4):
fig = plt.figure(figsize=(20, 20))
for i in range(n_filters):
ax = fig.add_subplot(1, n_filters, i+1)
# grab layer outputs
ax.imshow(np.squeeze(layer[0,i].data.numpy()), cmap='gray')
ax.set_title('Output %s' % str(i+1))
###Output
_____no_output_____
###Markdown
Let's look at the output of a convolutional layer after a ReLU activation function is applied. ReLU activationA ReLU function turns all negative pixel values into 0's (black). See the equation pictured below for input pixel values, `x`.
###Code
# plot original image
plt.imshow(gray_img, cmap='gray')
# visualize all filters
fig = plt.figure(figsize=(12, 6))
fig.subplots_adjust(left=0, right=1.5, bottom=0.8, top=1, hspace=0.05, wspace=0.05)
for i in range(4):
ax = fig.add_subplot(1, 4, i+1, xticks=[], yticks=[])
ax.imshow(filters[i], cmap='gray')
ax.set_title('Filter %s' % str(i+1))
# convert the image into an input Tensor
gray_img_tensor = torch.from_numpy(gray_img).unsqueeze(0).unsqueeze(1)
# get all the layers
conv_layer, activated_layer, pooled_layer = model(gray_img_tensor)
# visualize the output of the activated conv layer
viz_layer(activated_layer)
###Output
_____no_output_____
###Markdown
Visualize the output of the pooling layerThen, take a look at the output of a pooling layer. The pooling layer takes as input the feature maps pictured above and reduces the dimensionality of those maps, by some pooling factor, by constructing a new, smaller image of only the maximum (brightest) values in a given kernel area.Take a look at the values on the x, y axes to see how the image has changed size.
###Code
# visualize the output of the pooling layer
viz_layer(pooled_layer)
###Output
_____no_output_____
###Markdown
Maxpooling LayerIn this notebook, we add and visualize the output of a maxpooling layer in a CNN. A convolutional layer + activation function, followed by a pooling layer, and a linear layer (to create a desired output size) make up the basic layers of a CNN. Import the image
###Code
import cv2
import matplotlib.pyplot as plt
%matplotlib inline
# TODO: Feel free to try out your own images here by changing img_path
# to a file path to another image on your computer!
img_path = 'data/udacity_sdc.png'
# load color image
bgr_img = cv2.imread(img_path)
# convert to grayscale
gray_img = cv2.cvtColor(bgr_img, cv2.COLOR_BGR2GRAY)
# normalize, rescale entries to lie in [0,1]
gray_img = gray_img.astype("float32")/255
# plot image
plt.imshow(gray_img, cmap='gray')
plt.show()
###Output
_____no_output_____
###Markdown
Define and visualize the filters
###Code
import numpy as np
## TODO: Feel free to modify the numbers here, to try out another filter!
filter_vals = np.array([[-1, -1, 1, 1], [-1, -1, 1, 1], [-1, -1, 1, 1], [-1, -1, 1, 1]])
print('Filter shape: ', filter_vals.shape)
# Defining four different filters,
# all of which are linear combinations of the `filter_vals` defined above
# define four filters
filter_1 = filter_vals
filter_2 = -filter_1
filter_3 = filter_1.T
filter_4 = -filter_3
filters = np.array([filter_1, filter_2, filter_3, filter_4])
# For an example, print out the values of filter 1
print('Filter 1: \n', filter_1)
###Output
Filter 1:
[[-1 -1 1 1]
[-1 -1 1 1]
[-1 -1 1 1]
[-1 -1 1 1]]
###Markdown
Define convolutional and pooling layersYou've seen how to define a convolutional layer, next is a:* Pooling layerIn the next cell, we initialize a convolutional layer so that it contains all the created filters. Then add a maxpooling layer, [documented here](http://pytorch.org/docs/stable/_modules/torch/nn/modules/pooling.html), with a kernel size of (2x2) so you can see that the image resolution has been reduced after this step!A maxpooling layer reduces the x-y size of an input and only keeps the most *active* pixel values. Below is an example of a 2x2 pooling kernel, with a stride of 2, applied to a small patch of grayscale pixel values; reducing the x-y size of the patch by a factor of 2. Only the maximum pixel values in 2x2 remain in the new, pooled output.
###Code
import torch
import torch.nn as nn
import torch.nn.functional as F
# define a neural network with a convolutional layer with four filters
# AND a pooling layer of size (2, 2)
class Net(nn.Module):
def __init__(self, weight):
super(Net, self).__init__()
# initializes the weights of the convolutional layer to be the weights of the 4 defined filters
k_height, k_width = weight.shape[2:]
# assumes there are 4 grayscale filters
self.conv = nn.Conv2d(1, 4, kernel_size=(k_height, k_width), bias=False)
self.conv.weight = torch.nn.Parameter(weight)
# define a pooling layer
#self.pool = nn.AvgPool2d(2,2)
self.pool = nn.MaxPool2d(2, 2)
def forward(self, x):
# calculates the output of a convolutional layer
# pre- and post-activation
conv_x = self.conv(x)
activated_x = F.relu(conv_x)
# applies pooling layer
pooled_x = self.pool(activated_x)
# returns all layers
return conv_x, activated_x, pooled_x
# instantiate the model and set the weights
weight = torch.from_numpy(filters).unsqueeze(1).type(torch.FloatTensor)
model = Net(weight)
# print out the layer in the network
print(model)
###Output
Net(
(conv): Conv2d(1, 4, kernel_size=(4, 4), stride=(1, 1), bias=False)
  (pool): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
)
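###Markdown
Since this variant experimented with `nn.AvgPool2d` (left commented out in the cell above), a side-by-side sketch shows how max and average pooling differ on the same made-up patch.
###Code
# sketch: max pooling keeps the single brightest value, average pooling blends them
patch = torch.tensor([[[[1., 9.],
                        [5., 6.]]]])   # shape (1, 1, 2, 2)
print(nn.MaxPool2d(2, 2)(patch))       # tensor([[[[9.]]]])
print(nn.AvgPool2d(2, 2)(patch))       # tensor([[[[5.2500]]]])
###Output
_____no_output_____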
###Markdown
Visualize the output of each filterFirst, we'll define a helper function, `viz_layer` that takes in a specific layer and number of filters (optional argument), and displays the output of that layer once an image has been passed through.
###Code
# helper function for visualizing the output of a given layer
# default number of filters is 4
def viz_layer(layer, n_filters= 4):
fig = plt.figure(figsize=(20, 20))
for i in range(n_filters):
ax = fig.add_subplot(1, n_filters, i+1)
# grab layer outputs
ax.imshow(np.squeeze(layer[0,i].data.numpy()), cmap='gray')
ax.set_title('Output %s' % str(i+1))
###Output
_____no_output_____
###Markdown
Let's look at the output of a convolutional layer after a ReLU activation function is applied. ReLU activationA ReLU function turns all negative pixel values into 0's (black). See the equation pictured below for input pixel values, `x`.
###Code
# plot original image
plt.imshow(gray_img, cmap='gray')
# visualize all filters
fig = plt.figure(figsize=(12, 6))
fig.subplots_adjust(left=0, right=1.5, bottom=0.8, top=1, hspace=0.05, wspace=0.05)
for i in range(4):
ax = fig.add_subplot(1, 4, i+1, xticks=[], yticks=[])
ax.imshow(filters[i], cmap='gray')
ax.set_title('Filter %s' % str(i+1))
# convert the image into an input Tensor
gray_img_tensor = torch.from_numpy(gray_img).unsqueeze(0).unsqueeze(1)
# get all the layers
conv_layer, activated_layer, pooled_layer = model(gray_img_tensor)
# visualize the output of the activated conv layer
viz_layer(activated_layer)
###Output
_____no_output_____
###Markdown
Visualize the output of the pooling layerThen, take a look at the output of a pooling layer. The pooling layer takes as input the feature maps pictured above and reduces the dimensionality of those maps, by some pooling factor, by constructing a new, smaller image of only the maximum (brightest) values in a given kernel area.Take a look at the values on the x, y axes to see how the image has changed size.
###Code
# visualize the output of the pooling layer
viz_layer(pooled_layer)
###Output
_____no_output_____
###Markdown
Maxpooling LayerIn this notebook, we add and visualize the output of a maxpooling layer in a CNN. A convolutional layer + activation function, followed by a pooling layer, and a linear layer (to create a desired output size) make up the basic layers of a CNN. Import the image
###Code
import cv2
import matplotlib.pyplot as plt
%matplotlib inline
# TODO: Feel free to try out your own images here by changing img_path
# to a file path to another image on your computer!
img_path = 'data/udacity_sdc.png'
# load color image
bgr_img = cv2.imread(img_path)
# convert to grayscale
gray_img = cv2.cvtColor(bgr_img, cv2.COLOR_BGR2GRAY)
# normalize, rescale entries to lie in [0,1]
gray_img = gray_img.astype("float32")/255
# plot image
plt.imshow(gray_img, cmap='gray')
plt.show()
###Output
_____no_output_____
###Markdown
Define and visualize the filters
###Code
import numpy as np
## TODO: Feel free to modify the numbers here, to try out another filter!
filter_vals = np.array([[-1, -1, 1, 1], [-1, -1, 1, 1], [-1, -1, 1, 1], [-1, -1, 1, 1]])
print('Filter shape: ', filter_vals.shape)
# Defining four different filters,
# all of which are linear combinations of the `filter_vals` defined above
# define four filters
filter_1 = filter_vals
filter_2 = -filter_1
filter_3 = filter_1.T
filter_4 = -filter_3
filters = np.array([filter_1, filter_2, filter_3, filter_4])
# For an example, print out the values of filter 1
print('Filter 1: \n', filter_1)
###Output
Filter 1:
[[-1 -1 1 1]
[-1 -1 1 1]
[-1 -1 1 1]
[-1 -1 1 1]]
###Markdown
Define convolutional and pooling layersYou've seen how to define a convolutional layer, next is a:* Pooling layerIn the next cell, we initialize a convolutional layer so that it contains all the created filters. Then add a maxpooling layer, [documented here](http://pytorch.org/docs/stable/_modules/torch/nn/modules/pooling.html), with a kernel size of (2x2) so you can see that the image resolution has been reduced after this step!A maxpooling layer reduces the x-y size of an input and only keeps the most *active* pixel values. Below is an example of a 2x2 pooling kernel, with a stride of 2, applied to a small patch of grayscale pixel values; reducing the x-y size of the patch by a factor of 2. Only the maximum pixel values in 2x2 remain in the new, pooled output.
###Code
import torch
import torch.nn as nn
import torch.nn.functional as F
# define a neural network with a convolutional layer with four filters
# AND a pooling layer of size (2, 2)
class Net(nn.Module):
def __init__(self, weight):
super(Net, self).__init__()
# initializes the weights of the convolutional layer to be the weights of the 4 defined filters
k_height, k_width = weight.shape[2:]
# assumes there are 4 grayscale filters
self.conv = nn.Conv2d(1, 4, kernel_size=(k_height, k_width), bias=False)
self.conv.weight = torch.nn.Parameter(weight)
# define a pooling layer
self.pool = nn.MaxPool2d(2, 2)
def forward(self, x):
# calculates the output of a convolutional layer
# pre- and post-activation
conv_x = self.conv(x)
activated_x = F.relu(conv_x)
# applies pooling layer
pooled_x = self.pool(activated_x)
# returns all layers
return conv_x, activated_x, pooled_x
# instantiate the model and set the weights
weight = torch.from_numpy(filters).unsqueeze(1).type(torch.FloatTensor)
model = Net(weight)
# print out the layer in the network
print(model)
weight
weight.shape[2:]
###Output
_____no_output_____
###Markdown
Visualize the output of each filterFirst, we'll define a helper function, `viz_layer` that takes in a specific layer and number of filters (optional argument), and displays the output of that layer once an image has been passed through.
###Code
# helper function for visualizing the output of a given layer
# default number of filters is 4
def viz_layer(layer, n_filters= 4):
fig = plt.figure(figsize=(20, 20))
for i in range(n_filters):
ax = fig.add_subplot(1, n_filters, i+1)
# grab layer outputs
ax.imshow(np.squeeze(layer[0,i].data.numpy()), cmap='gray')
ax.set_title('Output %s' % str(i+1))
###Output
_____no_output_____
###Markdown
Let's look at the output of a convolutional layer after a ReLU activation function is applied. ReLU activationA ReLU function turns all negative pixel values into 0's (black). See the equation pictured below for input pixel values, `x`.
###Code
# plot original image
plt.imshow(gray_img, cmap='gray')
# visualize all filters
fig = plt.figure(figsize=(12, 6))
fig.subplots_adjust(left=0, right=1.5, bottom=0.8, top=1, hspace=0.05, wspace=0.05)
for i in range(4):
ax = fig.add_subplot(1, 4, i+1, xticks=[], yticks=[])
ax.imshow(filters[i], cmap='gray')
ax.set_title('Filter %s' % str(i+1))
# convert the image into an input Tensor
gray_img_tensor = torch.from_numpy(gray_img).unsqueeze(0).unsqueeze(1)
# get all the layers
conv_layer, activated_layer, pooled_layer = model(gray_img_tensor)
# visualize the output of the activated conv layer
viz_layer(activated_layer)
###Output
_____no_output_____
###Markdown
Visualize the output of the pooling layerThen, take a look at the output of a pooling layer. The pooling layer takes as input the feature maps pictured above and reduces the dimensionality of those maps, by some pooling factor, by constructing a new, smaller image of only the maximum (brightest) values in a given kernel area.Take a look at the values on the x, y axes to see how the image has changed size.
###Code
# visualize the output of the pooling layer
viz_layer(pooled_layer)
###Output
_____no_output_____
###Markdown
Maxpooling LayerIn this notebook, we add and visualize the output of a maxpooling layer in a CNN. A convolutional layer + activation function, followed by a pooling layer, and a linear layer (to create a desired output size) make up the basic layers of a CNN. Import the image
###Code
import cv2
import matplotlib.pyplot as plt
%matplotlib inline
# TODONE: Feel free to try out your own images here by changing img_path
# to a file path to another image on your computer!
img_path = 'data/udacity_sdc.png'
# load color image
bgr_img = cv2.imread(img_path)
# convert to grayscale
gray_img = cv2.cvtColor(bgr_img, cv2.COLOR_BGR2GRAY)
# normalize, rescale entries to lie in [0,1]
gray_img = gray_img.astype("float32")/255
# plot image
plt.imshow(gray_img, cmap='gray')
plt.show()
###Output
_____no_output_____
###Markdown
Define and visualize the filters
###Code
import numpy as np
## TODONE: Feel free to modify the numbers here, to try out another filter!
filter_vals = np.array([[-1, -1, 1, 1], [-1, -1, 1, 1], [-1, -1, 1, 1], [-1, -1, 1, 1]])
print('Filter shape: ', filter_vals.shape)
# Defining four different filters,
# all of which are linear combinations of the `filter_vals` defined above
# define four filters
filter_1 = filter_vals
filter_2 = -filter_1
filter_3 = filter_1.T
filter_4 = -filter_3
filters = np.array([filter_1, filter_2, filter_3, filter_4])
# For an example, print out the values of filter 1
print('Filter 1: \n', filter_1)
###Output
Filter 1:
[[-1 -1 1 1]
[-1 -1 1 1]
[-1 -1 1 1]
[-1 -1 1 1]]
###Markdown
Define convolutional and pooling layersYou've seen how to define a convolutional layer, next is a:* Pooling layerIn the next cell, we initialize a convolutional layer so that it contains all the created filters. Then add a maxpooling layer, [documented here](http://pytorch.org/docs/stable/_modules/torch/nn/modules/pooling.html), with a kernel size of (2x2) so you can see that the image resolution has been reduced after this step!A maxpooling layer reduces the x-y size of an input and only keeps the most *active* pixel values. Below is an example of a 2x2 pooling kernel, with a stride of 2, applied to a small patch of grayscale pixel values; reducing the size of the patch by a factor of 4. Only the maximum pixel values in 2x2 remain in the new, pooled output.
###Code
import torch
import torch.nn as nn
import torch.nn.functional as F
# define a neural network with a convolutional layer with four filters
# AND a pooling layer of size (2, 2)
class Net(nn.Module):
def __init__(self, weight):
super(Net, self).__init__()
# initializes the weights of the convolutional layer to be the weights of the 4 defined filters
k_height, k_width = weight.shape[2:]
# defines the convolutional layer, assumes there are 4 grayscale filters
# torch.nn.Conv2d(in_channels, out_channels, kernel_size, stride=1, padding=0, dilation=1, groups=1, bias=True)
self.conv = nn.Conv2d(1, 4, kernel_size=(k_height, k_width), bias=False)
self.conv.weight = torch.nn.Parameter(weight)
# define a pooling layer
self.pool = nn.MaxPool2d(2, 2)
def forward(self, x):
# calculates the output of a convolutional layer
# pre- and post-activation
conv_x = self.conv(x)
activated_x = F.relu(conv_x)
# applies pooling layer
pooled_x = self.pool(activated_x)
# returns all layers
return conv_x, activated_x, pooled_x
# instantiate the model and set the weights
weight = torch.from_numpy(filters).unsqueeze(1).type(torch.FloatTensor)
model = Net(weight)
# print out the layer in the network
print(model)
###Output
Net(
(conv): Conv2d(1, 4, kernel_size=(4, 4), stride=(1, 1), bias=False)
(pool): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
)
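###Markdown
A quick parameter count (a minimal sketch) confirms the layer holds exactly the four hand-set 4x4 filters and, since `bias=False`, nothing else.
###Code
# sketch: learnable parameters in the conv layer = 4 filters * 1 channel * 4 * 4 weights
n_params = sum(p.numel() for p in model.conv.parameters())
print(n_params)   # 64
###Output
_____no_output_____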
###Markdown
Visualize the output of each filterFirst, we'll define a helper function, `viz_layer` that takes in a specific layer and number of filters (optional argument), and displays the output of that layer once an image has been passed through.
###Code
# helper function for visualizing the output of a given layer
# default number of filters is 4
def viz_layer(layer, n_filters= 4):
fig = plt.figure(figsize=(20, 20))
for i in range(n_filters):
ax = fig.add_subplot(1, n_filters, i+1)
# grab layer outputs
ax.imshow(np.squeeze(layer[0,i].data.numpy()), cmap='gray')
ax.set_title('Output %s' % str(i+1))
###Output
_____no_output_____
###Markdown
Let's look at the output of a convolutional layer after a ReLU activation function is applied. ReLU activationA ReLU function turns all negative pixel values into 0's (black). See the equation pictured below for input pixel values, `x`.
###Code
# plot original image
plt.imshow(gray_img, cmap='gray')
# visualize all filters
fig = plt.figure(figsize=(12, 6))
fig.subplots_adjust(left=0, right=1.5, bottom=0.8, top=1, hspace=0.05, wspace=0.05)
for i in range(4):
ax = fig.add_subplot(1, 4, i+1, xticks=[], yticks=[])
ax.imshow(filters[i], cmap='gray')
ax.set_title('Filter %s' % str(i+1))
# convert the image into an input Tensor
gray_img_tensor = torch.from_numpy(gray_img).unsqueeze(0).unsqueeze(1)
# get all the layers
conv_layer, activated_layer, pooled_layer = model(gray_img_tensor)
# visualize the output of the activated conv layer
viz_layer(activated_layer)
###Output
_____no_output_____
###Markdown
Visualize the output of the pooling layerThen, take a look at the output of a pooling layer. The pooling layer takes as input the feature maps pictured above and reduces the dimensionality of those maps, by some pooling factor, by constructing a new, smaller image of only the maximum (brightest) values in a given kernel area.Take a look at the values on the x, y axes to see how the image has changed size.
###Code
# visualize the output of the pooling layer
viz_layer(pooled_layer)
###Output
_____no_output_____
###Markdown
Maxpooling LayerIn this notebook, we add and visualize the output of a maxpooling layer in a CNN. A convolutional layer + activation function, followed by a pooling layer, and a linear layer (to create a desired output size) make up the basic layers of a CNN.
###Code
from google.colab import drive
ROOT = "/content/drive"
drive.mount(ROOT)
%cd "/content/drive/My Drive/Learning/deep-learning-v2-pytorch/convolutional-neural-networks/conv-visualization"
###Output
/content/drive/My Drive/Learning/deep-learning-v2-pytorch/convolutional-neural-networks/conv-visualization
###Markdown
Import the image
###Code
import cv2
import matplotlib.pyplot as plt
%matplotlib inline
# TODO: Feel free to try out your own images here by changing img_path
# to a file path to another image on your computer!
img_path = 'data/udacity_sdc.png'
# load color image
bgr_img = cv2.imread(img_path)
# convert to grayscale
gray_img = cv2.cvtColor(bgr_img, cv2.COLOR_BGR2GRAY)
# normalize, rescale entries to lie in [0,1]
gray_img = gray_img.astype("float32")/255
# plot image
plt.imshow(gray_img, cmap='gray')
plt.show()
###Output
_____no_output_____
###Markdown
Define and visualize the filters
###Code
import numpy as np
## TODO: Feel free to modify the numbers here, to try out another filter!
filter_vals = np.array([[-1, -1, 1, 1], [-1, -1, 1, 1], [-1, -1, 1, 1], [-1, -1, 1, 1]])
print('Filter shape: ', filter_vals.shape)
# Defining four different filters,
# all of which are linear combinations of the `filter_vals` defined above
# define four filters
filter_1 = filter_vals
filter_2 = -filter_1
filter_3 = filter_1.T
filter_4 = -filter_3
filters = np.array([filter_1, filter_2, filter_3, filter_4])
# For an example, print out the values of filter 1
print('Filter 1: \n', filter_1)
###Output
Filter 1:
[[-1 -1 1 1]
[-1 -1 1 1]
[-1 -1 1 1]
[-1 -1 1 1]]
###Markdown
Define convolutional and pooling layersYou've seen how to define a convolutional layer, next is a:* Pooling layerIn the next cell, we initialize a convolutional layer so that it contains all the created filters. Then add a maxpooling layer, [documented here](http://pytorch.org/docs/stable/_modules/torch/nn/modules/pooling.html), with a kernel size of (2x2) so you can see that the image resolution has been reduced after this step!A maxpooling layer reduces the x-y size of an input and only keeps the most *active* pixel values. Below is an example of a 2x2 pooling kernel, with a stride of 2, applied to a small patch of grayscale pixel values; reducing the x-y size of the patch by a factor of 2. Only the maximum pixel values in 2x2 remain in the new, pooled output.
###Code
import torch
import torch.nn as nn
import torch.nn.functional as F
# define a neural network with a convolutional layer with four filters
# AND a pooling layer of size (2, 2)
class Net(nn.Module):
def __init__(self, weight):
super(Net, self).__init__()
# initializes the weights of the convolutional layer to be the weights of the 4 defined filters
k_height, k_width = weight.shape[2:]
# assumes there are 4 grayscale filters
self.conv = nn.Conv2d(1, 4, kernel_size=(k_height, k_width), bias=False)
self.conv.weight = torch.nn.Parameter(weight)
# define a pooling layer
self.pool = nn.MaxPool2d(2, 2)
def forward(self, x):
# calculates the output of a convolutional layer
# pre- and post-activation
conv_x = self.conv(x)
activated_x = F.relu(conv_x)
# applies pooling layer
pooled_x = self.pool(activated_x)
# returns all layers
return conv_x, activated_x, pooled_x
# instantiate the model and set the weights
weight = torch.from_numpy(filters).unsqueeze(1).type(torch.FloatTensor)
model = Net(weight)
# print out the layer in the network
print(model)
###Output
Net(
(conv): Conv2d(1, 4, kernel_size=(4, 4), stride=(1, 1), bias=False)
(pool): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
)
###Markdown
Visualize the output of each filterFirst, we'll define a helper function, `viz_layer` that takes in a specific layer and number of filters (optional argument), and displays the output of that layer once an image has been passed through.
###Code
# helper function for visualizing the output of a given layer
# default number of filters is 4
def viz_layer(layer, n_filters= 4):
fig = plt.figure(figsize=(20, 20))
for i in range(n_filters):
ax = fig.add_subplot(1, n_filters, i+1)
# grab layer outputs
ax.imshow(np.squeeze(layer[0,i].data.numpy()), cmap='gray')
ax.set_title('Output %s' % str(i+1))
###Output
_____no_output_____
###Markdown
Let's look at the output of a convolutional layer after a ReLU activation function is applied. ReLU activationA ReLU function turns all negative pixel values into 0's (black). See the equation pictured below for input pixel values, `x`.
###Code
# plot original image
plt.imshow(gray_img, cmap='gray')
# visualize all filters
fig = plt.figure(figsize=(12, 6))
fig.subplots_adjust(left=0, right=1.5, bottom=0.8, top=1, hspace=0.05, wspace=0.05)
for i in range(4):
ax = fig.add_subplot(1, 4, i+1, xticks=[], yticks=[])
ax.imshow(filters[i], cmap='gray')
ax.set_title('Filter %s' % str(i+1))
# convert the image into an input Tensor
gray_img_tensor = torch.from_numpy(gray_img).unsqueeze(0).unsqueeze(1)
# get all the layers
conv_layer, activated_layer, pooled_layer = model(gray_img_tensor)
# visualize the output of the activated conv layer
viz_layer(activated_layer)
###Output
_____no_output_____
###Markdown
Visualize the output of the pooling layerThen, take a look at the output of a pooling layer. The pooling layer takes as input the feature maps pictured above and reduces the dimensionality of those maps, by some pooling factor, by constructing a new, smaller image of only the maximum (brightest) values in a given kernel area.Take a look at the values on the x, y axes to see how the image has changed size.
###Code
# visualize the output of the pooling layer
viz_layer(pooled_layer)
###Output
_____no_output_____
###Markdown
Maxpooling LayerIn this notebook, we add and visualize the output of a maxpooling layer in a CNN. A convolutional layer + activation function, followed by a pooling layer, and a linear layer (to create a desired output size) make up the basic layers of a CNN. Import the image
###Code
import cv2
import matplotlib.pyplot as plt
%matplotlib inline
# TODO: Feel free to try out your own images here by changing img_path
# to a file path to another image on your computer!
img_path = 'data/udacity_sdc.png'
# load color image
bgr_img = cv2.imread(img_path)
# convert to grayscale
gray_img = cv2.cvtColor(bgr_img, cv2.COLOR_BGR2GRAY)
# normalize, rescale entries to lie in [0,1]
gray_img = gray_img.astype("float32")/255
# plot image
plt.imshow(gray_img, cmap='gray')
plt.show()
###Output
_____no_output_____
###Markdown
Define and visualize the filters
###Code
import numpy as np
## TODO: Feel free to modify the numbers here, to try out another filter!
filter_vals = np.array([[-1, -1, 1, 1],
[-1, -1, 1, 1],
[-1, -1, 1, 1],
[-1, -1, 1, 1]])
print('Filter shape: ', filter_vals.shape)
# Defining four different filters,
# all of which are linear combinations of the `filter_vals` defined above
# define four filters
filter_1 = filter_vals
filter_2 = -filter_1
filter_3 = filter_1.T
filter_4 = -filter_3
filters = np.array([filter_1, filter_2, filter_3, filter_4])
# For an example, print out the values of filter 4
print('Filter 4: \n', filter_4)
###Output
Filter 4:
[[ 1 1 1 1]
[ 1 1 1 1]
[-1 -1 -1 -1]
[-1 -1 -1 -1]]
###Markdown
Define convolutional and pooling layersYou've seen how to define a convolutional layer, next is a:* Pooling layerIn the next cell, we initialize a convolutional layer so that it contains all the created filters. Then add a maxpooling layer, [documented here](http://pytorch.org/docs/stable/_modules/torch/nn/modules/pooling.html), with a kernel size of (2x2) so you can see that the image resolution has been reduced after this step!A maxpooling layer reduces the x-y size of an input and only keeps the most *active* pixel values. Below is an example of a 2x2 pooling kernel, with a stride of 2, applied to a small patch of grayscale pixel values; reducing the x-y size of the patch by a factor of 2. Only the maximum pixel values in 2x2 remain in the new, pooled output.
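For concreteness, here is a minimal NumPy sketch of that 2x2, stride-2 pooling on an assumed 4x4 patch (the values are made up for illustration):
###Code
import numpy as np
# assumed 4x4 patch of grayscale values
patch = np.array([[1, 3, 2, 0],
                  [4, 2, 1, 1],
                  [0, 5, 7, 2],
                  [6, 1, 3, 8]])
# take the max of each non-overlapping 2x2 block (stride 2)
pooled = patch.reshape(2, 2, 2, 2).max(axis=(1, 3))
print(pooled) # [[4 2] [6 8]]
###Output
_____no_output_____
###Markdown
Now define the convolutional and pooling layers in PyTorch: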
###Code
import torch
import torch.nn as nn
import torch.nn.functional as F
# define a neural network with a convolutional layer with four filters
# AND a pooling layer of size (2, 2)
class Net(nn.Module):
def __init__(self, weight):
super(Net, self).__init__()
# initializes the weights of the convolutional layer to be the weights of the 4 defined filters
k_height, k_width = weight.shape[2:]
# defines the convolutional layer, assumes there are 4 grayscale filters
# torch.nn.Conv2d(in_channels, out_channels, kernel_size, stride=1, padding=0, dilation=1, groups=1, bias=True)
self.conv = nn.Conv2d(1, 4, kernel_size=(k_height, k_width), bias=False)
self.conv.weight = torch.nn.Parameter(weight)
# define a pooling layer
self.pool = nn.MaxPool2d(2, 2)
def forward(self, x):
# calculates the output of a convolutional layer
# pre- and post-activation
conv_x = self.conv(x)
activated_x = F.relu(conv_x)
# applies pooling layer
pooled_x = self.pool(activated_x)
# returns all layers
return conv_x, activated_x, pooled_x
# instantiate the model and set the weights
weight = torch.from_numpy(filters).unsqueeze(1).type(torch.FloatTensor)
model = Net(weight)
# print out the layer in the network
print(model)
###Output
Net(
(conv): Conv2d(1, 4, kernel_size=(4, 4), stride=(1, 1), bias=False)
(pool): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
)
###Markdown
Visualize the output of each filterFirst, we'll define a helper function, `viz_layer` that takes in a specific layer and number of filters (optional argument), and displays the output of that layer once an image has been passed through.
###Code
# helper function for visualizing the output of a given layer
# default number of filters is 4
def viz_layer(layer, n_filters= 4):
fig = plt.figure(figsize=(20, 20))
for i in range(n_filters):
ax = fig.add_subplot(1, n_filters, i+1)
# grab layer outputs
ax.imshow(np.squeeze(layer[0,i].data.numpy()), cmap='gray')
ax.set_title('Output %s' % str(i+1))
###Output
_____no_output_____
###Markdown
Let's look at the output of a convolutional layer after a ReLU activation function is applied. ReLU activationA ReLU function turns all negative pixel values into 0's (black). See the equation pictured below for input pixel values, `x`.
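As a tiny added illustration, ReLU applied elementwise simply zeroes the negatives and passes positives through unchanged:
###Code
import torch
import torch.nn.functional as F
x = torch.tensor([-2.0, -0.5, 0.0, 0.5, 2.0])
print(F.relu(x)) # tensor([0.0000, 0.0000, 0.0000, 0.5000, 2.0000])
###Output
_____no_output_____
###Markdown
Back to the filtered image: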
###Code
# plot original image
plt.imshow(gray_img, cmap='gray')
# visualize all filters
fig = plt.figure(figsize=(12, 6))
fig.subplots_adjust(left=0, right=1.5, bottom=0.8, top=1, hspace=0.05, wspace=0.05)
for i in range(4):
ax = fig.add_subplot(1, 4, i+1, xticks=[], yticks=[])
ax.imshow(filters[i], cmap='gray')
ax.set_title('Filter %s' % str(i+1))
# convert the image into an input Tensor
gray_img_tensor = torch.from_numpy(gray_img).unsqueeze(0).unsqueeze(1)
# get all the layers
conv_layer, activated_layer, pooled_layer = model(gray_img_tensor)
# visualize the output of the activated conv layer
viz_layer(activated_layer)
###Output
_____no_output_____
###Markdown
Visualize the output of the pooling layerThen, take a look at the output of a pooling layer. The pooling layer takes as input the feature maps pictured above and reduces the dimensionality of those maps, by some pooling factor, by constructing a new, smaller image of only the maximum (brightest) values in a given kernel area.Take a look at the values on the x, y axes to see how the image has changed size.
###Code
# visualize the output of the pooling layer
viz_layer(pooled_layer)
###Output
_____no_output_____
###Markdown
Maxpooling LayerIn this notebook, we add and visualize the output of a maxpooling layer in a CNN. A convolutional layer + activation function, followed by a pooling layer, and a linear layer (to create a desired output size) make up the basic layers of a CNN. Import the image
###Code
import cv2
import matplotlib.pyplot as plt
%matplotlib inline
# TODO: Feel free to try out your own images here by changing img_path
# to a file path to another image on your computer!
img_path = 'data/udacity_sdc.png'
# load color image
bgr_img = cv2.imread(img_path)
# convert to grayscale
gray_img = cv2.cvtColor(bgr_img, cv2.COLOR_BGR2GRAY)
# normalize, rescale entries to lie in [0,1]
gray_img = gray_img.astype("float32")/255
# plot image
plt.imshow(gray_img, cmap='gray')
plt.show()
###Output
_____no_output_____
###Markdown
Define and visualize the filters
###Code
import numpy as np
## TODO: Feel free to modify the numbers here, to try out another filter!
filter_vals = np.array([[-1, -1, 1, 1], [-1, -1, 1, 1], [-1, -1, 1, 1], [-1, -1, 1, 1]])
print('Filter shape: ', filter_vals.shape)
# Defining four different filters,
# all of which are linear combinations of the `filter_vals` defined above
# define four filters
filter_1 = filter_vals
filter_2 = -filter_1
filter_3 = filter_1.T
filter_4 = -filter_3
filters = np.array([filter_1, filter_2, filter_3, filter_4])
# For an example, print out the values of filter 1
print('Filter 1: \n', filter_1)
###Output
Filter 1:
[[-1 -1 1 1]
[-1 -1 1 1]
[-1 -1 1 1]
[-1 -1 1 1]]
###Markdown
Define convolutional and pooling layersYou've seen how to define a convolutional layer, next is a:* Pooling layerIn the next cell, we initialize a convolutional layer so that it contains all the created filters. Then add a maxpooling layer, [documented here](http://pytorch.org/docs/stable/_modules/torch/nn/modules/pooling.html), with a kernel size of (2x2) so you can see that the image resolution has been reduced after this step!A maxpooling layer reduces the x-y size of an input and only keeps the most *active* pixel values. Below is an example of a 2x2 pooling kernel, with a stride of 2, applied to a small patch of grayscale pixel values; reducing the x-y size of the patch by a factor of 2. Only the maximum pixel values in 2x2 remain in the new, pooled output.
###Code
import torch
import torch.nn as nn
import torch.nn.functional as F
# define a neural network with a convolutional layer with four filters
# AND a pooling layer of size (2, 2)
class Net(nn.Module):
def __init__(self, weight):
super(Net, self).__init__()
# initializes the weights of the convolutional layer to be the weights of the 4 defined filters
k_height, k_width = weight.shape[2:]
# assumes there are 4 grayscale filters
self.conv = nn.Conv2d(1, 4, kernel_size=(k_height, k_width), bias=False)
self.conv.weight = torch.nn.Parameter(weight)
# define a pooling layer
self.pool = nn.MaxPool2d(2, 2)
def forward(self, x):
# calculates the output of a convolutional layer
# pre- and post-activation
conv_x = self.conv(x)
activated_x = F.relu(conv_x)
# applies pooling layer
pooled_x = self.pool(activated_x)
# returns all layers
return conv_x, activated_x, pooled_x
# instantiate the model and set the weights
weight = torch.from_numpy(filters).unsqueeze(1).type(torch.FloatTensor)
model = Net(weight)
# print out the layer in the network
print(model)
###Output
Net(
(conv): Conv2d(1, 4, kernel_size=(4, 4), stride=(1, 1), bias=False)
(pool): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
)
###Markdown
Visualize the output of each filterFirst, we'll define a helper function, `viz_layer` that takes in a specific layer and number of filters (optional argument), and displays the output of that layer once an image has been passed through.
###Code
# helper function for visualizing the output of a given layer
# default number of filters is 4
def viz_layer(layer, n_filters= 4):
fig = plt.figure(figsize=(20, 20))
for i in range(n_filters):
ax = fig.add_subplot(1, n_filters, i+1)
# grab layer outputs
ax.imshow(np.squeeze(layer[0,i].data.numpy()), cmap='gray')
ax.set_title('Output %s' % str(i+1))
###Output
_____no_output_____
###Markdown
Let's look at the output of a convolutional layer after a ReLU activation function is applied. ReLU activationA ReLU function turns all negative pixel values into 0's (black). See the equation pictured below for input pixel values, `x`.
###Code
# plot original image
plt.imshow(gray_img, cmap='gray')
# visualize all filters
fig = plt.figure(figsize=(12, 6))
fig.subplots_adjust(left=0, right=1.5, bottom=0.8, top=1, hspace=0.05, wspace=0.05)
for i in range(4):
ax = fig.add_subplot(1, 4, i+1, xticks=[], yticks=[])
ax.imshow(filters[i], cmap='gray')
ax.set_title('Filter %s' % str(i+1))
# convert the image into an input Tensor
gray_img_tensor = torch.from_numpy(gray_img).unsqueeze(0).unsqueeze(1)
# get all the layers
conv_layer, activated_layer, pooled_layer = model(gray_img_tensor)
# visualize the output of the activated conv layer
viz_layer(activated_layer)
###Output
_____no_output_____
###Markdown
Visualize the output of the pooling layerThen, take a look at the output of a pooling layer. The pooling layer takes as input the feature maps pictured above and reduces the dimensionality of those maps, by some pooling factor, by constructing a new, smaller image of only the maximum (brightest) values in a given kernel area.Take a look at the values on the x, y axes to see how the image has changed size.
###Code
# visualize the output of the pooling layer
viz_layer(pooled_layer)
###Output
_____no_output_____
###Markdown
Maxpooling LayerIn this notebook, we add and visualize the output of a maxpooling layer in a CNN. A convolutional layer + activation function, followed by a pooling layer, and a linear layer (to create a desired output size) make up the basic layers of a CNN. Import the image
###Code
import cv2
import matplotlib.pyplot as plt
%matplotlib inline
# TODO: Feel free to try out your own images here by changing img_path
# to a file path to another image on your computer!
img_path = 'data/udacity_sdc.png'
# load color image
bgr_img = cv2.imread(img_path)
# convert to grayscale
gray_img = cv2.cvtColor(bgr_img, cv2.COLOR_BGR2GRAY)
# normalize, rescale entries to lie in [0,1]
gray_img = gray_img.astype("float32")/255
# plot image
plt.imshow(gray_img, cmap='gray')
plt.show()
###Output
_____no_output_____
###Markdown
Define and visualize the filters
###Code
import numpy as np
## TODO: Feel free to modify the numbers here, to try out another filter!
filter_vals = np.array([[-1, -1, 1, 1], [-1, -1, 1, 1], [-1, -1, 1, 1], [-1, -1, 1, 1]])
print('Filter shape: ', filter_vals.shape)
# Defining four different filters,
# all of which are linear combinations of the `filter_vals` defined above
# define four filters
filter_1 = filter_vals
filter_2 = -filter_1
filter_3 = filter_1.T
filter_4 = -filter_3
filters = np.array([filter_1, filter_2, filter_3, filter_4])
# For an example, print out the values of filter 1
print('Filter 1: \n', filter_1)
###Output
Filter 1:
[[-1 -1 1 1]
[-1 -1 1 1]
[-1 -1 1 1]
[-1 -1 1 1]]
###Markdown
Define convolutional and pooling layersYou've seen how to define a convolutional layer, next is a:* Pooling layerIn the next cell, we initialize a convolutional layer so that it contains all the created filters. Then add a maxpooling layer, [documented here](http://pytorch.org/docs/stable/_modules/torch/nn/modules/pooling.html), with a kernel size of (2x2) so you can see that the image resolution has been reduced after this step!A maxpooling layer reduces the x-y size of an input and only keeps the most *active* pixel values. Below is an example of a 2x2 pooling kernel, with a stride of 2, applied to a small patch of grayscale pixel values; reducing the x-y size of the patch by a factor of 2. Only the maximum pixel values in 2x2 remain in the new, pooled output.
###Code
import torch
import torch.nn as nn
import torch.nn.functional as F
# define a neural network with a convolutional layer with four filters
# AND a pooling layer of size (2, 2)
class Net(nn.Module):
def __init__(self, weight):
super(Net, self).__init__()
# initializes the weights of the convolutional layer to be the weights of the 4 defined filters
k_height, k_width = weight.shape[2:]
# assumes there are 4 grayscale filters
self.conv = nn.Conv2d(1, 4, kernel_size=(k_height, k_width), bias=False)
self.conv.weight = torch.nn.Parameter(weight)
# define a pooling layer
self.pool = nn.MaxPool2d(2, 2)
def forward(self, x):
# calculates the output of a convolutional layer
# pre- and post-activation
conv_x = self.conv(x)
activated_x = F.relu(conv_x)
# applies pooling layer
pooled_x = self.pool(activated_x)
# returns all layers
return conv_x, activated_x, pooled_x
# instantiate the model and set the weights
weight = torch.from_numpy(filters).unsqueeze(1).type(torch.FloatTensor)
model = Net(weight)
# print out the layer in the network
print(model)
###Output
Net(
(conv): Conv2d(1, 4, kernel_size=(4, 4), stride=(1, 1), bias=False)
(pool): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
)
###Markdown
Visualize the output of each filterFirst, we'll define a helper function, `viz_layer` that takes in a specific layer and number of filters (optional argument), and displays the output of that layer once an image has been passed through.
###Code
# helper function for visualizing the output of a given layer
# default number of filters is 4
def viz_layer(layer, n_filters= 4):
fig = plt.figure(figsize=(20, 20))
for i in range(n_filters):
ax = fig.add_subplot(1, n_filters, i+1)
# grab layer outputs
ax.imshow(np.squeeze(layer[0,i].data.numpy()), cmap='gray')
ax.set_title('Output %s' % str(i+1))
###Output
_____no_output_____
###Markdown
Let's look at the output of a convolutional layer after a ReLU activation function is applied. ReLU activationA ReLU function turns all negative pixel values into 0's (black). See the equation pictured below for input pixel values, `x`.
###Code
# plot original image
plt.imshow(gray_img, cmap='gray')
# visualize all filters
fig = plt.figure(figsize=(12, 6))
fig.subplots_adjust(left=0, right=1.5, bottom=0.8, top=1, hspace=0.05, wspace=0.05)
for i in range(4):
ax = fig.add_subplot(1, 4, i+1, xticks=[], yticks=[])
ax.imshow(filters[i], cmap='gray')
ax.set_title('Filter %s' % str(i+1))
# convert the image into an input Tensor
gray_img_tensor = torch.from_numpy(gray_img).unsqueeze(0).unsqueeze(1)
# get all the layers
conv_layer, activated_layer, pooled_layer = model(gray_img_tensor)
# visualize the output of the activated conv layer
viz_layer(activated_layer)
###Output
_____no_output_____
###Markdown
Visualize the output of the pooling layerThen, take a look at the output of a pooling layer. The pooling layer takes as input the feature maps pictured above and reduces the dimensionality of those maps, by some pooling factor, by constructing a new, smaller image of only the maximum (brightest) values in a given kernel area.Take a look at the values on the x, y axes to see how the image has changed size.
###Code
# visualize the output of the pooling layer
viz_layer(pooled_layer)
###Output
_____no_output_____
###Markdown
Maxpooling LayerIn this notebook, we add and visualize the output of a maxpooling layer in a CNN. A convolutional layer + activation function, followed by a pooling layer, and a linear layer (to create a desired output size) make up the basic layers of a CNN. Import the image
###Code
import cv2
import matplotlib.pyplot as plt
%matplotlib inline
# TODO: Feel free to try out your own images here by changing img_path
# to a file path to another image on your computer!
img_path = 'data/udacity_sdc.png'
# load color image
bgr_img = cv2.imread(img_path)
# convert to grayscale
gray_img = cv2.cvtColor(bgr_img, cv2.COLOR_BGR2GRAY)
# normalize, rescale entries to lie in [0,1]
gray_img = gray_img.astype("float32")/255
# plot image
plt.imshow(gray_img, cmap='gray')
plt.show()
###Output
_____no_output_____
###Markdown
Define and visualize the filters
###Code
import numpy as np
## TODO: Feel free to modify the numbers here, to try out another filter!
filter_vals = np.array([[-1, -1, 1, 1], [-1, -1, 1, 1], [-1, -1, 1, 1], [-1, -1, 1, 1]])
print('Filter shape: ', filter_vals.shape)
# Defining four different filters,
# all of which are linear combinations of the `filter_vals` defined above
# define four filters
filter_1 = filter_vals
filter_2 = -filter_1
filter_3 = filter_1.T
filter_4 = -filter_3
filters = np.array([filter_1, filter_2, filter_3, filter_4])
# For an example, print out the values of filter 1
print('Filter 1: \n', filter_1)
###Output
Filter 1:
[[-1 -1 1 1]
[-1 -1 1 1]
[-1 -1 1 1]
[-1 -1 1 1]]
###Markdown
Define convolutional and pooling layersYou've seen how to define a convolutional layer, next is a:* Pooling layerIn the next cell, we initialize a convolutional layer so that it contains all the created filters. Then add a maxpooling layer, [documented here](http://pytorch.org/docs/stable/_modules/torch/nn/modules/pooling.html), with a kernel size of (2x2) so you can see that the image resolution has been reduced after this step!A maxpooling layer reduces the x-y size of an input and only keeps the most *active* pixel values. Below is an example of a 2x2 pooling kernel, with a stride of 2, applied to a small patch of grayscale pixel values; reducing the x-y size of the patch by a factor of 2. Only the maximum pixel values in 2x2 remain in the new, pooled output.
###Code
import torch
import torch.nn as nn
import torch.nn.functional as F
# define a neural network with a convolutional layer with four filters
# AND a pooling layer of size (2, 2)
class Net(nn.Module):
def __init__(self, weight):
super(Net, self).__init__()
# initializes the weights of the convolutional layer to be the weights of the 4 defined filters
k_height, k_width = weight.shape[2:]
# assumes there are 4 grayscale filters
self.conv = nn.Conv2d(1, 4, kernel_size=(k_height, k_width), bias=False)
self.conv.weight = torch.nn.Parameter(weight)
# define a pooling layer
self.pool = nn.MaxPool2d(2, 2)
def forward(self, x):
# calculates the output of a convolutional layer
# pre- and post-activation
conv_x = self.conv(x)
activated_x = F.relu(conv_x)
# applies pooling layer
pooled_x = self.pool(activated_x)
# returns all layers
return conv_x, activated_x, pooled_x
# instantiate the model and set the weights
weight = torch.from_numpy(filters).unsqueeze(1).type(torch.FloatTensor)
model = Net(weight)
# print out the layer in the network
print(model)
###Output
Net(
(conv): Conv2d(1, 4, kernel_size=(4, 4), stride=(1, 1), bias=False)
(pool): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
)
###Markdown
Visualize the output of each filterFirst, we'll define a helper function, `viz_layer` that takes in a specific layer and number of filters (optional argument), and displays the output of that layer once an image has been passed through.
###Code
# helper function for visualizing the output of a given layer
# default number of filters is 4
def viz_layer(layer, n_filters= 4):
fig = plt.figure(figsize=(20, 20))
for i in range(n_filters):
ax = fig.add_subplot(1, n_filters, i+1)
# grab layer outputs
ax.imshow(np.squeeze(layer[0,i].data.numpy()), cmap='gray')
ax.set_title('Output %s' % str(i+1))
###Output
_____no_output_____
###Markdown
Let's look at the output of a convolutional layer after a ReLU activation function is applied. ReLU activationA ReLU function turns all negative pixel values into 0's (black). See the equation pictured below for input pixel values, `x`.
###Code
# plot original image
plt.imshow(gray_img, cmap='gray')
# visualize all filters
fig = plt.figure(figsize=(12, 6))
fig.subplots_adjust(left=0, right=1.5, bottom=0.8, top=1, hspace=0.05, wspace=0.05)
for i in range(4):
ax = fig.add_subplot(1, 4, i+1, xticks=[], yticks=[])
ax.imshow(filters[i], cmap='gray')
ax.set_title('Filter %s' % str(i+1))
# convert the image into an input Tensor
gray_img_tensor = torch.from_numpy(gray_img).unsqueeze(0).unsqueeze(1)
# get all the layers
conv_layer, activated_layer, pooled_layer = model(gray_img_tensor)
# visualize the output of the activated conv layer
viz_layer(activated_layer)
###Output
_____no_output_____
###Markdown
Visualize the output of the pooling layerThen, take a look at the output of a pooling layer. The pooling layer takes as input the feature maps pictured above and reduces the dimensionality of those maps, by some pooling factor, by constructing a new, smaller image of only the maximum (brightest) values in a given kernel area.Take a look at the values on the x, y axes to see how the image has changed size.
###Code
# visualize the output of the pooling layer
viz_layer(pooled_layer)
###Output
_____no_output_____
###Markdown
Maxpooling LayerIn this notebook, we add and visualize the output of a maxpooling layer in a CNN. A convolutional layer + activation function, followed by a pooling layer, and a linear layer (to create a desired output size) make up the basic layers of a CNN. Import the image
###Code
import cv2
import matplotlib.pyplot as plt
%matplotlib inline
# TODO: Feel free to try out your own images here by changing img_path
# to a file path to another image on your computer!
img_path = '../../../Gharib.jpeg'
# load color image
bgr_img = cv2.imread(img_path)
# convert to grayscale
gray_img = cv2.cvtColor(bgr_img, cv2.COLOR_BGR2GRAY)
# normalize, rescale entries to lie in [0,1]
gray_img = gray_img.astype("float32")/255
# plot image
plt.imshow(gray_img, cmap='gray')
plt.show()
###Output
_____no_output_____
###Markdown
Define and visualize the filters
###Code
import numpy as np
## TODO: Feel free to modify the numbers here, to try out another filter!
filter_vals = np.array([[-1, -1, 1, 1], [-1, -1, 1, 1], [-1, -1, 1, 1], [-1, -1, 1, 1]])
print('Filter shape: ', filter_vals.shape)
# Defining four different filters,
# all of which are linear combinations of the `filter_vals` defined above
# define four filters
filter_1 = filter_vals
filter_2 = -filter_1
filter_3 = filter_1.T
filter_4 = -filter_3
filters = np.array([filter_1, filter_2, filter_3, filter_4])
# For an example, print out the values of filter 1
print('Filter 1: \n', filter_1)
###Output
Filter 1:
[[-1 -1 1 1]
[-1 -1 1 1]
[-1 -1 1 1]
[-1 -1 1 1]]
###Markdown
Define convolutional and pooling layersYou've seen how to define a convolutional layer, next is a:* Pooling layerIn the next cell, we initialize a convolutional layer so that it contains all the created filters. Then add a maxpooling layer, [documented here](http://pytorch.org/docs/stable/_modules/torch/nn/modules/pooling.html), with a kernel size of (2x2) so you can see that the image resolution has been reduced after this step!A maxpooling layer reduces the x-y size of an input and only keeps the most *active* pixel values. Below is an example of a 2x2 pooling kernel, with a stride of 2, applied to a small patch of grayscale pixel values; reducing the x-y size of the patch by a factor of 2. Only the maximum pixel values in 2x2 remain in the new, pooled output.
###Code
import torch
import torch.nn as nn
import torch.nn.functional as F
# define a neural network with a convolutional layer with four filters
# AND a pooling layer of size (2, 2)
class Net(nn.Module):
def __init__(self, weight):
super(Net, self).__init__()
# initializes the weights of the convolutional layer to be the weights of the 4 defined filters
k_height, k_width = weight.shape[2:]
# assumes there are 4 grayscale filters
self.conv = nn.Conv2d(1, 4, kernel_size=(k_height, k_width), bias=False)
self.conv.weight = torch.nn.Parameter(weight)
# define a pooling layer
self.pool = nn.MaxPool2d(2, 2)
def forward(self, x):
# calculates the output of a convolutional layer
# pre- and post-activation
conv_x = self.conv(x)
activated_x = F.relu(conv_x)
# applies pooling layer
pooled_x = self.pool(activated_x)
# returns all layers
return conv_x, activated_x, pooled_x
# instantiate the model and set the weights
weight = torch.from_numpy(filters).unsqueeze(1).type(torch.FloatTensor)
model = Net(weight)
# print out the layer in the network
print(model)
###Output
Net(
(conv): Conv2d(1, 4, kernel_size=(4, 4), stride=(1, 1), bias=False)
(pool): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
)
###Markdown
Visualize the output of each filterFirst, we'll define a helper function, `viz_layer` that takes in a specific layer and number of filters (optional argument), and displays the output of that layer once an image has been passed through.
###Code
# helper function for visualizing the output of a given layer
# default number of filters is 4
def viz_layer(layer, n_filters= 4):
fig = plt.figure(figsize=(20, 20))
for i in range(n_filters):
ax = fig.add_subplot(1, n_filters, i+1)
# grab layer outputs
ax.imshow(np.squeeze(layer[0,i].data.numpy()), cmap='gray')
ax.set_title('Output %s' % str(i+1))
###Output
_____no_output_____
###Markdown
Let's look at the output of a convolutional layer after a ReLU activation function is applied. ReLU activationA ReLU function turns all negative pixel values into 0's (black). See the equation pictured below for input pixel values, `x`.
###Code
# plot original image
plt.imshow(gray_img, cmap='gray')
# visualize all filters
fig = plt.figure(figsize=(12, 6))
fig.subplots_adjust(left=0, right=1.5, bottom=0.8, top=1, hspace=0.05, wspace=0.05)
for i in range(4):
ax = fig.add_subplot(1, 4, i+1, xticks=[], yticks=[])
ax.imshow(filters[i], cmap='gray')
ax.set_title('Filter %s' % str(i+1))
# convert the image into an input Tensor
gray_img_tensor = torch.from_numpy(gray_img).unsqueeze(0).unsqueeze(1)
# get all the layers
conv_layer, activated_layer, pooled_layer = model(gray_img_tensor)
# visualize the output of the activated conv layer
viz_layer(activated_layer)
###Output
_____no_output_____
###Markdown
Visualize the output of the pooling layerThen, take a look at the output of a pooling layer. The pooling layer takes as input the feature maps pictured above and reduces the dimensionality of those maps, by some pooling factor, by constructing a new, smaller image of only the maximum (brightest) values in a given kernel area.Take a look at the values on the x, y axes to see how the image has changed size.
###Code
# visualize the output of the pooling layer
viz_layer(pooled_layer)
viz_layer(conv_layer)
###Output
_____no_output_____
###Markdown
Maxpooling LayerIn this notebook, we add and visualize the output of a maxpooling layer in a CNN. A convolutional layer + activation function, followed by a pooling layer, and a linear layer (to create a desired output size) make up the basic layers of a CNN. Import the image
###Code
import cv2
import matplotlib.pyplot as plt
%matplotlib inline
# TODO: Feel free to try out your own images here by changing img_path
# to a file path to another image on your computer!
img_path = 'data/udacity_sdc.png'
# load color image
bgr_img = cv2.imread(img_path)
# convert to grayscale
gray_img = cv2.cvtColor(bgr_img, cv2.COLOR_BGR2GRAY)
# normalize, rescale entries to lie in [0,1]
gray_img = gray_img.astype("float32")/255
# plot image
plt.imshow(gray_img, cmap='gray')
plt.show()
###Output
_____no_output_____
###Markdown
Define and visualize the filters
###Code
import numpy as np
## TODO: Feel free to modify the numbers here, to try out another filter!
filter_vals = np.array([[-1, -1, 1, 1], [-1, -1, 1, 1], [-1, -1, 1, 1], [-1, -1, 1, 1]])
print('Filter shape: ', filter_vals.shape)
# Defining four different filters,
# all of which are linear combinations of the `filter_vals` defined above
# define four filters
filter_1 = filter_vals
filter_2 = -filter_1
filter_3 = filter_1.T
filter_4 = -filter_3
filters = np.array([filter_1, filter_2, filter_3, filter_4])
# For an example, print out the values of filter 1
print('Filter 1: \n', filter_1)
###Output
_____no_output_____
###Markdown
Define convolutional and pooling layersYou've seen how to define a convolutional layer, next is a:* Pooling layerIn the next cell, we initialize a convolutional layer so that it contains all the created filters. Then add a maxpooling layer, [documented here](http://pytorch.org/docs/stable/_modules/torch/nn/modules/pooling.html), with a kernel size of (2x2) so you can see that the image resolution has been reduced after this step!A maxpooling layer reduces the x-y size of an input and only keeps the most *active* pixel values. Below is an example of a 2x2 pooling kernel, with a stride of 2, applied to a small patch of grayscale pixel values; reducing the x-y size of the patch by a factor of 2. Only the maximum pixel values in 2x2 remain in the new, pooled output.
###Code
import torch
import torch.nn as nn
import torch.nn.functional as F
# define a neural network with a convolutional layer with four filters
# AND a pooling layer of size (2, 2)
class Net(nn.Module):
def __init__(self, weight):
super(Net, self).__init__()
# initializes the weights of the convolutional layer to be the weights of the 4 defined filters
k_height, k_width = weight.shape[2:]
# assumes there are 4 grayscale filters
self.conv = nn.Conv2d(1, 4, kernel_size=(k_height, k_width), bias=False)
self.conv.weight = torch.nn.Parameter(weight)
# define a pooling layer
self.pool = nn.MaxPool2d(2, 2)
def forward(self, x):
# calculates the output of a convolutional layer
# pre- and post-activation
conv_x = self.conv(x)
activated_x = F.relu(conv_x)
# applies pooling layer
pooled_x = self.pool(activated_x)
# returns all layers
return conv_x, activated_x, pooled_x
# instantiate the model and set the weights
weight = torch.from_numpy(filters).unsqueeze(1).type(torch.FloatTensor)
model = Net(weight)
# print out the layer in the network
print(model)
###Output
_____no_output_____
###Markdown
Visualize the output of each filterFirst, we'll define a helper function, `viz_layer` that takes in a specific layer and number of filters (optional argument), and displays the output of that layer once an image has been passed through.
###Code
# helper function for visualizing the output of a given layer
# default number of filters is 4
def viz_layer(layer, n_filters= 4):
fig = plt.figure(figsize=(20, 20))
for i in range(n_filters):
ax = fig.add_subplot(1, n_filters, i+1)
# grab layer outputs
ax.imshow(np.squeeze(layer[0,i].data.numpy()), cmap='gray')
ax.set_title('Output %s' % str(i+1))
###Output
_____no_output_____
###Markdown
Let's look at the output of a convolutional layer after a ReLU activation function is applied. ReLU activationA ReLU function turns all negative pixel values into 0's (black). See the equation pictured below for input pixel values, `x`.
###Code
# plot original image
plt.imshow(gray_img, cmap='gray')
# visualize all filters
fig = plt.figure(figsize=(12, 6))
fig.subplots_adjust(left=0, right=1.5, bottom=0.8, top=1, hspace=0.05, wspace=0.05)
for i in range(4):
ax = fig.add_subplot(1, 4, i+1, xticks=[], yticks=[])
ax.imshow(filters[i], cmap='gray')
ax.set_title('Filter %s' % str(i+1))
# convert the image into an input Tensor
gray_img_tensor = torch.from_numpy(gray_img).unsqueeze(0).unsqueeze(1)
# get all the layers
conv_layer, activated_layer, pooled_layer = model(gray_img_tensor)
# visualize the output of the activated conv layer
viz_layer(activated_layer)
###Output
_____no_output_____
###Markdown
Visualize the output of the pooling layerThen, take a look at the output of a pooling layer. The pooling layer takes as input the feature maps pictured above and reduces the dimensionality of those maps, by some pooling factor, by constructing a new, smaller image of only the maximum (brightest) values in a given kernel area.Take a look at the values on the x, y axes to see how the image has changed size.
###Code
# visualize the output of the pooling layer
viz_layer(pooled_layer)
###Output
_____no_output_____
###Markdown
Maxpooling LayerIn this notebook, we add and visualize the output of a maxpooling layer in a CNN. A convolutional layer + activation function, followed by a pooling layer, and a linear layer (to create a desired output size) make up the basic layers of a CNN. Import the image
###Code
import cv2
import matplotlib.pyplot as plt
%matplotlib inline
# TODO: Feel free to try out your own images here by changing img_path
# to a file path to another image on your computer!
img_path = 'data/udacity_sdc.png'
# load color image
bgr_img = cv2.imread(img_path)
# convert to grayscale
gray_img = cv2.cvtColor(bgr_img, cv2.COLOR_BGR2GRAY)
# normalize, rescale entries to lie in [0,1]
gray_img = gray_img.astype("float32")/255
# plot image
plt.imshow(gray_img, cmap='gray')
plt.show()
###Output
_____no_output_____
###Markdown
Define and visualize the filters
###Code
import numpy as np
## TODO: Feel free to modify the numbers here, to try out another filter!
# filter_vals contains the coefficients of a 4 x 4 filter.
filter_vals = np.array([[-1, 0, 0, 1], [-1, -1, 1, 1], [-1, -1, 1, 1], [-1, 0, 0, 1]])
print('Filter shape: ', filter_vals.shape)
# Defining four different filters,
# all of which are linear combinations of the `filter_vals` defined above
# define four filters
filter_1 = filter_vals
filter_2 = -filter_1
filter_3 = filter_1.T
filter_4 = -filter_3
# filters contains 4 configurations, one for each of the filters defined above:
# the left/right vertical edge filter, the top/bottom horizontal filter, and their negations
filters = np.array([filter_1, filter_2, filter_3, filter_4])
# For an example, print out the values of filter 1
print('Filter 1: \n', filter_1)
###Output
Filter 1:
[[-1 0 0 1]
[-1 -1 1 1]
[-1 -1 1 1]
[-1 0 0 1]]
###Markdown
Define convolutional and pooling layersYou've seen how to define a convolutional layer, next is a:* Pooling layerIn the next cell, we initialize a convolutional layer so that it contains all the created filters. Then add a maxpooling layer, [documented here](http://pytorch.org/docs/stable/_modules/torch/nn/modules/pooling.html), with a kernel size of (2x2) so you can see that the image resolution has been reduced after this step!A maxpooling layer reduces the x-y size of an input and only keeps the most *active* pixel values. Below is an example of a 2x2 pooling kernel, with a stride of 2, applied to a small patch of grayscale pixel values; reducing the x-y size of the patch by a factor of 2. Only the maximum pixel values in 2x2 remain in the new, pooled output.
###Code
import torch
import torch.nn as nn
import torch.nn.functional as F
# define a neural network with a convolutional layer with four filters
# AND a pooling layer of size (2, 2)
class Net(nn.Module):
def __init__(self, weight):
super(Net, self).__init__()
# initializes the weights of the convolutional layer to be the weights of the 4 defined filters
k_height, k_width = weight.shape[2:]
# assumes there are 4 grayscale filters
self.conv = nn.Conv2d(1, 4, kernel_size=(k_height, k_width), bias=False)
self.conv.weight = torch.nn.Parameter(weight)
# define a pooling layer
# nn.MaxPool2d(kernel_size, stride=None, padding=0, dilation=1, return_indices=False, ceil_mode=False)
self.pool = nn.MaxPool2d(2, 2)
def forward(self, x):
# calculates the output of a convolutional layer
# pre- and post-activation
conv_x = self.conv(x)
activated_x = F.relu(conv_x)
# applies pooling layer
pooled_x = self.pool(activated_x)
# returns all layers
return conv_x, activated_x, pooled_x
# instantiate the model and set the weights
weight = torch.from_numpy(filters).unsqueeze(1).type(torch.FloatTensor)
model = Net(weight)
# print out the layer in the network
print(model)
###Output
Net(
(conv): Conv2d(1, 4, kernel_size=(4, 4), stride=(1, 1), bias=False)
(pool): MaxPool2d(kernel_size=2, stride=2, padding=0, dilation=1, ceil_mode=False)
)
###Markdown
Visualize the output of each filterFirst, we'll define a helper function, `viz_layer` that takes in a specific layer and number of filters (optional argument), and displays the output of that layer once an image has been passed through.
###Code
# helper function for visualizing the output of a given layer
# default number of filters is 4
def viz_layer(layer, n_filters= 4):
fig = plt.figure(figsize=(20, 20))
for i in range(n_filters):
ax = fig.add_subplot(1, n_filters, i+1)
# grab layer outputs
ax.imshow(np.squeeze(layer[0,i].data.numpy()), cmap='gray')
ax.set_title('Output %s' % str(i+1))
###Output
_____no_output_____
###Markdown
Let's look at the output of a convolutional layer after a ReLU activation function is applied. ReLU activationA ReLU function turns all negative pixel values into 0's (black). See the equation pictured below for input pixel values, `x`.
###Code
# plot original image
plt.imshow(gray_img, cmap='gray')
# visualize all filters
fig = plt.figure(figsize=(12, 6))
fig.subplots_adjust(left=0, right=1.5, bottom=0.8, top=1, hspace=0.05, wspace=0.05)
for i in range(4):
ax = fig.add_subplot(1, 4, i+1, xticks=[], yticks=[])
ax.imshow(filters[i], cmap='gray')
ax.set_title('Filter %s' % str(i+1))
# convert the image into an input Tensor
gray_img_tensor = torch.from_numpy(gray_img).unsqueeze(0).unsqueeze(1)
# get all the layers
conv_layer, activated_layer, pooled_layer = model(gray_img_tensor)
# visualize the output of the activated conv layer
viz_layer(activated_layer)
###Output
_____no_output_____
###Markdown
Visualize the output of the pooling layerThen, take a look at the output of a pooling layer. The pooling layer takes as input the feature maps pictured above and reduces the dimensionality of those maps, by some pooling factor, by constructing a new, smaller image of only the maximum (brightest) values in a given kernel area.Take a look at the values on the x, y axes to see how the image has changed size.
###Code
# visualize the output of the pooling layer
viz_layer(pooled_layer)
###Output
_____no_output_____ |
openmm_simulation/combine_systems.ipynb | ###Markdown
Load protein system:
###Code
#Load up the CHARMM-GUI pdb, and create a ParmEd structure from it.
prot_pdb = PDBFile('./1xep-processed.pdb')
omm_forcefield = app.ForceField('amber14-all.xml', 'amber14/tip3p.xml')
prot_system = omm_forcefield.createSystem(prot_pdb.topology, rigidWater=False)
prot_structure = parmed.openmm.load_topology(prot_pdb.topology,
prot_system,
xyz=prot_pdb.positions)
###Output
_____no_output_____
###Markdown
Load drug system:
###Code
#Load up the parameterized drug system, and again make it into a parmed structure:
drug_system = XmlSerializer.deserialize(open('./drug_system.xml').read())
drug_pdbfile = PDBFile('./catechol_aligned.pdb')
drug_structure = parmed.openmm.load_topology(drug_pdbfile.topology,
drug_system,
xyz=drug_pdbfile.positions)
###Output
_____no_output_____
###Markdown
Combine:
###Code
#This is the biggest step, but it only takes about a second:
complex_structure = prot_structure + drug_structure
###Output
_____no_output_____
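###Markdown
A quick added sanity check, using only names defined above: the combined structure should contain every atom from both inputs.
###Code
# the combined structure should hold all protein plus all drug atoms
n_prot = len(prot_structure.atoms)
n_drug = len(drug_structure.atoms)
n_complex = len(complex_structure.atoms)
print(n_prot, n_drug, n_complex)
assert n_complex == n_prot + n_drug
###Output
_____no_output_____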
###Markdown
Turn back into an openmm system
###Code
#set periodic boundary conditions (you get these from the processed pdb file, 1xep-processed.pdb)
#64.871 64.871 64.871 60.00 60.00 90.00 P 1 1
#note parmed (and PDB files) use angstroms by default.
complex_structure.box = (64.871, 64.871, 64.871, 60, 60, 90)
#Turn into an OpenMM System object for simulations:
#These settings are fixed in the serialized system unless you re-run this script! Luckily they're pretty standard settings.
complex_system = complex_structure.createSystem(nonbondedMethod=PME,
nonbondedCutoff=0.9*nanometer,
constraints=HBonds,
rigidWater=True)
complex_structure.save('complex_plus_water.parm7')
###Output
_____no_output_____
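###Markdown
Before saving, the new system can be smoke-tested in a short OpenMM run. This is an added sketch, not part of the original workflow: it assumes `LangevinIntegrator`, `app.Simulation`, and the units (`kelvin`, `picosecond`) were imported in the notebook's setup cell, and the integrator settings are illustrative.
###Code
# quick smoke test: minimize and advance a few steps (illustrative settings)
integrator = LangevinIntegrator(300*kelvin, 1/picosecond, 0.002*picoseconds)
simulation = app.Simulation(complex_structure.topology, complex_system, integrator)
simulation.context.setPositions(complex_structure.positions)
simulation.minimizeEnergy(maxIterations=100)
simulation.step(10)
###Output
_____no_output_____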
###Markdown
Save output as PDB, PSF, and serialized openmm system
###Code
#Save output:
complex_structure.save('./complex_coords.pdb', overwrite=True)
#PSF files don't like having numbered atom types, because once they reach 10,000 VMD fails to parse them
#So just set them all to zero.
for a in complex_structure.atoms:
a.type = '0'
complex_structure.save('./complex_struct_.psf', overwrite=True)
with open('./complex_system_.xml', 'w') as f:
f.write(
XmlSerializer.serialize(
complex_system
)
)
###Output
_____no_output_____ |
Notebooks/RadarCOVID-Report/Daily/RadarCOVID-Report-2021-07-09.ipynb | ###Markdown
RadarCOVID-Report Data Extraction
###Code
import datetime
import json
import logging
import os
import shutil
import tempfile
import textwrap
import uuid
import matplotlib.pyplot as plt
import matplotlib.ticker
import numpy as np
import pandas as pd
import pycountry
import retry
import seaborn as sns
%matplotlib inline
current_working_directory = os.environ.get("PWD")
if current_working_directory:
os.chdir(current_working_directory)
sns.set()
matplotlib.rcParams["figure.figsize"] = (15, 6)
extraction_datetime = datetime.datetime.utcnow()
extraction_date = extraction_datetime.strftime("%Y-%m-%d")
extraction_previous_datetime = extraction_datetime - datetime.timedelta(days=1)
extraction_previous_date = extraction_previous_datetime.strftime("%Y-%m-%d")
extraction_date_with_hour = datetime.datetime.utcnow().strftime("%Y-%m-%d@%H")
current_hour = datetime.datetime.utcnow().hour
are_today_results_partial = current_hour != 23
###Output
_____no_output_____
###Markdown
Constants
###Code
from Modules.ExposureNotification import exposure_notification_io
spain_region_country_code = "ES"
germany_region_country_code = "DE"
default_backend_identifier = spain_region_country_code
backend_generation_days = 7 * 2
daily_summary_days = 7 * 4 * 3
daily_plot_days = 7 * 4
tek_dumps_load_limit = daily_summary_days + 1
###Output
_____no_output_____
###Markdown
Parameters
###Code
environment_backend_identifier = os.environ.get("RADARCOVID_REPORT__BACKEND_IDENTIFIER")
if environment_backend_identifier:
report_backend_identifier = environment_backend_identifier
else:
report_backend_identifier = default_backend_identifier
report_backend_identifier
environment_enable_multi_backend_download = \
os.environ.get("RADARCOVID_REPORT__ENABLE_MULTI_BACKEND_DOWNLOAD")
if environment_enable_multi_backend_download:
report_backend_identifiers = None
else:
report_backend_identifiers = [report_backend_identifier]
report_backend_identifiers
environment_invalid_shared_diagnoses_dates = \
os.environ.get("RADARCOVID_REPORT__INVALID_SHARED_DIAGNOSES_DATES")
if environment_invalid_shared_diagnoses_dates:
invalid_shared_diagnoses_dates = environment_invalid_shared_diagnoses_dates.split(",")
else:
invalid_shared_diagnoses_dates = []
invalid_shared_diagnoses_dates
###Output
_____no_output_____
###Markdown
COVID-19 Cases
###Code
report_backend_client = \
exposure_notification_io.get_backend_client_with_identifier(
backend_identifier=report_backend_identifier)
@retry.retry(tries=10, delay=10, backoff=1.1, jitter=(0, 10))
def download_cases_dataframe():
return pd.read_csv("https://raw.githubusercontent.com/owid/covid-19-data/master/public/data/owid-covid-data.csv")
confirmed_df_ = download_cases_dataframe()
confirmed_df_.iloc[0]
confirmed_df = confirmed_df_.copy()
confirmed_df = confirmed_df[["date", "new_cases", "iso_code"]]
confirmed_df.rename(
columns={
"date": "sample_date",
"iso_code": "country_code",
},
inplace=True)
def convert_iso_alpha_3_to_alpha_2(x):
try:
return pycountry.countries.get(alpha_3=x).alpha_2
except Exception as e:
logging.info(f"Error converting country ISO Alpha 3 code '{x}': {repr(e)}")
return None
confirmed_df["country_code"] = confirmed_df.country_code.apply(convert_iso_alpha_3_to_alpha_2)
confirmed_df.dropna(inplace=True)
confirmed_df["sample_date"] = pd.to_datetime(confirmed_df.sample_date, dayfirst=True)
confirmed_df["sample_date"] = confirmed_df.sample_date.dt.strftime("%Y-%m-%d")
confirmed_df.sort_values("sample_date", inplace=True)
confirmed_df.tail()
confirmed_days = pd.date_range(
start=confirmed_df.iloc[0].sample_date,
end=extraction_datetime)
confirmed_days_df = pd.DataFrame(data=confirmed_days, columns=["sample_date"])
confirmed_days_df["sample_date_string"] = \
confirmed_days_df.sample_date.dt.strftime("%Y-%m-%d")
confirmed_days_df.tail()
def sort_source_regions_for_display(source_regions: list) -> list:
if report_backend_identifier in source_regions:
source_regions = [report_backend_identifier] + \
list(sorted(set(source_regions).difference([report_backend_identifier])))
else:
source_regions = list(sorted(source_regions))
return source_regions
report_source_regions = report_backend_client.source_regions_for_date(
date=extraction_datetime.date())
report_source_regions = sort_source_regions_for_display(
source_regions=report_source_regions)
report_source_regions
def get_cases_dataframe(source_regions_for_date_function, columns_suffix=None):
source_regions_at_date_df = confirmed_days_df.copy()
source_regions_at_date_df["source_regions_at_date"] = \
source_regions_at_date_df.sample_date.apply(
lambda x: source_regions_for_date_function(date=x))
source_regions_at_date_df.sort_values("sample_date", inplace=True)
source_regions_at_date_df["_source_regions_group"] = source_regions_at_date_df. \
source_regions_at_date.apply(lambda x: ",".join(sort_source_regions_for_display(x)))
source_regions_at_date_df.tail()
#%%
source_regions_for_summary_df_ = \
source_regions_at_date_df[["sample_date", "_source_regions_group"]].copy()
source_regions_for_summary_df_.rename(columns={"_source_regions_group": "source_regions"}, inplace=True)
source_regions_for_summary_df_.tail()
#%%
confirmed_output_columns = ["sample_date", "new_cases", "covid_cases"]
confirmed_output_df = pd.DataFrame(columns=confirmed_output_columns)
for source_regions_group, source_regions_group_series in \
source_regions_at_date_df.groupby("_source_regions_group"):
source_regions_set = set(source_regions_group.split(","))
confirmed_source_regions_set_df = \
confirmed_df[confirmed_df.country_code.isin(source_regions_set)].copy()
confirmed_source_regions_group_df = \
confirmed_source_regions_set_df.groupby("sample_date").new_cases.sum() \
.reset_index().sort_values("sample_date")
confirmed_source_regions_group_df = \
confirmed_source_regions_group_df.merge(
confirmed_days_df[["sample_date_string"]].rename(
columns={"sample_date_string": "sample_date"}),
how="right")
confirmed_source_regions_group_df["new_cases"] = \
confirmed_source_regions_group_df["new_cases"].clip(lower=0)
confirmed_source_regions_group_df["covid_cases"] = \
confirmed_source_regions_group_df.new_cases.rolling(7, min_periods=0).mean().round()
confirmed_source_regions_group_df = \
confirmed_source_regions_group_df[confirmed_output_columns]
confirmed_source_regions_group_df = confirmed_source_regions_group_df.replace(0, np.nan)
confirmed_source_regions_group_df.fillna(method="ffill", inplace=True)
confirmed_source_regions_group_df = \
confirmed_source_regions_group_df[
confirmed_source_regions_group_df.sample_date.isin(
source_regions_group_series.sample_date_string)]
confirmed_output_df = confirmed_output_df.append(confirmed_source_regions_group_df)
result_df = confirmed_output_df.copy()
result_df.tail()
#%%
result_df.rename(columns={"sample_date": "sample_date_string"}, inplace=True)
result_df = confirmed_days_df[["sample_date_string"]].merge(result_df, how="left")
result_df.sort_values("sample_date_string", inplace=True)
result_df.fillna(method="ffill", inplace=True)
result_df.tail()
#%%
result_df[["new_cases", "covid_cases"]].plot()
if columns_suffix:
result_df.rename(
columns={
"new_cases": "new_cases_" + columns_suffix,
"covid_cases": "covid_cases_" + columns_suffix},
inplace=True)
return result_df, source_regions_for_summary_df_
confirmed_eu_df, source_regions_for_summary_df = get_cases_dataframe(
report_backend_client.source_regions_for_date)
confirmed_es_df, _ = get_cases_dataframe(
lambda date: [spain_region_country_code],
columns_suffix=spain_region_country_code.lower())
###Output
_____no_output_____
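###Markdown
The `covid_cases` column built above is just a 7-day rolling mean of `new_cases`. A self-contained illustration of that smoothing, with made-up numbers:
###Code
# illustrate the 7-day rolling-mean smoothing used for covid_cases above
demo_new_cases = pd.Series([10, 0, 30, 20, 0, 40, 50, 60])
print(demo_new_cases.rolling(7, min_periods=0).mean().round())
###Output
_____no_output_____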
###Markdown
Extract API TEKs
###Code
raw_zip_path_prefix = "Data/TEKs/Raw/"
base_backend_identifiers = [report_backend_identifier]
multi_backend_exposure_keys_df = \
exposure_notification_io.download_exposure_keys_from_backends(
backend_identifiers=report_backend_identifiers,
generation_days=backend_generation_days,
fail_on_error_backend_identifiers=base_backend_identifiers,
save_raw_zip_path_prefix=raw_zip_path_prefix)
multi_backend_exposure_keys_df["region"] = multi_backend_exposure_keys_df["backend_identifier"]
multi_backend_exposure_keys_df.rename(
columns={
"generation_datetime": "sample_datetime",
"generation_date_string": "sample_date_string",
},
inplace=True)
multi_backend_exposure_keys_df.head()
early_teks_df = multi_backend_exposure_keys_df[
multi_backend_exposure_keys_df.rolling_period < 144].copy()
early_teks_df["rolling_period_in_hours"] = early_teks_df.rolling_period / 6
early_teks_df[early_teks_df.sample_date_string != extraction_date] \
.rolling_period_in_hours.hist(bins=list(range(24)))
early_teks_df[early_teks_df.sample_date_string == extraction_date] \
.rolling_period_in_hours.hist(bins=list(range(24)))
multi_backend_exposure_keys_df = multi_backend_exposure_keys_df[[
"sample_date_string", "region", "key_data"]]
multi_backend_exposure_keys_df.head()
active_regions = \
multi_backend_exposure_keys_df.groupby("region").key_data.nunique().sort_values().index.unique().tolist()
active_regions
multi_backend_summary_df = multi_backend_exposure_keys_df.groupby(
["sample_date_string", "region"]).key_data.nunique().reset_index() \
.pivot(index="sample_date_string", columns="region") \
.sort_index(ascending=False)
multi_backend_summary_df.rename(
columns={"key_data": "shared_teks_by_generation_date"},
inplace=True)
multi_backend_summary_df.rename_axis("sample_date", inplace=True)
multi_backend_summary_df = multi_backend_summary_df.fillna(0).astype(int)
multi_backend_summary_df = multi_backend_summary_df.head(backend_generation_days)
multi_backend_summary_df.head()
def compute_keys_cross_sharing(x):
teks_x = x.key_data_x.item()
common_teks = set(teks_x).intersection(x.key_data_y.item())
common_teks_fraction = len(common_teks) / len(teks_x)
return pd.Series(dict(
common_teks=common_teks,
common_teks_fraction=common_teks_fraction,
))
multi_backend_exposure_keys_by_region_df = \
multi_backend_exposure_keys_df.groupby("region").key_data.unique().reset_index()
multi_backend_exposure_keys_by_region_df["_merge"] = True
multi_backend_exposure_keys_by_region_combination_df = \
multi_backend_exposure_keys_by_region_df.merge(
multi_backend_exposure_keys_by_region_df, on="_merge")
multi_backend_exposure_keys_by_region_combination_df.drop(
columns=["_merge"], inplace=True)
if multi_backend_exposure_keys_by_region_combination_df.region_x.nunique() > 1:
multi_backend_exposure_keys_by_region_combination_df = \
multi_backend_exposure_keys_by_region_combination_df[
multi_backend_exposure_keys_by_region_combination_df.region_x !=
multi_backend_exposure_keys_by_region_combination_df.region_y]
multi_backend_exposure_keys_cross_sharing_df = \
multi_backend_exposure_keys_by_region_combination_df \
.groupby(["region_x", "region_y"]) \
.apply(compute_keys_cross_sharing) \
.reset_index()
multi_backend_cross_sharing_summary_df = \
multi_backend_exposure_keys_cross_sharing_df.pivot_table(
values=["common_teks_fraction"],
columns="region_x",
index="region_y",
aggfunc=lambda x: x.item())
multi_backend_cross_sharing_summary_df
multi_backend_without_active_region_exposure_keys_df = \
multi_backend_exposure_keys_df[multi_backend_exposure_keys_df.region != report_backend_identifier]
multi_backend_without_active_region = \
multi_backend_without_active_region_exposure_keys_df.groupby("region").key_data.nunique().sort_values().index.unique().tolist()
multi_backend_without_active_region
exposure_keys_summary_df = multi_backend_exposure_keys_df[
multi_backend_exposure_keys_df.region == report_backend_identifier]
exposure_keys_summary_df.drop(columns=["region"], inplace=True)
exposure_keys_summary_df = \
exposure_keys_summary_df.groupby(["sample_date_string"]).key_data.nunique().to_frame()
exposure_keys_summary_df = \
exposure_keys_summary_df.reset_index().set_index("sample_date_string")
exposure_keys_summary_df.sort_index(ascending=False, inplace=True)
exposure_keys_summary_df.rename(columns={"key_data": "shared_teks_by_generation_date"}, inplace=True)
exposure_keys_summary_df.head()
###Output
/opt/hostedtoolcache/Python/3.8.10/x64/lib/python3.8/site-packages/pandas/core/frame.py:4110: SettingWithCopyWarning:
A value is trying to be set on a copy of a slice from a DataFrame
See the caveats in the documentation: http://pandas.pydata.org/pandas-docs/stable/user_guide/indexing.html#returning-a-view-versus-a-copy
return super().drop(
###Markdown
Dump API TEKs
###Code
tek_list_df = multi_backend_exposure_keys_df[
["sample_date_string", "region", "key_data"]].copy()
tek_list_df["key_data"] = tek_list_df["key_data"].apply(str)
tek_list_df.rename(columns={
"sample_date_string": "sample_date",
"key_data": "tek_list"}, inplace=True)
tek_list_df = tek_list_df.groupby(
["sample_date", "region"]).tek_list.unique().reset_index()
tek_list_df["extraction_date"] = extraction_date
tek_list_df["extraction_date_with_hour"] = extraction_date_with_hour
tek_list_path_prefix = "Data/TEKs/"
tek_list_current_path = tek_list_path_prefix + f"/Current/RadarCOVID-TEKs.json"
tek_list_daily_path = tek_list_path_prefix + f"Daily/RadarCOVID-TEKs-{extraction_date}.json"
tek_list_hourly_path = tek_list_path_prefix + f"Hourly/RadarCOVID-TEKs-{extraction_date_with_hour}.json"
for path in [tek_list_current_path, tek_list_daily_path, tek_list_hourly_path]:
os.makedirs(os.path.dirname(path), exist_ok=True)
tek_list_base_df = tek_list_df[tek_list_df.region == report_backend_identifier]
tek_list_base_df.drop(columns=["extraction_date", "extraction_date_with_hour"]).to_json(
tek_list_current_path,
lines=True, orient="records")
tek_list_base_df.drop(columns=["extraction_date_with_hour"]).to_json(
tek_list_daily_path,
lines=True, orient="records")
tek_list_base_df.to_json(
tek_list_hourly_path,
lines=True, orient="records")
tek_list_base_df.head()
###Output
_____no_output_____
###Markdown
Load TEK Dumps
###Code
import glob
def load_extracted_teks(mode, region=None, limit=None) -> pd.DataFrame:
extracted_teks_df = pd.DataFrame(columns=["region"])
file_paths = list(reversed(sorted(glob.glob(tek_list_path_prefix + mode + "/RadarCOVID-TEKs-*.json"))))
if limit:
file_paths = file_paths[:limit]
for file_path in file_paths:
logging.info(f"Loading TEKs from '{file_path}'...")
iteration_extracted_teks_df = pd.read_json(file_path, lines=True)
extracted_teks_df = extracted_teks_df.append(
iteration_extracted_teks_df, sort=False)
extracted_teks_df["region"] = \
extracted_teks_df.region.fillna(spain_region_country_code).copy()
if region:
extracted_teks_df = \
extracted_teks_df[extracted_teks_df.region == region]
return extracted_teks_df
daily_extracted_teks_df = load_extracted_teks(
mode="Daily",
region=report_backend_identifier,
limit=tek_dumps_load_limit)
daily_extracted_teks_df.head()
exposure_keys_summary_df_ = daily_extracted_teks_df \
.sort_values("extraction_date", ascending=False) \
.groupby("sample_date").tek_list.first() \
.to_frame()
exposure_keys_summary_df_.index.name = "sample_date_string"
exposure_keys_summary_df_["tek_list"] = \
exposure_keys_summary_df_.tek_list.apply(len)
exposure_keys_summary_df_ = exposure_keys_summary_df_ \
.rename(columns={"tek_list": "shared_teks_by_generation_date"}) \
.sort_index(ascending=False)
exposure_keys_summary_df = exposure_keys_summary_df_
exposure_keys_summary_df.head()
###Output
_____no_output_____
###Markdown
Daily New TEKs
###Code
tek_list_df = daily_extracted_teks_df.groupby("extraction_date").tek_list.apply(
lambda x: set(sum(x, []))).reset_index()
tek_list_df = tek_list_df.set_index("extraction_date").sort_index(ascending=True)
tek_list_df.head()
def compute_teks_by_generation_and_upload_date(date):
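    # tek_list holds Python sets, so .diff() applies "-" elementwise,
    # i.e. set difference: the TEKs first seen on each extraction date.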
day_new_teks_set_df = tek_list_df.copy().diff()
try:
day_new_teks_set = day_new_teks_set_df[
day_new_teks_set_df.index == date].tek_list.item()
except ValueError:
day_new_teks_set = None
if pd.isna(day_new_teks_set):
day_new_teks_set = set()
day_new_teks_df = daily_extracted_teks_df[
daily_extracted_teks_df.extraction_date == date].copy()
day_new_teks_df["shared_teks"] = \
day_new_teks_df.tek_list.apply(lambda x: set(x).intersection(day_new_teks_set))
day_new_teks_df["shared_teks"] = \
day_new_teks_df.shared_teks.apply(len)
day_new_teks_df["upload_date"] = date
day_new_teks_df.rename(columns={"sample_date": "generation_date"}, inplace=True)
day_new_teks_df = day_new_teks_df[
["upload_date", "generation_date", "shared_teks"]]
day_new_teks_df["generation_to_upload_days"] = \
(pd.to_datetime(day_new_teks_df.upload_date) -
pd.to_datetime(day_new_teks_df.generation_date)).dt.days
day_new_teks_df = day_new_teks_df[day_new_teks_df.shared_teks > 0]
return day_new_teks_df
shared_teks_generation_to_upload_df = pd.concat(
    [pd.DataFrame()] + [
        compute_teks_by_generation_and_upload_date(date=upload_date)
        for upload_date in daily_extracted_teks_df.extraction_date.unique()])
shared_teks_generation_to_upload_df \
.sort_values(["upload_date", "generation_date"], ascending=False, inplace=True)
shared_teks_generation_to_upload_df.tail()
today_new_teks_df = \
shared_teks_generation_to_upload_df[
shared_teks_generation_to_upload_df.upload_date == extraction_date].copy()
today_new_teks_df.tail()
if not today_new_teks_df.empty:
today_new_teks_df.set_index("generation_to_upload_days") \
.sort_index().shared_teks.plot.bar()
generation_to_upload_period_pivot_df = \
shared_teks_generation_to_upload_df[
["upload_date", "generation_to_upload_days", "shared_teks"]] \
.pivot(index="upload_date", columns="generation_to_upload_days") \
.sort_index(ascending=False).fillna(0).astype(int) \
.droplevel(level=0, axis=1)
generation_to_upload_period_pivot_df.head()
new_tek_df = tek_list_df.diff().tek_list.apply(
lambda x: len(x) if not pd.isna(x) else None).to_frame().reset_index()
new_tek_df.rename(columns={
"tek_list": "shared_teks_by_upload_date",
"extraction_date": "sample_date_string",}, inplace=True)
new_tek_df.tail()
shared_teks_uploaded_on_generation_date_df = shared_teks_generation_to_upload_df[
shared_teks_generation_to_upload_df.generation_to_upload_days == 0] \
[["upload_date", "shared_teks"]].rename(
columns={
"upload_date": "sample_date_string",
"shared_teks": "shared_teks_uploaded_on_generation_date",
})
shared_teks_uploaded_on_generation_date_df.head()
estimated_shared_diagnoses_df = shared_teks_generation_to_upload_df \
.groupby(["upload_date"]).shared_teks.max().reset_index() \
.sort_values(["upload_date"], ascending=False) \
.rename(columns={
"upload_date": "sample_date_string",
"shared_teks": "shared_diagnoses",
})
invalid_shared_diagnoses_dates_mask = \
estimated_shared_diagnoses_df.sample_date_string.isin(invalid_shared_diagnoses_dates)
estimated_shared_diagnoses_df.loc[invalid_shared_diagnoses_dates_mask, "shared_diagnoses"] = 0
estimated_shared_diagnoses_df.head()
###Output
_____no_output_____
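###Markdown
The new-TEK computation above relies on pandas applying the `-` operator elementwise to an object column, which for Python sets means set difference. A minimal sketch on toy data (not part of the pipeline):
###Code
import pandas as pd
teks_per_extraction = pd.Series([{"t1", "t2"}, {"t1", "t2", "t3"}])
# diff() subtracts the previous set from each one: NaN, then {"t3"}.
print(teks_per_extraction.diff())
###Output
_____no_output_____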
###Markdown
Hourly New TEKs
###Code
hourly_extracted_teks_df = load_extracted_teks(
mode="Hourly", region=report_backend_identifier, limit=25)
hourly_extracted_teks_df.head()
hourly_new_tek_count_df = hourly_extracted_teks_df \
.groupby("extraction_date_with_hour").tek_list. \
apply(lambda x: set(sum(x, []))).reset_index().copy()
hourly_new_tek_count_df = hourly_new_tek_count_df.set_index("extraction_date_with_hour") \
.sort_index(ascending=True)
hourly_new_tek_count_df["new_tek_list"] = hourly_new_tek_count_df.tek_list.diff()
hourly_new_tek_count_df["new_tek_count"] = hourly_new_tek_count_df.new_tek_list.apply(
lambda x: len(x) if not pd.isna(x) else 0)
hourly_new_tek_count_df.rename(columns={
"new_tek_count": "shared_teks_by_upload_date"}, inplace=True)
hourly_new_tek_count_df = hourly_new_tek_count_df.reset_index()[[
"extraction_date_with_hour", "shared_teks_by_upload_date"]]
hourly_new_tek_count_df.head()
hourly_summary_df = hourly_new_tek_count_df.copy()
hourly_summary_df.set_index("extraction_date_with_hour", inplace=True)
hourly_summary_df = hourly_summary_df.fillna(0).astype(int).reset_index()
hourly_summary_df["datetime_utc"] = pd.to_datetime(
hourly_summary_df.extraction_date_with_hour, format="%Y-%m-%d@%H")
hourly_summary_df.set_index("datetime_utc", inplace=True)
hourly_summary_df = hourly_summary_df.tail(-1)
hourly_summary_df.head()
###Output
_____no_output_____
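###Markdown
The hourly timestamps embed the hour after an `@` separator; an explicit `format` string lets `pd.to_datetime` parse them, as done above:
###Code
import pandas as pd
# Parses to Timestamp("2021-07-09 23:00:00").
print(pd.to_datetime("2021-07-09@23", format="%Y-%m-%d@%H"))
###Output
_____no_output_____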
###Markdown
Official Statistics
###Code
import requests
official_stats_response = requests.get("https://radarcovid.covid19.gob.es/kpi/statistics/basics")
official_stats_response.raise_for_status()
official_stats_df_ = pd.json_normalize(official_stats_response.json())
official_stats_df = official_stats_df_.copy()
official_stats_df["date"] = pd.to_datetime(official_stats_df["date"], dayfirst=True)
official_stats_df.head()
official_stats_column_map = {
"date": "sample_date",
"applicationsDownloads.totalAcummulated": "app_downloads_es_accumulated",
"communicatedContagions.totalAcummulated": "shared_diagnoses_es_accumulated",
}
accumulated_suffix = "_accumulated"
accumulated_values_columns = \
list(filter(lambda x: x.endswith(accumulated_suffix), official_stats_column_map.values()))
interpolated_values_columns = \
list(map(lambda x: x[:-len(accumulated_suffix)], accumulated_values_columns))
official_stats_df = \
official_stats_df[official_stats_column_map.keys()] \
.rename(columns=official_stats_column_map)
official_stats_df["extraction_date"] = extraction_date
official_stats_df.head()
official_stats_path = "Data/Statistics/Current/RadarCOVID-Statistics.json"
previous_official_stats_df = pd.read_json(official_stats_path, orient="records", lines=True)
previous_official_stats_df["sample_date"] = pd.to_datetime(previous_official_stats_df["sample_date"], dayfirst=True)
official_stats_df = pd.concat([official_stats_df, previous_official_stats_df])
official_stats_df.head()
official_stats_df = official_stats_df[~(official_stats_df.shared_diagnoses_es_accumulated == 0)]
official_stats_df.sort_values("extraction_date", ascending=False, inplace=True)
official_stats_df.drop_duplicates(subset=["sample_date"], keep="first", inplace=True)
official_stats_df.head()
official_stats_stored_df = official_stats_df.copy()
official_stats_stored_df["sample_date"] = official_stats_stored_df.sample_date.dt.strftime("%Y-%m-%d")
official_stats_stored_df.to_json(official_stats_path, orient="records", lines=True)
official_stats_df.drop(columns=["extraction_date"], inplace=True)
official_stats_df = confirmed_days_df.merge(official_stats_df, how="left")
official_stats_df.sort_values("sample_date", ascending=False, inplace=True)
official_stats_df.head()
official_stats_df[accumulated_values_columns] = \
official_stats_df[accumulated_values_columns] \
.astype(float).interpolate(limit_area="inside")
official_stats_df[interpolated_values_columns] = \
official_stats_df[accumulated_values_columns].diff(periods=-1)
official_stats_df.drop(columns="sample_date", inplace=True)
official_stats_df.head()
###Output
_____no_output_____
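###Markdown
Once aligned to the full range of confirmed days, the accumulated counters can have gaps; the cell above interpolates them and then differences the newest-first series to recover daily values. A toy illustration of the same two steps (values are made up):
###Code
import pandas as pd
accumulated = pd.Series([100.0, None, 60.0, 30.0])  # newest first, one gap
accumulated = accumulated.interpolate(limit_area="inside")  # 100, 80, 60, 30
# diff(periods=-1) subtracts the next (older) value: 20, 20, 30, NaN.
print(accumulated.diff(periods=-1))
###Output
_____no_output_____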
###Markdown
Data Merge
###Code
result_summary_df = exposure_keys_summary_df.merge(
new_tek_df, on=["sample_date_string"], how="outer")
result_summary_df.head()
result_summary_df = result_summary_df.merge(
shared_teks_uploaded_on_generation_date_df, on=["sample_date_string"], how="outer")
result_summary_df.head()
result_summary_df = result_summary_df.merge(
estimated_shared_diagnoses_df, on=["sample_date_string"], how="outer")
result_summary_df.head()
result_summary_df = result_summary_df.merge(
official_stats_df, on=["sample_date_string"], how="outer")
result_summary_df.head()
result_summary_df = confirmed_eu_df.tail(daily_summary_days).merge(
result_summary_df, on=["sample_date_string"], how="left")
result_summary_df.head()
result_summary_df = confirmed_es_df.tail(daily_summary_days).merge(
result_summary_df, on=["sample_date_string"], how="left")
result_summary_df.head()
result_summary_df["sample_date"] = pd.to_datetime(result_summary_df.sample_date_string)
result_summary_df = result_summary_df.merge(source_regions_for_summary_df, how="left")
result_summary_df.set_index(["sample_date", "source_regions"], inplace=True)
result_summary_df.drop(columns=["sample_date_string"], inplace=True)
result_summary_df.sort_index(ascending=False, inplace=True)
result_summary_df.head()
with pd.option_context("mode.use_inf_as_na", True):
result_summary_df = result_summary_df.fillna(0).astype(int)
result_summary_df["teks_per_shared_diagnosis"] = \
(result_summary_df.shared_teks_by_upload_date / result_summary_df.shared_diagnoses).fillna(0)
result_summary_df["shared_diagnoses_per_covid_case"] = \
(result_summary_df.shared_diagnoses / result_summary_df.covid_cases).fillna(0)
result_summary_df["shared_diagnoses_per_covid_case_es"] = \
(result_summary_df.shared_diagnoses_es / result_summary_df.covid_cases_es).fillna(0)
result_summary_df.head(daily_plot_days)
def compute_aggregated_results_summary(days) -> pd.DataFrame:
aggregated_result_summary_df = result_summary_df.copy()
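    # Mask out case counts on days without shared diagnoses so those days
    # do not enter the denominators of the rolling usage-ratio estimates.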
aggregated_result_summary_df["covid_cases_for_ratio"] = \
aggregated_result_summary_df.covid_cases.mask(
aggregated_result_summary_df.shared_diagnoses == 0, 0)
aggregated_result_summary_df["covid_cases_for_ratio_es"] = \
aggregated_result_summary_df.covid_cases_es.mask(
aggregated_result_summary_df.shared_diagnoses_es == 0, 0)
aggregated_result_summary_df = aggregated_result_summary_df \
.sort_index(ascending=True).fillna(0).rolling(days).agg({
"covid_cases": "sum",
"covid_cases_es": "sum",
"covid_cases_for_ratio": "sum",
"covid_cases_for_ratio_es": "sum",
"shared_teks_by_generation_date": "sum",
"shared_teks_by_upload_date": "sum",
"shared_diagnoses": "sum",
"shared_diagnoses_es": "sum",
}).sort_index(ascending=False)
with pd.option_context("mode.use_inf_as_na", True):
aggregated_result_summary_df = aggregated_result_summary_df.fillna(0).astype(int)
aggregated_result_summary_df["teks_per_shared_diagnosis"] = \
(aggregated_result_summary_df.shared_teks_by_upload_date /
aggregated_result_summary_df.covid_cases_for_ratio).fillna(0)
aggregated_result_summary_df["shared_diagnoses_per_covid_case"] = \
(aggregated_result_summary_df.shared_diagnoses /
aggregated_result_summary_df.covid_cases_for_ratio).fillna(0)
aggregated_result_summary_df["shared_diagnoses_per_covid_case_es"] = \
(aggregated_result_summary_df.shared_diagnoses_es /
aggregated_result_summary_df.covid_cases_for_ratio_es).fillna(0)
return aggregated_result_summary_df
aggregated_result_with_7_days_window_summary_df = compute_aggregated_results_summary(days=7)
aggregated_result_with_7_days_window_summary_df.head()
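# Record 0 is the newest (typically still partial) day; record 1 is the
# latest complete rolling window.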
last_7_days_summary = aggregated_result_with_7_days_window_summary_df.to_dict(orient="records")[1]
last_7_days_summary
aggregated_result_with_14_days_window_summary_df = compute_aggregated_results_summary(days=14)
last_14_days_summary = aggregated_result_with_14_days_window_summary_df.to_dict(orient="records")[1]
last_14_days_summary
###Output
_____no_output_____
###Markdown
Report Results
###Code
display_column_name_mapping = {
"sample_date": "Sample\u00A0Date\u00A0(UTC)",
"source_regions": "Source Countries",
"datetime_utc": "Timestamp (UTC)",
"upload_date": "Upload Date (UTC)",
"generation_to_upload_days": "Generation to Upload Period in Days",
"region": "Backend",
"region_x": "Backend\u00A0(A)",
"region_y": "Backend\u00A0(B)",
"common_teks": "Common TEKs Shared Between Backends",
"common_teks_fraction": "Fraction of TEKs in Backend (A) Available in Backend (B)",
"covid_cases": "COVID-19 Cases (Source Countries)",
"shared_teks_by_generation_date": "Shared TEKs by Generation Date (Source Countries)",
"shared_teks_by_upload_date": "Shared TEKs by Upload Date (Source Countries)",
"shared_teks_uploaded_on_generation_date": "Shared TEKs Uploaded on Generation Date (Source Countries)",
"shared_diagnoses": "Shared Diagnoses (Source Countries – Estimation)",
"teks_per_shared_diagnosis": "TEKs Uploaded per Shared Diagnosis (Source Countries)",
"shared_diagnoses_per_covid_case": "Usage Ratio (Source Countries)",
"covid_cases_es": "COVID-19 Cases (Spain)",
"app_downloads_es": "App Downloads (Spain – Official)",
"shared_diagnoses_es": "Shared Diagnoses (Spain – Official)",
"shared_diagnoses_per_covid_case_es": "Usage Ratio (Spain)",
}
summary_columns = [
"covid_cases",
"shared_teks_by_generation_date",
"shared_teks_by_upload_date",
"shared_teks_uploaded_on_generation_date",
"shared_diagnoses",
"teks_per_shared_diagnosis",
"shared_diagnoses_per_covid_case",
"covid_cases_es",
"app_downloads_es",
"shared_diagnoses_es",
"shared_diagnoses_per_covid_case_es",
]
summary_percentage_columns= [
"shared_diagnoses_per_covid_case_es",
"shared_diagnoses_per_covid_case",
]
###Output
_____no_output_____
###Markdown
Daily Summary Table
###Code
result_summary_df_ = result_summary_df.copy()
result_summary_df = result_summary_df[summary_columns]
result_summary_with_display_names_df = result_summary_df \
.rename_axis(index=display_column_name_mapping) \
.rename(columns=display_column_name_mapping)
result_summary_with_display_names_df
###Output
_____no_output_____
###Markdown
Daily Summary Plots
###Code
result_plot_summary_df = result_summary_df.head(daily_plot_days)[summary_columns] \
.droplevel(level=["source_regions"]) \
.rename_axis(index=display_column_name_mapping) \
.rename(columns=display_column_name_mapping)
summary_ax_list = result_plot_summary_df.sort_index(ascending=True).plot.bar(
title=f"Daily Summary",
rot=45, subplots=True, figsize=(15, 30), legend=False)
ax_ = summary_ax_list[0]
ax_.get_figure().tight_layout()
ax_.get_figure().subplots_adjust(top=0.95)
_ = ax_.set_xticklabels(sorted(result_plot_summary_df.index.strftime("%Y-%m-%d").tolist()))
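# Render the usage-ratio subplots as percentages (values are fractions of 1).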
for percentage_column in summary_percentage_columns:
percentage_column_index = summary_columns.index(percentage_column)
summary_ax_list[percentage_column_index].yaxis \
.set_major_formatter(matplotlib.ticker.PercentFormatter(1.0))
###Output
/opt/hostedtoolcache/Python/3.8.10/x64/lib/python3.8/site-packages/pandas/plotting/_matplotlib/tools.py:307: MatplotlibDeprecationWarning:
The rowNum attribute was deprecated in Matplotlib 3.2 and will be removed two minor releases later. Use ax.get_subplotspec().rowspan.start instead.
layout[ax.rowNum, ax.colNum] = ax.get_visible()
/opt/hostedtoolcache/Python/3.8.10/x64/lib/python3.8/site-packages/pandas/plotting/_matplotlib/tools.py:307: MatplotlibDeprecationWarning:
The colNum attribute was deprecated in Matplotlib 3.2 and will be removed two minor releases later. Use ax.get_subplotspec().colspan.start instead.
layout[ax.rowNum, ax.colNum] = ax.get_visible()
/opt/hostedtoolcache/Python/3.8.10/x64/lib/python3.8/site-packages/pandas/plotting/_matplotlib/tools.py:313: MatplotlibDeprecationWarning:
The rowNum attribute was deprecated in Matplotlib 3.2 and will be removed two minor releases later. Use ax.get_subplotspec().rowspan.start instead.
if not layout[ax.rowNum + 1, ax.colNum]:
/opt/hostedtoolcache/Python/3.8.10/x64/lib/python3.8/site-packages/pandas/plotting/_matplotlib/tools.py:313: MatplotlibDeprecationWarning:
The colNum attribute was deprecated in Matplotlib 3.2 and will be removed two minor releases later. Use ax.get_subplotspec().colspan.start instead.
if not layout[ax.rowNum + 1, ax.colNum]:
###Markdown
Daily Generation to Upload Period Table
###Code
display_generation_to_upload_period_pivot_df = \
generation_to_upload_period_pivot_df \
.head(backend_generation_days)
display_generation_to_upload_period_pivot_df \
.head(backend_generation_days) \
.rename_axis(columns=display_column_name_mapping) \
.rename_axis(index=display_column_name_mapping)
fig, generation_to_upload_period_pivot_table_ax = plt.subplots(
figsize=(12, 1 + 0.6 * len(display_generation_to_upload_period_pivot_df)))
generation_to_upload_period_pivot_table_ax.set_title(
"Shared TEKs Generation to Upload Period Table")
sns.heatmap(
data=display_generation_to_upload_period_pivot_df
.rename_axis(columns=display_column_name_mapping)
.rename_axis(index=display_column_name_mapping),
fmt=".0f",
annot=True,
ax=generation_to_upload_period_pivot_table_ax)
generation_to_upload_period_pivot_table_ax.get_figure().tight_layout()
###Output
_____no_output_____
###Markdown
Hourly Summary Plots
###Code
hourly_summary_ax_list = hourly_summary_df \
.rename_axis(index=display_column_name_mapping) \
.rename(columns=display_column_name_mapping) \
.plot.bar(
title=f"Last 24h Summary",
rot=45, subplots=True, legend=False)
ax_ = hourly_summary_ax_list[-1]
ax_.get_figure().tight_layout()
ax_.get_figure().subplots_adjust(top=0.9)
_ = ax_.set_xticklabels(sorted(hourly_summary_df.index.strftime("%Y-%m-%d@%H").tolist()))
###Output
_____no_output_____
###Markdown
Publish Results
###Code
github_repository = os.environ.get("GITHUB_REPOSITORY")
if github_repository is None:
github_repository = "pvieito/Radar-STATS"
github_project_base_url = "https://github.com/" + github_repository
display_formatters = {
display_column_name_mapping["teks_per_shared_diagnosis"]: lambda x: f"{x:.2f}" if x != 0 else "",
display_column_name_mapping["shared_diagnoses_per_covid_case"]: lambda x: f"{x:.2%}" if x != 0 else "",
display_column_name_mapping["shared_diagnoses_per_covid_case_es"]: lambda x: f"{x:.2%}" if x != 0 else "",
}
general_columns = \
list(filter(lambda x: x not in display_formatters, display_column_name_mapping.values()))
general_formatter = lambda x: f"{x}" if x != 0 else ""
display_formatters.update(dict(map(lambda x: (x, general_formatter), general_columns)))
daily_summary_table_html = result_summary_with_display_names_df \
.head(daily_plot_days) \
.rename_axis(index=display_column_name_mapping) \
.rename(columns=display_column_name_mapping) \
.to_html(formatters=display_formatters)
multi_backend_summary_table_html = multi_backend_summary_df \
.head(daily_plot_days) \
.rename_axis(columns=display_column_name_mapping) \
.rename(columns=display_column_name_mapping) \
.rename_axis(index=display_column_name_mapping) \
.to_html(formatters=display_formatters)
def format_multi_backend_cross_sharing_fraction(x):
if pd.isna(x):
return "-"
elif round(x * 100, 1) == 0:
return ""
else:
return f"{x:.1%}"
multi_backend_cross_sharing_summary_table_html = multi_backend_cross_sharing_summary_df \
.rename_axis(columns=display_column_name_mapping) \
.rename(columns=display_column_name_mapping) \
.rename_axis(index=display_column_name_mapping) \
.to_html(
classes="table-center",
formatters=display_formatters,
float_format=format_multi_backend_cross_sharing_fraction)
multi_backend_cross_sharing_summary_table_html = \
multi_backend_cross_sharing_summary_table_html \
.replace("<tr>","<tr style=\"text-align: center;\">")
extraction_date_result_summary_df = \
result_summary_df[result_summary_df.index.get_level_values("sample_date") == extraction_date]
extraction_date_result_hourly_summary_df = \
hourly_summary_df[hourly_summary_df.extraction_date_with_hour == extraction_date_with_hour]
covid_cases = \
extraction_date_result_summary_df.covid_cases.item()
shared_teks_by_generation_date = \
extraction_date_result_summary_df.shared_teks_by_generation_date.item()
shared_teks_by_upload_date = \
extraction_date_result_summary_df.shared_teks_by_upload_date.item()
shared_diagnoses = \
extraction_date_result_summary_df.shared_diagnoses.item()
teks_per_shared_diagnosis = \
extraction_date_result_summary_df.teks_per_shared_diagnosis.item()
shared_diagnoses_per_covid_case = \
extraction_date_result_summary_df.shared_diagnoses_per_covid_case.item()
shared_teks_by_upload_date_last_hour = \
extraction_date_result_hourly_summary_df.shared_teks_by_upload_date.sum().astype(int)
display_source_regions = ", ".join(report_source_regions)
if len(report_source_regions) == 1:
display_brief_source_regions = report_source_regions[0]
else:
display_brief_source_regions = f"{len(report_source_regions)} 🇪🇺"
def get_temporary_image_path() -> str:
return os.path.join(tempfile.gettempdir(), str(uuid.uuid4()) + ".png")
def save_temporary_plot_image(ax):
if isinstance(ax, np.ndarray):
ax = ax[0]
media_path = get_temporary_image_path()
ax.get_figure().savefig(media_path)
return media_path
def save_temporary_dataframe_image(df):
import dataframe_image as dfi
df = df.copy()
df_styler = df.style.format(display_formatters)
media_path = get_temporary_image_path()
dfi.export(df_styler, media_path)
return media_path
summary_plots_image_path = save_temporary_plot_image(
ax=summary_ax_list)
summary_table_image_path = save_temporary_dataframe_image(
df=result_summary_with_display_names_df)
hourly_summary_plots_image_path = save_temporary_plot_image(
ax=hourly_summary_ax_list)
multi_backend_summary_table_image_path = save_temporary_dataframe_image(
df=multi_backend_summary_df)
generation_to_upload_period_pivot_table_image_path = save_temporary_plot_image(
ax=generation_to_upload_period_pivot_table_ax)
###Output
[0709/230944.500629:WARNING:headless_browser_main_parts.cc(106)] Cannot create Pref Service with no user data dir.
[0709/230944.552279:ERROR:gpu_init.cc(440)] Passthrough is not supported, GL is swiftshader
###Markdown
Save Results
###Code
report_resources_path_prefix = "Data/Resources/Current/RadarCOVID-Report-"
result_summary_df.to_csv(
report_resources_path_prefix + "Summary-Table.csv")
result_summary_df.to_html(
report_resources_path_prefix + "Summary-Table.html")
hourly_summary_df.to_csv(
report_resources_path_prefix + "Hourly-Summary-Table.csv")
multi_backend_summary_df.to_csv(
report_resources_path_prefix + "Multi-Backend-Summary-Table.csv")
multi_backend_cross_sharing_summary_df.to_csv(
report_resources_path_prefix + "Multi-Backend-Cross-Sharing-Summary-Table.csv")
generation_to_upload_period_pivot_df.to_csv(
report_resources_path_prefix + "Generation-Upload-Period-Table.csv")
_ = shutil.copyfile(
summary_plots_image_path,
report_resources_path_prefix + "Summary-Plots.png")
_ = shutil.copyfile(
summary_table_image_path,
report_resources_path_prefix + "Summary-Table.png")
_ = shutil.copyfile(
hourly_summary_plots_image_path,
report_resources_path_prefix + "Hourly-Summary-Plots.png")
_ = shutil.copyfile(
multi_backend_summary_table_image_path,
report_resources_path_prefix + "Multi-Backend-Summary-Table.png")
_ = shutil.copyfile(
generation_to_upload_period_pivot_table_image_path,
report_resources_path_prefix + "Generation-Upload-Period-Table.png")
###Output
_____no_output_____
###Markdown
Publish Results as JSON
###Code
def generate_summary_api_results(df: pd.DataFrame) -> list:
api_df = df.reset_index().copy()
api_df["sample_date_string"] = \
api_df["sample_date"].dt.strftime("%Y-%m-%d")
api_df["source_regions"] = \
api_df["source_regions"].apply(lambda x: x.split(","))
return api_df.to_dict(orient="records")
summary_api_results = \
generate_summary_api_results(df=result_summary_df)
today_summary_api_results = \
generate_summary_api_results(df=extraction_date_result_summary_df)[0]
summary_results = dict(
backend_identifier=report_backend_identifier,
source_regions=report_source_regions,
extraction_datetime=extraction_datetime,
extraction_date=extraction_date,
extraction_date_with_hour=extraction_date_with_hour,
last_hour=dict(
shared_teks_by_upload_date=shared_teks_by_upload_date_last_hour,
shared_diagnoses=0,
),
today=today_summary_api_results,
last_7_days=last_7_days_summary,
last_14_days=last_14_days_summary,
daily_results=summary_api_results)
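# Round-trip through pandas JSON serialization so numpy scalars and
# timestamps inside the nested dict become plain JSON-compatible types.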
summary_results = \
json.loads(pd.Series([summary_results]).to_json(orient="records"))[0]
with open(report_resources_path_prefix + "Summary-Results.json", "w") as f:
json.dump(summary_results, f, indent=4)
###Output
_____no_output_____
###Markdown
Publish on README
###Code
with open("Data/Templates/README.md", "r") as f:
readme_contents = f.read()
readme_contents = readme_contents.format(
extraction_date_with_hour=extraction_date_with_hour,
github_project_base_url=github_project_base_url,
daily_summary_table_html=daily_summary_table_html,
multi_backend_summary_table_html=multi_backend_summary_table_html,
multi_backend_cross_sharing_summary_table_html=multi_backend_cross_sharing_summary_table_html,
display_source_regions=display_source_regions)
with open("README.md", "w") as f:
f.write(readme_contents)
###Output
_____no_output_____
###Markdown
Publish on Twitter
###Code
enable_share_to_twitter = os.environ.get("RADARCOVID_REPORT__ENABLE_PUBLISH_ON_TWITTER")
github_event_name = os.environ.get("GITHUB_EVENT_NAME")
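# Tweet only from scheduled runs, and only when there are fresh TEK uploads
# in the last hour or the day's results are already final.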
if enable_share_to_twitter and github_event_name == "schedule" and \
(shared_teks_by_upload_date_last_hour or not are_today_results_partial):
import tweepy
twitter_api_auth_keys = os.environ["RADARCOVID_REPORT__TWITTER_API_AUTH_KEYS"]
twitter_api_auth_keys = twitter_api_auth_keys.split(":")
auth = tweepy.OAuthHandler(twitter_api_auth_keys[0], twitter_api_auth_keys[1])
auth.set_access_token(twitter_api_auth_keys[2], twitter_api_auth_keys[3])
api = tweepy.API(auth)
summary_plots_media = api.media_upload(summary_plots_image_path)
summary_table_media = api.media_upload(summary_table_image_path)
generation_to_upload_period_pivot_table_image_media = api.media_upload(generation_to_upload_period_pivot_table_image_path)
media_ids = [
summary_plots_media.media_id,
summary_table_media.media_id,
generation_to_upload_period_pivot_table_image_media.media_id,
]
if are_today_results_partial:
today_addendum = " (Partial)"
else:
today_addendum = ""
def format_shared_diagnoses_per_covid_case(value) -> str:
if value == 0:
return "–"
return f"≤{value:.2%}"
display_shared_diagnoses_per_covid_case = \
format_shared_diagnoses_per_covid_case(value=shared_diagnoses_per_covid_case)
display_last_14_days_shared_diagnoses_per_covid_case = \
format_shared_diagnoses_per_covid_case(value=last_14_days_summary["shared_diagnoses_per_covid_case"])
display_last_14_days_shared_diagnoses_per_covid_case_es = \
format_shared_diagnoses_per_covid_case(value=last_14_days_summary["shared_diagnoses_per_covid_case_es"])
status = textwrap.dedent(f"""
#RadarCOVID – {extraction_date_with_hour}
Today{today_addendum}:
- Uploaded TEKs: {shared_teks_by_upload_date:.0f} ({shared_teks_by_upload_date_last_hour:+d} last hour)
- Shared Diagnoses: ≤{shared_diagnoses:.0f}
- Usage Ratio: {display_shared_diagnoses_per_covid_case}
Last 14 Days:
- Usage Ratio (Estimation): {display_last_14_days_shared_diagnoses_per_covid_case}
- Usage Ratio (Official): {display_last_14_days_shared_diagnoses_per_covid_case_es}
Info: {github_project_base_url}#documentation
""")
status = status.encode(encoding="utf-8")
api.update_status(status=status, media_ids=media_ids)
###Output
_____no_output_____ |
01_Getting_&_Knowing_Your_Data/World Food Facts/Exercises_with_solutions.ipynb | ###Markdown
Ex1 - Getting and knowing your Data. Check out the [World Food Facts Exercises Video Tutorial](https://youtu.be/_jCSK4cMcVw) to watch a data scientist go through the exercises. Step 1. Go to https://www.kaggle.com/openfoodfacts/world-food-facts/data Step 2. Download the dataset to your computer and unzip it.
###Code
import pandas as pd
import numpy as np
###Output
_____no_output_____
###Markdown
Step 3. Use the tsv file and assign it to a dataframe called food
###Code
food = pd.read_csv('~/Desktop/en.openfoodfacts.org.products.tsv', sep='\t')
###Output
//anaconda/lib/python2.7/site-packages/IPython/core/interactiveshell.py:2717: DtypeWarning: Columns (0,3,5,19,20,24,25,26,27,28,36,37,38,39,48) have mixed types. Specify dtype option on import or set low_memory=False.
interactivity=interactivity, compiler=compiler, result=result)
###Markdown
Step 4. See the first 5 entries
###Code
food.head()
###Output
_____no_output_____
###Markdown
Step 5. What is the number of observations in the dataset?
###Code
food.shape #will give you both (observations/rows, columns)
food.shape[0] #will give you only the observations/rows number
###Output
_____no_output_____
###Markdown
Step 6. What is the number of columns in the dataset?
###Code
print(food.shape) #will give you both (observations/rows, columns)
print(food.shape[1]) #will give you only the columns number
#OR
food.info() #Columns: 163 entries
###Output
(356027, 163)
163
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 356027 entries, 0 to 356026
Columns: 163 entries, code to water-hardness_100g
dtypes: float64(107), object(56)
memory usage: 442.8+ MB
###Markdown
Step 7. Print the name of all the columns.
###Code
food.columns
###Output
_____no_output_____
###Markdown
Step 8. What is the name of the 105th column?
###Code
food.columns[104]
###Output
_____no_output_____
###Markdown
Step 9. What is the type of the observations of the 105th column?
###Code
food.dtypes['-glucose_100g']
###Output
_____no_output_____
###Markdown
Step 10. How is the dataset indexed?
###Code
food.index
###Output
_____no_output_____
###Markdown
Step 11. What is the product name of the 19th observation?
###Code
food.values[18][7]
###Output
_____no_output_____
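###Markdown
Positional lookups like `food.values[18][7]` work, but label-based access reads better and is less brittle. A sketch assuming the 8th column is `product_name` (as in this Open Food Facts dump):
###Code
food.iloc[18, 7]               # purely positional: row 19, column 8
food.loc[18, "product_name"]   # same cell, addressed by label
###Output
_____no_output_____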
###Markdown
Ex1 - Getting and knowing your DataCheck out [World Food Facts Exercises Video Tutorial](https://youtu.be/_jCSK4cMcVw) to watch a data scientist go through the exercises Step 1. Go to https://www.kaggle.com/openfoodfacts/world-food-facts/data Step 2. Download the dataset to your computer and unzip it.
###Code
import pandas as pd
import numpy as np
###Output
_____no_output_____
###Markdown
Step 3. Use the tsv file and assign it to a dataframe called food
###Code
food = pd.read_csv('~/Desktop/en.openfoodfacts.org.products.tsv', sep='\t')
###Output
//anaconda/lib/python2.7/site-packages/IPython/core/interactiveshell.py:2717: DtypeWarning: Columns (0,3,5,19,20,24,25,26,27,28,36,37,38,39,48) have mixed types. Specify dtype option on import or set low_memory=False.
interactivity=interactivity, compiler=compiler, result=result)
###Markdown
Step 4. See the first 5 entries
###Code
food.head()
###Output
_____no_output_____
###Markdown
Step 5. What is the number of observations in the dataset?
###Code
food.shape #will give you both (observations/rows, columns)
food.shape[0] #will give you only the observations/rows number
###Output
_____no_output_____
###Markdown
Step 6. What is the number of columns in the dataset?
###Code
print(food.shape) #will give you both (observations/rows, columns)
print(food.shape[1]) #will give you only the columns number
#OR
food.info() #Columns: 163 entries
###Output
(356027, 163)
163
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 356027 entries, 0 to 356026
Columns: 163 entries, code to water-hardness_100g
dtypes: float64(107), object(56)
memory usage: 442.8+ MB
###Markdown
Step 7. Print the name of all the columns.
###Code
food.columns
###Output
_____no_output_____
###Markdown
Step 8. What is the name of 105th column?
###Code
food.columns[104]
###Output
_____no_output_____
###Markdown
Step 9. What is the type of the observations of the 105th column?
###Code
food.dtypes['-glucose_100g']
###Output
_____no_output_____
###Markdown
Step 10. How is the dataset indexed?
###Code
food.index
###Output
_____no_output_____
###Markdown
Step 11. What is the product name of the 19th observation?
###Code
food.values[18][7]
###Output
_____no_output_____
_____no_output_____ |
Data wrangling .ipynb | ###Markdown
**Space X Falcon 9 First Stage Landing Prediction** Lab 2: Data wrangling Estimated time needed: **60** minutes In this lab, we will perform some Exploratory Data Analysis (EDA) to find patterns in the data and determine what the label for training supervised models should be. In the data set, there are several different cases where the booster did not land successfully. Sometimes a landing was attempted but failed due to an accident. For example, True Ocean means the mission outcome was a successful landing in a specific region of the ocean, while False Ocean means an unsuccessful landing in a specific region of the ocean. True RTLS means a successful landing on a ground pad, and False RTLS means an unsuccessful landing on a ground pad. True ASDS means a successful landing on a drone ship, and False ASDS means an unsuccessful landing on a drone ship. In this lab we will mainly convert those outcomes into training labels, where `1` means the booster landed successfully and `0` means it did not. Falcon 9 first stage will land successfully ![](https://cf-courses-data.s3.us.cloud-object-storage.appdomain.cloud/IBMDeveloperSkillsNetwork-DS0701EN-SkillsNetwork/api/Images/landing\_1.gif) Several examples of an unsuccessful landing are shown here: ![](https://cf-courses-data.s3.us.cloud-object-storage.appdomain.cloud/IBMDeveloperSkillsNetwork-DS0701EN-SkillsNetwork/api/Images/crash.gif) Objectives Perform exploratory data analysis and determine training labels * Exploratory Data Analysis * Determine Training Labels *** Import Libraries and Define Auxiliary Functions We will import the following libraries.
###Code
# Pandas is a software library written for the Python programming language for data manipulation and analysis.
import pandas as pd
#NumPy is a library for the Python programming language, adding support for large, multi-dimensional arrays and matrices, along with a large collection of high-level mathematical functions to operate on these arrays
import numpy as np
###Output
_____no_output_____
###Markdown
Data Analysis Load the Space X dataset from the last section.
###Code
df=pd.read_csv("https://cf-courses-data.s3.us.cloud-object-storage.appdomain.cloud/IBM-DS0321EN-SkillsNetwork/datasets/dataset_part_1.csv")
df.head(10)
###Output
_____no_output_____
###Markdown
Identify and calculate the percentage of the missing values in each attribute
###Code
df.isnull().sum()/len(df)*100  # missing values per column as a percentage of all rows
###Output
_____no_output_____
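###Markdown
A quick sanity check on the denominator (a toy example, not part of the original lab data): dividing by len(df) gives missing values as a share of all rows, while dividing by df.count() divides by the non-null rows only and so overstates the percentage.
###Code
toy = pd.DataFrame({"a": [1, None, None, 4]})  # hypothetical 4-row frame with 2 missing values
print(toy.isnull().sum()/len(toy)*100)     # 50.0  -> missing as % of all rows
print(toy.isnull().sum()/toy.count()*100)  # 100.0 -> missing per non-null row
###Output
_____no_output_____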
###Markdown
Identify which columns are numerical and categorical:
###Code
df.dtypes
###Output
_____no_output_____
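###Markdown
One way to split the columns programmatically, as a sketch using pandas' select_dtypes:
###Code
numerical_cols = df.select_dtypes(include='number').columns    # float/int columns
categorical_cols = df.select_dtypes(include='object').columns  # string-like columns
print(len(numerical_cols), "numerical,", len(categorical_cols), "categorical")
###Output
_____no_output_____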
###Markdown
TASK 1: Calculate the number of launches on each site The data contains several Space X launch facilities: Cape Canaveral Space Launch Complex 40 (CCAFS SLC 40), Vandenberg Air Force Base Space Launch Complex 4E (VAFB SLC 4E), and Kennedy Space Center Launch Complex 39A (KSC LC 39A). The location of each launch is given in the column LaunchSite. Next, let's see the number of launches for each site. Use the method value_counts() on the column LaunchSite to determine the number of launches on each site:
###Code
# Apply value_counts() on column LaunchSite
df["LaunchSite"].value_counts()
###Output
_____no_output_____
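###Markdown
An equivalent count can be obtained with groupby, as a cross-check; a minimal sketch (size() counts the rows in each group, so the totals match value_counts(), just sorted by site name instead of by count):
###Code
df.groupby("LaunchSite").size()  # launches per site, sorted by site name
###Output
_____no_output_____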
###Markdown
Each launch aims at a dedicated orbit, and here are some common orbit types: * LEO: Low Earth orbit (LEO) is an Earth-centred orbit with an altitude of 2,000 km (1,200 mi) or less (approximately one-third of the radius of Earth),\[1] or with at least 11.25 periods per day (an orbital period of 128 minutes or less) and an eccentricity less than 0.25.\[2] Most of the manmade objects in outer space are in LEO \[1].* VLEO: Very Low Earth Orbits (VLEO) can be defined as the orbits with a mean altitude below 450 km. Operating in these orbits can provide a number of benefits to Earth observation spacecraft as the spacecraft operates closer to the observation target \[2].* GTO: A geosynchronous orbit is a high Earth orbit that allows satellites to match Earth's rotation. Located at 22,236 miles (35,786 kilometers) above Earth's equator, this position is a valuable spot for monitoring weather, communications and surveillance. "Because the satellite orbits at the same speed that the Earth is turning, the satellite seems to stay in place over a single longitude, though it may drift north to south," NASA wrote on its Earth Observatory website \[3].* SSO (or SO): A Sun-synchronous orbit, also called a heliosynchronous orbit, is a nearly polar orbit around a planet, in which the satellite passes over any given point of the planet's surface at the same local mean solar time \[4].* ES-L1: At the Lagrange points the gravitational forces of the two large bodies cancel out in such a way that a small object placed in orbit there is in equilibrium relative to the center of mass of the large bodies. L1 is one such point between the sun and the earth \[5].* HEO: A highly elliptical orbit is an elliptic orbit with high eccentricity, usually referring to one around Earth \[6].* ISS: A modular space station (habitable artificial satellite) in low Earth orbit. It is a multinational collaborative project between five participating space agencies: NASA (United States), Roscosmos (Russia), JAXA (Japan), ESA (Europe), and CSA (Canada) \[7].* MEO: Geocentric orbits ranging in altitude from 2,000 km (1,200 mi) to just below geosynchronous orbit at 35,786 kilometers (22,236 mi). Also known as an intermediate circular orbit. These are most commonly at 20,200 kilometers (12,600 mi), or 20,650 kilometers (12,830 mi), with an orbital period of 12 hours \[8].* HEO (high Earth orbit): Geocentric orbits above the altitude of geosynchronous orbit (35,786 km or 22,236 mi) \[9].* GEO: A circular geosynchronous orbit 35,786 kilometres (22,236 miles) above Earth's equator, following the direction of Earth's rotation \[10].* PO: A polar orbit is one in which a satellite passes above or nearly above both poles of the body being orbited (usually a planet such as the Earth) \[11]. Some are shown in the following plot: ![](https://cf-courses-data.s3.us.cloud-object-storage.appdomain.cloud/IBMDeveloperSkillsNetwork-DS0701EN-SkillsNetwork/api/Images/Orbits.png) TASK 2: Calculate the number and occurrence of each orbit Use the method .value_counts() to determine the number and occurrence of each orbit in the column Orbit
###Code
# Apply value_counts on Orbit column
df["Orbit"].value_counts()
###Output
_____no_output_____
###Markdown
TASK 3: Calculate the number and occurrence of each mission outcome. Use the method value_counts() on the column Outcome to determine the number and occurrence of each landing outcome, then assign it to the variable landing_outcomes:
###Code
# landing_outcomes = values on Outcome column
landing_outcomes = df["Outcome"].value_counts()
landing_outcomes
###Output
_____no_output_____
###Markdown
True Ocean means the mission outcome was a successful landing in a specific region of the ocean, while False Ocean means an unsuccessful landing in a specific region of the ocean. True RTLS means a successful landing on a ground pad; False RTLS means an unsuccessful landing on a ground pad. True ASDS means a successful landing on a drone ship; False ASDS means an unsuccessful landing on a drone ship. None ASDS and None None represent a failure to land.
###Code
for i,outcome in enumerate(landing_outcomes.keys()):
print(i,outcome)
###Output
0 True ASDS
1 None None
2 True RTLS
3 False ASDS
4 True Ocean
5 False Ocean
6 None ASDS
7 False RTLS
###Markdown
We create a set of outcomes where the first stage did not land successfully:
###Code
bad_outcomes=set(landing_outcomes.keys()[[1,3,5,6,7]])
bad_outcomes
###Output
_____no_output_____
###Markdown
TASK 4: Create a landing outcome label from the Outcome column. Using the column Outcome, create a list where the element is zero if the corresponding row in Outcome is in the set bad_outcomes; otherwise, it's one. Then assign it to the variable landing_class:
###Code
# landing_class = 0 if bad_outcome
# landing_class = 1 otherwise
def onehot(item):
if item in bad_outcomes:
return 0
else:
return 1
landing_class = df["Outcome"].apply(onehot)
landing_class
###Output
_____no_output_____
###Markdown
This variable will represent the classification label for the outcome of each launch. If the value is zero, the first stage did not land successfully; if it is one, the first stage landed successfully.
###Code
df['Class']=landing_class
df[['Class']].head(8)
df.head(5)
###Output
_____no_output_____
###Markdown
We can use the following line of code to determine the success rate:
###Code
df["Class"].mean()
###Output
_____no_output_____
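###Markdown
As an optional follow-up (a sketch using only `df` and the `Class` column created above), the success rate can also be broken down by orbit type:
###Code
# Mean of the binary Class label per orbit = per-orbit success rate
df.groupby("Orbit")["Class"].mean()
###Output
_____no_output_____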
|
notebooks/Computer Vision/blob-detection-using-opencv.ipynb | ###Markdown
Blob Detection Using OpenCV
###Code
# Standard imports
import cv2
import numpy as np
# images
ROOT = "/home/jeff/Jupyter-Notebooks/DataSets/Images/"
IMAGE = "vending_machine.png"
# Read image
im = cv2.imread(ROOT + IMAGE, cv2.IMREAD_GRAYSCALE)
# Setup SimpleBlobDetector parameters.
params = cv2.SimpleBlobDetector_Params()
# Change thresholds
params.minThreshold = 10
params.maxThreshold = 200
# Filter by Area
params.filterByArea = True
params.minArea = 1500
# Filter by Circularity
params.filterByCircularity = True
params.minCircularity = 0.1
# Filter by Convexity
params.filterByConvexity = True
params.minConvexity = 0.87
# Filter by Inertia
params.filterByInertia = True
params.minInertiaRatio = 0.01
# Create a detector with the parameters
ver = (cv2.__version__).split('.')
if int(ver[0]) < 3 :
detector = cv2.SimpleBlobDetector(params)
else :
detector = cv2.SimpleBlobDetector_create(params)
# Detect blobs
keypoints = detector.detect(im)
# Draw detected blobs as red circles.
# cv2.DRAW_MATCHES_FLAGS_DRAW_RICH_KEYPOINTS ensures the size of the circle corresponds to the size of blob
im_with_keypoints = cv2.drawKeypoints(im, keypoints, np.array([]), (0,0,255), cv2.DRAW_MATCHES_FLAGS_DRAW_RICH_KEYPOINTS)
# Show keypoints
cv2.imshow("Keypoints", im_with_keypoints)
# wait for key entry of ESC or 'q' to exit
while True:
k = cv2.waitKey(20) & 0xFF
if k == 27 or k == ord('q'):
break
# clean up
cv2.destroyAllWindows()
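# Note: cv2.imshow requires a GUI; in a headless environment you could instead
# write the annotated image to disk (the output filename here is an assumption):
# cv2.imwrite("keypoints.png", im_with_keypoints)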
###Output
_____no_output_____ |
nb/01_data-acquisition.ipynb | ###Markdown
Data acquisition. Download data from https://coinmarketcap.com/ and store it in a CSV file.
###Code
# import needed modules
# standard modules
import os
import sys
import asyncio
import datetime
import re
import json
import codecs
import io
import concurrent.futures
import csv
from pprint import pprint
# third-party modules (from PyPI)
import requests
import lxml.html
###Output
_____no_output_____
###Markdown
Constants
###Code
# main url of coinmarketcap
COINMARKETCAP_URL = "https://coinmarketcap.com"
# url to download the currencies (coins/tokens)
CURRENCY_URL = COINMARKETCAP_URL + "/{}/views/all"
# url to get historical data per coin
SLUG_URL = COINMARKETCAP_URL + "/currencies/{}/historical-data/?start={}&end={}"
# directory of this project's root; jupyter must be started accordingly
ROOT_DIR = os.path.abspath(os.path.join(os.getcwd(), ".."))
# directory for the cache
CACHE_DIR = os.path.join(ROOT_DIR, "cache")
# resulting csv file holding **all** data
DATA_CSV = os.path.join(ROOT_DIR, "coinmarketcap.csv")
###Output
_____no_output_____
###Markdown
Functions from third-party modules Parse the coin/token list returned as HTML codeSource: https://github.com/prouast/coinmarketcap-scraper
###Code
def parseCoinTokenList(html, type):
"""Parse the information returned by requestList for view 'all'."""
data = []
docRoot = lxml.html.fromstring(html)
rows = docRoot.cssselect(
"table#{0}-all > tbody > tr".format(type))
for row in rows:
datum = {}
fields = row.cssselect("td")
# Name and slug
nameField = fields[1].cssselect("a")[0]
datum['name'] = nameField.text_content().strip()
datum['slug'] = nameField.attrib['href'].replace(
'/currencies/', '').replace('/', '').strip()
# Symbol
datum['symbol'] = fields[2].text_content().strip()
# Explorer link
supplyFieldPossible = fields[5].cssselect("a")
if len(supplyFieldPossible) > 0:
datum['explorer_link'] = supplyFieldPossible[0].attrib['href']
else:
datum['explorer_link'] = ''
data.append(datum)
return data
###Output
_____no_output_____
###Markdown
Parse the historical data. Source: https://github.com/jhogan4288/coinmarketcap-history
###Code
def parseHistoricalData(html):
"""
Extract the price history from the HTML.
The CoinMarketCap historical data page has just one HTML table.
This table contains the data we want.
It's got one header row with the column names.
We need to derive the "average" price for the provided data.
"""
head = re.search(r'<thead>(.*)</thead>', html, re.DOTALL).group(1)
header = re.findall(r'<th .*>([\w ]+)</th>', head)
body = re.search(r'<tbody>(.*)</tbody>', html, re.DOTALL).group(1)
raw_rows = re.findall(r'<tr[^>]*>' +
r'\s*<td[^>]*>([^<]+)</td>'*7 +
r'\s*</tr>', body)
# strip commas
rows = []
for row in raw_rows:
row = [ re.sub(",", "", field) for field in row ]
row = [ re.sub("-", "0", field) for field in row ]
# convert date
row[0]= datetime.datetime.strptime(row[0], "%b %d %Y").strftime("%Y%m%d")
rows.append(row)
return header, rows
###Output
_____no_output_____
###Markdown
Helper functions
###Code
# convert between datetime object and string representation "YYYYMMDD"
string2datetime = lambda s: datetime.datetime.strptime(s, "%Y%m%d")
datetime2string = lambda dt: dt.strftime("%Y%m%d")
# create directory if it does not exist
def mkdir(path):
if not os.path.exists(path):
os.makedirs(path)
###Output
_____no_output_____
###Markdown
Next, a cache is introduced. Data downloaded from *coinmarketcap.com* are stored in this cache. With the cache in place, the full history does not have to be downloaded on every run.
###Code
# load cached data
def loadCache(path):
path = os.path.abspath(path)
try:
with codecs.open(path, "r", encoding="UTF8") as fp:
return fp.read()
except OSError:
pass
return ""
# save cached data
def saveCache(path, content):
path = os.path.abspath(path)
mkdir(os.path.dirname(path))
with codecs.open(path, "w", encoding="UTF8") as fp:
fp.write(content)
###Output
_____no_output_____
###Markdown
Provide a `main` coroutine for asyncio. It downloads the *urls* in parallel and stores the *responses* for further processing.
###Code
async def main(urls, responses):
with concurrent.futures.ThreadPoolExecutor(max_workers=20) as executor:
loop = asyncio.get_event_loop()
futures = [
loop.run_in_executor(
None,
requests.get,
url,
)
for url in urls
]
for response in await asyncio.gather(*futures):
responses.append(response)
###Output
_____no_output_____
###Markdown
Download function for coins/tokens
###Code
# use the cache, encode currency data with json
def decodeJson(rawData):
try:
return json.loads(rawData)
except json.decoder.JSONDecodeError:
pass
return []
def encodeJson(pythonDict):
return json.dumps(pythonDict, indent=4)
# download coins and tokens, using the cache when possible
def getCoinsAndTokens(forceUpdate=False):
# forceUpdate: do not use the cache
# cache path for coins
cacheCoins = os.path.join(CACHE_DIR, "coins.json")
# cache path for tokens
cacheTokens = os.path.join(CACHE_DIR, "tokens.json")
coins, tokens = [], []
if not forceUpdate:
# load coins and tokens from the cache
coins = decodeJson(loadCache(cacheCoins))
tokens = decodeJson(loadCache(cacheTokens))
# early return, coins/tokens loaded from the cache
if coins and tokens:
print("Cached: Coins: {}, Tokens: {}".format(len(coins), len(tokens)))
return coins, tokens
# load coins/tokens from the web
    # initialize asyncio
loop = asyncio.get_event_loop()
# get urls to be downloaded
urls = [CURRENCY_URL.format(type) for type in ["coins", "tokens"]]
responses = []
# download urls in parallel
loop.run_until_complete(main(urls, responses))
# parse the responses
coins = parseCoinTokenList(responses[0].content, "currencies")
tokens = parseCoinTokenList(responses[1].content, "assets")
# update cache
saveCache(cacheCoins, encodeJson(coins))
saveCache(cacheTokens, encodeJson(tokens))
print("Coins: {}, Tokens: {}".format(len(coins), len(tokens)))
return coins, tokens
###Output
_____no_output_____
###Markdown
Download function for historical data
###Code
# construct/generate the currency url based on the slug
# start/end may be provided, otherwise, the whole history is downloaded
def genCurrencySlugUrl(slug, start=None, end=None):
start = start or string2datetime("20100101")
end = end or datetime.datetime.utcnow() + datetime.timedelta(days=1)
return SLUG_URL.format(slug, datetime2string(start), datetime2string(end))
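# Example (with "bitcoin" as slug):
# genCurrencySlugUrl("bitcoin") ->
#   "https://coinmarketcap.com/currencies/bitcoin/historical-data/?start=20100101&end=<tomorrow>"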
# get the cache path for a given slug
def getSlugCache(slug):
return os.path.join(CACHE_DIR, "{}.csv".format(slug))
# encode historical data with csv
def encodeCsv(data):
fp = io.StringIO()
writer = csv.writer(fp)
writer.writerows(data)
return fp.getvalue()
def decodeCsv(raw):
reader = csv.reader(raw.splitlines())
return list(reader)
# only keep the date part of the datetime object
striptime = lambda dt: datetime.datetime.combine(dt.date(), datetime.time())
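# e.g. striptime(datetime.datetime(2021, 5, 1, 13, 45)) -> datetime.datetime(2021, 5, 1, 0, 0)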
# parse response for a slug and save the data to the cache
def parseResponseSaveCache(slug, response):
# parse historical data
_, rawData = parseHistoricalData(response.content.decode("UTF8"))
# get the cache file
path = getSlugCache(slug)
# load the cache
rows = decodeCsv(loadCache(path))
    # append new data
rows.extend(rawData)
# sort by date
rows = sorted(rows, key=lambda r: int(r[0]))
# update the cache
saveCache(path, encodeCsv(rows))
# download **all** historical data of **all** slugs
# use a cache to make it faster on successive runs
# the function returns the number of updated histories
def getHistories(slugs):
# build requests
requests = []
# keep track which request belongs to which slug
slugRequestMap = {}
    # current UTC time; historical data are updated at UTC 00:00:00
utcnow = striptime(datetime.datetime.utcnow())
# for all slugs, prepare the url
for slug in slugs:
path = getSlugCache(slug)
dtCache = None
if os.path.exists(path):
# get the timestamp of the cached file of the slug
st = os.stat(path)
dtCache = datetime.datetime.utcfromtimestamp(st.st_mtime)
dtCache = striptime(dtCache)
# load the cached file
rows = decodeCsv(loadCache(path))
# find the date of the next entry
start = None
if rows:
# get latest date
start = string2datetime(rows[-1][0])
# add one day
start += datetime.timedelta(days=1)
if start:
# if start lies in the future, skip
if start >= utcnow:
continue
        # if the cache is already current, skip
if dtCache and dtCache >= utcnow:
continue
# build the url for the slug
url = genCurrencySlugUrl(slug, start)
# append to requests
requests.append(url)
# add to inverse mapping
slugRequestMap[url] = slug
# nothing to download, return
if not slugRequestMap:
return 0
# prepare asyncio
loop = asyncio.get_event_loop()
responses = []
    while requests:
        print("\rRequests to process: {}{}".format(len(requests), " "*20),
              flush=True, end="")
        # download all requests
        loop.run_until_complete(main(requests, responses))
        # check responses; failed downloads are skipped
        requests = []
        for r in responses:
            # note: do not mutate `responses` while iterating over it,
            # or every other response would be skipped
            if r.ok:
                parseResponseSaveCache(slugRequestMap[r.url], r)
            else:
                # print("Failed: {}".format(r.url))
                pass
        responses = []
print("") # add newline feed
###Output
_____no_output_____
###Markdown
Function to build the final CSV file holding all currency data. This function reads all cached coin/token data and merges it into a single *csv* file.
###Code
# merge all cached csv into a single csv
def buildAllCurrenciesCsv(allCurrencies):
# count rows
rowCnt = 0
with codecs.open(DATA_CSV, "w", encoding="UTF8") as fp:
writer = csv.writer(fp)
writer.writerow([
"date",
"slug",
"name",
"open",
"high",
"low",
"close",
"volume",
"marketcap"])
# for each currency append to the data file
# and insert *slug* and *name* as column
for currency in allCurrencies:
slug = currency["slug"]
name = currency["name"]
path = getSlugCache(slug)
rows = decodeCsv(loadCache(path))
print("\r{}/{}{}".format(slug, len(rows), " "*20), end="", flush=True)
for row in rows:
writer.writerow([row[0]] + [slug, name] + row[1:])
rowCnt += 1
print("\rCurrencies: {}, rows: {}".format(len(allCurrencies), rowCnt))
print("CACHE: {}".format(CACHE_DIR))
print("DATA: {}".format(DATA_CSV))
# download coins and tokens
coins, tokens = getCoinsAndTokens(forceUpdate=True)
allCurrencies = coins + tokens
# get the slug name from the dicts
slugs = [x["slug"] for x in allCurrencies]
# download historical data
getHistories(slugs)
# always build CSV
buildAllCurrenciesCsv(allCurrencies)
###Output
CACHE: /home/dahuebi/PML/cas-pml-prj/cache
DATA: /home/dahuebi/PML/cas-pml-prj/coinmarketcap.csv
Coins: 917, Tokens: 677
Request to process: 1594
Currencies: 1594, rows: 750954
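###Markdown
As a quick check of the result (an optional sketch; assumes the cells above ran successfully), the merged CSV can be loaded back with pandas:
###Code
import pandas as pd
df_all = pd.read_csv(DATA_CSV)
print(df_all.shape)
df_all.head()
###Output
_____no_output_____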
|
labs/07_analisis_supervisado_regresion/laboratorio_07.ipynb | ###Markdown
MAT281 - Laboratorios N°01. Lab objectives: * Reinforce basic concepts of linear regression. Contents: * [Problem 01](p1) I.- Problem 01. The **Anscombe quartet** comprises four data sets that have the same statistical properties yet are clearly different when their respective plots are inspected. Each set consists of eleven (x, y) points and was constructed by the statistician F. J. Anscombe. The quartet is a demonstration of the importance of looking at a data set graphically before analyzing it.
###Code
import os
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
sns.set_palette("deep", desat=.6)
sns.set(rc={'figure.figsize':(11.7,8.27)})
# load the data
df = pd.read_csv(os.path.join("data","anscombe.csv"), sep=",")
df.head()
###Output
_____no_output_____
###Markdown
Based on the information presented, answer the following questions: 1. Plot each group using a **scatter** plot. At first glance, are the groups very different from each other? 2. Summarize the most significant statistical measures using the **describe** command for each group. Interpret. 3. Perform a linear fit for each group. Also, plot the linear regression results for each group. Interpret. 4. Compute the metric results for each group. Interpret. 5. Clearly the linear fit is not right for some groups. There are several ways to solve this problem (removing outliers, other models, etc.). Identify a strategy so that the linear regression model fits better, and implement other models where you find it necessary. 1. Plot each group using a **scatter** plot. At first glance, are the groups very different from each other?
###Code
# figure size
fig = plt.figure(figsize=(12, 8))  # plotting window
plt.subplot(2,2,1)
sns.scatterplot(x='x', y='y', data=df[df['grupo'] == 'Grupo_1'])
plt.xlabel('$x_1$')
plt.ylabel('$y_1$')
plt.subplot(2,2,2)
sns.scatterplot(x='x', y='y', data=df[df['grupo'] == 'Grupo_2'])
plt.xlabel('$x_2$')
plt.ylabel('$y_2$')
plt.subplot(2,2,3)
sns.scatterplot(x='x', y='y', data=df[df['grupo'] == 'Grupo_3'])
plt.xlabel('$x_3$')
plt.ylabel('$y_3$')
plt.subplot(2,2,4)
sns.scatterplot(x='x', y='y', data=df[df['grupo'] == 'Grupo_4'])
plt.xlabel('$x_4$')
plt.ylabel('$y_4$')
plt.show()
###Output
_____no_output_____
###Markdown
The plots show that the data distributions of the four groups are noticeably different. 2. Summarize the most significant statistical measures using the **describe** command for each group. Interpret.
###Code
df.groupby(['grupo']).describe()
###Output
_____no_output_____
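###Markdown
A quick optional check (a sketch; it uses only pandas aggregation on the dataframe already loaded) to put the key statistics of the four groups side by side:
###Code
# Compare mean and standard deviation of x and y across the groups
df.groupby('grupo').agg({'x': ['mean', 'std'], 'y': ['mean', 'std']})
###Output
_____no_output_____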
###Markdown
Note that although the plots were different, the statistics of the 4 groups are remarkably similar, which will make the linear fits very similar for each group. Even so, the minimum and maximum values and the way the data are distributed are clearly different. 3. Perform a linear fit for each group. Also, plot the linear regression results for each group. Interpret.
###Code
# import the linear regression model
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import train_test_split
# Create the 4 models, one per group:
# Model 1:
model_rl1 = LinearRegression()  # create the model
x1 = df[df['grupo'] == 'Grupo_1'][['x']]
y1 = df[df['grupo'] == 'Grupo_1']['y']
X1_train, X1_test, y1_train, y1_test = train_test_split(x1, y1, test_size=0.33, random_state=42)
model_rl1.fit(X1_train,y1_train)
# Model 2:
model_rl2 = LinearRegression()  # create the model
x2 = df[df['grupo'] == 'Grupo_2'][['x']]
y2 = df[df['grupo'] == 'Grupo_2']['y']
X2_train, X2_test, y2_train, y2_test = train_test_split(x2, y2, test_size=0.33, random_state=42)
model_rl2.fit(X2_train,y2_train)
# Model 3:
model_rl3 = LinearRegression()  # create the model
x3 = df[df['grupo'] == 'Grupo_3'][['x']]
y3 = df[df['grupo'] == 'Grupo_3']['y']
X3_train, X3_test, y3_train, y3_test = train_test_split(x3, y3, test_size=0.33, random_state=42)
model_rl3.fit(X3_train,y3_train)
# Model 4:
model_rl4 = LinearRegression()  # create the model
x4 = df[df['grupo'] == 'Grupo_4'][['x']]
y4 = df[df['grupo'] == 'Grupo_4']['y']
X4_train, X4_test, y4_train, y4_test = train_test_split(x4, y4, test_size=0.33, random_state=42)
model_rl4.fit(X4_train,y4_train)
# Beta coefficients of each fitted model:
beta_1_0 = round(model_rl1.intercept_,4)
beta_1_1 = round(model_rl1.coef_[0],4)
beta_2_0 = round(model_rl2.intercept_,4)
beta_2_1 = round(model_rl2.coef_[0],4)
beta_3_0 = round(model_rl3.intercept_,4)
beta_3_1 = round(model_rl3.coef_[0],4)
beta_4_0 = round(model_rl4.intercept_,4)
beta_4_1 = round(model_rl4.coef_[0],4)
# Define arrays to plot each fit:
x1_range = np.arange(2,21,1)
y1_range=[beta_1_0 + beta_1_1*n for n in x1_range]
y2_range=[beta_2_0 + beta_2_1*n for n in x1_range]
y3_range=[beta_3_0 + beta_3_1*n for n in x1_range]
y4_range=[beta_4_0 + beta_4_1*n for n in x1_range]
# Define DataFrames to plot each fit:
df_plot1 = pd.DataFrame({'x':x1_range,
'y':y1_range})
df_plot2 = pd.DataFrame({'x':x1_range,
'y':y2_range})
df_plot3 = pd.DataFrame({'x':x1_range,
'y':y3_range})
df_plot4 = pd.DataFrame({'x':x1_range,
'y':y4_range})
# Plot the four fits:
fig = plt.figure(figsize=(12, 8))  # plotting window
# Plot 1:
plt.subplot(2,2,1)
sns.scatterplot(x='x', y='y', data=df[df['grupo'] == 'Grupo_1'])
sns.lineplot(x='x', y='y', data=df_plot1,color="red")
plt.xlabel('$x_1$')
plt.xticks([2*x for x in range(1,10)])
plt.ylabel('$y_1$')
# Plot 2:
plt.subplot(2,2,2)
sns.scatterplot(x='x', y='y', data=df[df['grupo'] == 'Grupo_2'])
sns.lineplot(x='x', y='y', data=df_plot2,color="red")
plt.xlabel('$x_2$')
plt.xticks([2*x for x in range(1,10)])
plt.ylabel('$y_2$')
# Plot 3:
plt.subplot(2,2,3)
sns.scatterplot(x='x', y='y', data=df[df['grupo'] == 'Grupo_3'])
sns.lineplot(x='x', y='y', data=df_plot3,color="red")
plt.xlabel('$x_3$')
plt.xticks([2*x for x in range(1,10)])
plt.ylabel('$y_3$')
# Plot 4:
plt.subplot(2,2,4)
sns.scatterplot(x='x', y='y', data=df[df['grupo'] == 'Grupo_4'])
sns.lineplot(x='x', y='y', data=df_plot4,color="red")
plt.xlabel('$x_4$')
plt.xticks([2*x for x in range(1,10)])
plt.ylabel('$y_4$')
plt.show()
###Output
_____no_output_____
###Markdown
Although the data distribution of each group is clearly different, the linear fits of the 4 groups turned out practically identical. 4. Compute the metric results for each group. Interpret.
###Code
from metrics_regression import *
from sklearn.metrics import r2_score
# Metrics for group 1:
df_temp = pd.DataFrame({
    'y':y1_test,
    'yhat': model_rl1.predict(X1_test)
})
df_metrics = summary_metrics(df_temp)  # create the metrics dataframe, starting with group 1
df_metrics['r2'] = round(r2_score(y1_test, model_rl1.predict(X1_test)),4)
# Metrics for group 2:
df_temp = pd.DataFrame({
    'y':y2_test,
    'yhat': model_rl2.predict(X2_test)
})
df_metrics_temp = summary_metrics(df_temp)
df_metrics_temp['r2'] = round(r2_score(y2_test, model_rl2.predict(X2_test)),4)
df_metrics = pd.concat([df_metrics, df_metrics_temp])  # append group 2 metrics to the existing dataframe
# Metrics for group 3:
df_temp = pd.DataFrame({
    'y':y3_test,
    'yhat': model_rl3.predict(X3_test)
})
df_metrics_temp = summary_metrics(df_temp)
df_metrics_temp['r2'] = round(r2_score(y3_test, model_rl3.predict(X3_test)),4)
df_metrics = pd.concat([df_metrics, df_metrics_temp])  # append group 3 metrics
# Metrics for group 4:
df_temp = pd.DataFrame({
    'y':y4_test,
    'yhat': model_rl4.predict(X4_test)
})
df_metrics_temp = summary_metrics(df_temp)
df_metrics_temp['r2'] = round(r2_score(y4_test, model_rl4.predict(X4_test)),4)
df_metrics = pd.concat([df_metrics, df_metrics_temp])  # append group 4 metrics
grupos = pd.Series(['Grupo_1','Grupo_2','Grupo_3', 'Grupo_4'])  # index labels, one per group
df_metrics = df_metrics.set_index(grupos)  # assign the result: set_index is not in place
df_metrics
###Output
_____no_output_____
###Markdown
Group 1: The errors look normal and not far from 0. The $r^2$ factor in this case is not very close to 0. So, in my opinion, the fit for group 1 is adequate. Group 2: The absolute errors are larger than those of group 1, although the percentage errors are not very different. What stands out is that the $r^2$ factor is much lower than that of group 1 and is close to 0, which can be interpreted as the linear fit not being appropriate for this group's data. Group 3: As in group 2, the absolute errors are larger than in group 1 and the percentage errors are close to 0. In this case the $r^2$ factor is even lower than in group 2; the fit is even worse. Group 4: For this group the absolute errors are not too bad, and neither are the percentage errors, but the $r^2$ factor came out negative, which indicates that the linear regression is completely wrong; the fit does not represent the distribution of the data at all. 5. Clearly the linear fit is not right for some groups. There are several ways to solve this problem (removing outliers, other models, etc.). Identify a strategy so that the linear regression model fits better, and implement other models where necessary. Group 1: For this data set we can see visually that the fit appears to be good. Analyzing the metrics for this group also shows that the $r^2$ factor, for example, has a value of approx. 0.7, and the percentage errors are also not far from 0. So, in my opinion, the fit appears to be adequate for this group. Group 2: Looking at this group's plot, it is clear that the data distribution is not linear. Therefore, I propose a polynomial regression to better approximate the data:
###Code
from sklearn.preprocessing import PolynomialFeatures
from sklearn.pipeline import make_pipeline
degree = 2  # degree of the regression polynomial
polyreg = make_pipeline(PolynomialFeatures(degree), LinearRegression())  # create the model
polyreg.fit(X2_train, y2_train)  # reuse the training data created in question 3
# Definitions to plot the fit
X_seq = np.linspace(2,18,300).reshape(-1,1)
# Plot:
plt.figure(figsize=(8,5))
sns.scatterplot(x='x', y='y', data=df[df['grupo'] == 'Grupo_2'])
plt.plot(X_seq,polyreg.predict(X_seq),color="red")
plt.title("Regresión polinómica de grado "+str(degree))
plt.show()
# Show the new metrics for the quadratic fit
df_temp = pd.DataFrame({
'y':y2_test,
'yhat': polyreg.predict(X2_test)
})
df_metrica2 = summary_metrics(df_temp)
df_metrica2['r2'] = round(r2_score(y2_test, polyreg.predict(X2_test)),4)
df_metrica2.set_index(pd.Series({'Grupo_2':'Grupo_2'}))
###Output
_____no_output_____
###Markdown
With the new fit the metrics came out perfect; it turns out the data distribution corresponded to a quadratic function. Group 3: Looking at the behaviour of this group's linear fit, there is an outlier that pulls the slope of the linear fit away from the distribution of the Grupo_3 data. Given this, I propose removing the anomalous point and fitting with the remaining well-distributed data:
###Code
# Remove the anomalous point:
df_nuevo = df[df['grupo'] == 'Grupo_3'].drop(24)
# Fit a new model to the data without the outlier:
model_rl3 = LinearRegression()
x3 = df_nuevo[['x']]
y3 = df_nuevo['y']
X3_train, X3_test, y3_train, y3_test = train_test_split(x3, y3, test_size=0.33, random_state=42)
model_rl3.fit(X3_train,y3_train)
# Coefficients of the fit
beta_3_0 = round(model_rl3.intercept_,4)
beta_3_1 = round(model_rl3.coef_[0],4)
# Definitions to plot the fit:
x_range = np.arange(2,21,1)
y3_range=[beta_3_0 + beta_3_1*n for n in x_range]
df_plot3 = pd.DataFrame({'x':x_range,
'y':y3_range})
# Plot the fit together with the df_nuevo data:
fig = plt.figure(figsize=(8, 5))  # plotting window
sns.scatterplot(x='x', y='y', data=df_nuevo)  # original data without the outlier
sns.lineplot(x='x', y='y', data=df_plot3, color="red")  # linear fit
plt.xlabel('$x_3$')
plt.xticks([2*x for x in range(1,10)])
plt.ylabel('$y_3$')
plt.show()
# Show the new metrics for the updated data without the outlier
df_temp = pd.DataFrame({
'y':y3_test,
'yhat': model_rl3.predict(X3_test)
})
df_metrica3= summary_metrics(df_temp)
df_metrica3['r2'] = round(r2_score(y3_test, model_rl3.predict(X3_test)),4)
df_metrica3.set_index(pd.Series({'Grupo_3':'Grupo_3'}))
###Output
_____no_output_____
###Markdown
After removing the outlier, the fit is visually very good and also has errors very close to 0 and a factor $r^2=1$. Group 4: For this data set the plot shows a very particular distribution: there is an outlier and all the other points are concentrated on a vertical line at $x=8$. So, I propose removing the outlier and fitting with the axes swapped (to obtain a fit with slope 0 instead of $\infty$):
###Code
# Define the dataframe without the outlier
df_nuevo4 = df[df['grupo'] == 'Grupo_4'].drop(40)
# Define the new model
model_rl4_nuevo = LinearRegression()  # create the model
x4 = df_nuevo4['x']
y4 = df_nuevo4[['y']]
X4_train, X4_test, y4_train, y4_test = train_test_split(y4, x4, test_size=0.33, random_state=42)
model_rl4_nuevo.fit(X4_train,y4_train)
# Coefficients of the new fit:
beta_4_0_nuevo = round(model_rl4_nuevo.intercept_,4)
beta_4_1_nuevo = round(model_rl4_nuevo.coef_[0],4)
# Definitions to plot the fit
x_range = np.arange(4,10,1)
y4_range=[beta_4_0_nuevo + beta_4_1_nuevo*n for n in x_range]
df_plot4_nuevo = pd.DataFrame({'x':x_range,
'y':y4_range})
# Plot:
plt.figure(figsize=(8,5))
sns.scatterplot(x='x', y='y', data=df_nuevo4)
plt.plot(y4_range, x_range,'r')
plt.xlabel('$x_4$')
plt.xticks([2*x for x in range(1,10)])
plt.ylabel('$y_4$')
plt.show()
# Show the new metrics for the updated data without the outlier
df_temp = pd.DataFrame(
{
'y':y4_test,
'yhat': model_rl4_nuevo.predict(X4_test)
}
)
df_metrica4 = summary_metrics(df_temp)
df_metrica4['r2'] = round(r2_score(y4_test, model_rl4_nuevo.predict(X4_test)),4)
df_metrica4 = df_metrica4.set_index(pd.Series({'Grupo_4':'Grupo_4'}))  # assign the result: set_index is not in place
df_metrica4
###Output
_____no_output_____ |
LinkedIn/LinkedIn_Send_connections_from_network_to_gsheet.ipynb | ###Markdown
LinkedIn - Send connections from network to gsheet **Tags:** linkedin network connections naas_drivers csv automation content googlesheets **Author:** [Florent Ravenel](https://www.linkedin.com/in/florent-ravenel/) Input Import libraries
###Code
from naas_drivers import linkedin, gsheet
import naas
import pandas as pd
###Output
_____no_output_____
###Markdown
Setup LinkedIn 👉 How to get your cookies?
###Code
# LinkedIn cookies
LI_AT = "AQEDARCNSioDe6wmAAABfqF-HR4AAAF-xYqhHlYAtSu7EZZEpFer0UZF-GLuz2DNSz4asOOyCRxPGFjenv37irMObYYgxxxxxxx"
JSESSIONID = "ajax:12XXXXXXXXXXXXXXXXX"
###Output
_____no_output_____
###Markdown
Setup your Google Sheet 👉 Get your spreadsheet URL 👉 Share your gsheet with our service account to connect: [email protected] 👉 Create your sheet before sending data into it
###Code
# Spreadsheet URL
SPREADSHEET_URL = "https://docs.google.com/spreadsheets/d/XXXXXXXXXXXXXXXXXXXX"
# Sheet name
SHEET_NAME = "LK_CONNECTIONS"
###Output
_____no_output_____
###Markdown
Setup Naas
###Code
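# schedule this notebook to run every day at 8:00 AM (cron "0 8 * * *")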
naas.scheduler.add(cron="0 8 * * *")
#-> To delete your scheduler, please uncomment the line below and execute this cell
# naas.scheduler.delete()
###Output
_____no_output_____
###Markdown
Model Get connections from Google Sheet
###Code
df_gsheet = gsheet.connect(SPREADSHEET_URL).get(sheet_name=SHEET_NAME)
df_gsheet
###Output
_____no_output_____
###Markdown
Get new connections
###Code
def get_new_connections(df_gsheet, key="PROFILE_URN"):
    profiles = []
    if len(df_gsheet) > 0:
        profiles = df_gsheet[key].unique()
    else:
        # no profiles stored yet: fetch the whole network once
        df = linkedin.connect(LI_AT, JSESSIONID).network.get_connections(limit=-1)
        return df

    # Fetch connections page by page until an already-known profile shows up
    df_new = pd.DataFrame()
    update = True
    start = 0  # initialized outside the loop so paging actually advances
    while update:
        df = linkedin.connect(LI_AT, JSESSIONID).network.get_connections(start=start, count=100, limit=100)
        if len(df) == 0:
            # no more results to page through
            break
        new_profiles = df[key].unique()
        for i, p in enumerate(new_profiles):
            if p in profiles:
                update = False
                df = df[:i]
                break
        start += 100
        df_new = pd.concat([df_new, df])
    return df_new
df_new = get_new_connections(df_gsheet, key="PROFILE_URN")
df_new
###Output
_____no_output_____
###Markdown
Output Send to Google Sheet
###Code
gsheet.connect(SPREADSHEET_URL).send(df_new,
sheet_name=SHEET_NAME,
append=True)
###Output
_____no_output_____
|
OneHotEncoder/OneHotEncoder.ipynb | ###Markdown
Preprocessing
###Code
# Import our dependencies
from sklearn.model_selection import train_test_split
from sklearn.preprocessing import StandardScaler, OneHotEncoder
import pandas as pd
import tensorflow as tf
# Import and read the charity_data.csv.
import pandas as pd
application_df = pd.read_csv("../Resources/charity_data.csv")
application_df.head()
# Drop the non-beneficial ID columns, 'EIN' and 'NAME'.
application_df = application_df.drop(["EIN","NAME"], axis = 1)
application_df
# Determine the number of unique values in each column.
application_df.nunique()
# Look at APPLICATION_TYPE value counts for binning
value_count = application_df["APPLICATION_TYPE"].value_counts()
value_count_df = value_count.to_frame()
value_count_df
# Choose a cutoff value and create a list of application types to be replaced
# use the variable name `application_types_to_replace`
application_types_to_replace = value_count_df.loc[value_count_df['APPLICATION_TYPE'] <= 500].index.tolist()
# Replace in dataframe
for app in application_types_to_replace:
application_df['APPLICATION_TYPE'] = application_df['APPLICATION_TYPE'].replace(app,"Other")
# Check to make sure binning was successful
application_df['APPLICATION_TYPE'].value_counts()
# Look at CLASSIFICATION value counts for binning
application_df["CLASSIFICATION"].value_counts()
# You may find it helpful to look at CLASSIFICATION value counts >1
application_df["CLASSIFICATION"].value_counts()[application_df["CLASSIFICATION"].value_counts()>1]
# Choose a cutoff value and create a list of classifications to be replaced
# use the variable name `classifications_to_replace`
classifications_to_replace = application_df["CLASSIFICATION"].value_counts()[application_df["CLASSIFICATION"].value_counts()<1000].index.tolist()
# Replace in dataframe
for cls in classifications_to_replace:
application_df['CLASSIFICATION'] = application_df['CLASSIFICATION'].replace(cls,"Other")
# Check to make sure binning was successful
application_df['CLASSIFICATION'].value_counts()
application_df
# Generate our categorical variable lists
attrition_cat = application_df.dtypes[application_df.dtypes == "object"].index.tolist()
# Check the number of unique values in each column
application_df[attrition_cat].nunique()
# Create a OneHotEncoder instance
enc = OneHotEncoder(sparse=False)
# Fit and transform the OneHotEncoder using the categorical variable list
encode_df = pd.DataFrame(enc.fit_transform(application_df[attrition_cat]))
# Add the encoded variable names to the dataframe
encode_df.columns = enc.get_feature_names(attrition_cat)
encode_df.head()
# Merge one-hot encoded features and drop the originals
application_df = application_df.merge(encode_df,left_index=True, right_index=True)
application_df = application_df.drop(columns=attrition_cat)
application_df.head()
# Split our preprocessed data into our features and target arrays
y = application_df['IS_SUCCESSFUL'].values
X = application_df.drop(columns='IS_SUCCESSFUL').values
# Split the preprocessed data into a training and testing dataset
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=78, stratify=y)
# Create a StandardScaler instances
scaler = StandardScaler()
# Fit the StandardScaler
X_scaler = scaler.fit(X_train)
# Scale the data
X_train_scaled = X_scaler.transform(X_train)
X_test_scaled = X_scaler.transform(X_test)
###Output
_____no_output_____
###Markdown
Compile, Train and Evaluate the Model
###Code
# Create a method that creates a new Sequential model with hyperparameter options
def create_model(hp):
nn_model = tf.keras.models.Sequential()
# Allow kerastuner to decide which activation function to use in hidden layers
activation = hp.Choice('activation',['relu','tanh','sigmoid'])
# Allow kerastuner to decide number of neurons in first layer
nn_model.add(tf.keras.layers.Dense(units=hp.Int('first_units',
min_value=1,
max_value=120,
step=2), activation=activation, input_dim=43))
# Allow kerastuner to decide number of hidden layers and neurons in hidden layers
for i in range(hp.Int('num_layers', 1, 15)):
nn_model.add(tf.keras.layers.Dense(units=hp.Int('units_' + str(i),
min_value=1,
max_value=120,
step=2),
activation=activation))
nn_model.add(tf.keras.layers.Dense(units=1, activation="sigmoid"))
# Compile the model
nn_model.compile(loss="binary_crossentropy", optimizer='adam', metrics=["accuracy"])
return nn_model
# Import the kerastuner library
import keras_tuner as kt
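# Hyperband trains many configurations for a few epochs each and successively
# promotes the most promising ones; max_epochs caps the longest training run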
tuner = kt.Hyperband(
create_model,
objective="val_accuracy",
max_epochs=50,
hyperband_iterations=2)
# Run the kerastuner search for best hyperparameters
tuner.search(X_train_scaled,y_train,epochs=20,validation_data=(X_test_scaled,y_test))
# Get best model hyperparameters
best_hyper = tuner.get_best_hyperparameters(1)[0]
best_hyper.values
# Evaluate best model against full test data
best_model = tuner.get_best_models(1)[0]
model_loss, model_accuracy = best_model.evaluate(X_test_scaled,y_test,verbose=2)
print(f"Loss: {model_loss}, Accuracy: {model_accuracy}")
# Export our model to HDF5 file
best_model.save('AlphabetSoupCharity.h5')
###Output
_____no_output_____ |
scripts/examples/Make a calcsfh input parameter file.ipynb | ###Markdown
Make a CALCSFH input parameter file
This notebook goes through how to use `calcsfh_input_parameter` to programmatically write calcsfh input parameter files.
###Code
from match.scripts.fileio import calcsfh_input_parameter
###Output
/Users/rosenfield/anaconda/lib/python2.7/site-packages/IPython/kernel/__init__.py:13: ShimWarning: The `IPython.kernel` package has been deprecated. You should import from ipykernel or jupyter_client instead.
"You should import from ipykernel or jupyter_client instead.", ShimWarning)
###Markdown
The default dictionary for calcsfh is accessible via fileio.calcsfh_dict() and is stored in templates/calcsfh_input_parameter.json.
To use all the default values, call the function with no arguments. Note that the result won't actually work when running calcsfh: the CMD limits are all -99,99 and the filters are placeholders called filter1, filter2:
###Code
print(calcsfh_input_parameter())
###Output
1.35 10.000 10.000 0.050 0.000 0.000 0.050
-2.30 0.10 0.10
0.35 0.000001 0.000001
1
0.10 0.05 5 -99.00 99.00 filter1,filter2
-99.00 99.00 filter1
-99.00 99.00 filter2
0 0
71
6.60 6.65
6.65 6.70
6.70 6.75
6.75 6.80
6.80 6.85
6.85 6.90
6.90 6.95
6.95 7.00
7.00 7.05
7.05 7.10
7.10 7.15
7.15 7.20
7.20 7.25
7.25 7.30
7.30 7.35
7.35 7.40
7.40 7.45
7.45 7.50
7.50 7.55
7.55 7.60
7.60 7.65
7.65 7.70
7.70 7.75
7.75 7.80
7.80 7.85
7.85 7.90
7.90 7.95
7.95 8.00
8.00 8.05
8.05 8.10
8.10 8.15
8.15 8.20
8.20 8.25
8.25 8.30
8.30 8.35
8.35 8.40
8.40 8.45
8.45 8.50
8.50 8.55
8.55 8.60
8.60 8.65
8.65 8.70
8.70 8.75
8.75 8.80
8.80 8.85
8.85 8.90
8.90 8.95
8.95 9.00
9.00 9.05
9.05 9.10
9.10 9.15
9.15 9.20
9.20 9.25
9.25 9.30
9.30 9.35
9.35 9.40
9.40 9.45
9.45 9.50
9.50 9.55
9.55 9.60
9.60 9.65
9.65 9.70
9.70 9.75
9.75 9.80
9.80 9.85
9.85 9.90
9.90 9.95
9.95 10.00
10.00 10.05
10.05 10.10
10.10 10.15
###Markdown
If you will be running calcsfh with -zinc, -kroupa, or -chabrier, the input file format changes (line 2 for zinc, line 1 for IMF). Access the options as arguments.
###Code
print(calcsfh_input_parameter(zinc=True))
print(calcsfh_input_parameter(zinc=True, power_law_imf=False))
###Output
10.000 10.000 0.050 0.000 0.000 0.050
-2.30 0.10 0.10 -2.30 -1.00 -0.10 -1.30
0.35 0.000001 0.000001
1
0.10 0.05 5 -99.00 99.00 filter1,filter2
-99.00 99.00 filter1
-99.00 99.00 filter2
0 0
71
6.60 6.65
6.65 6.70
6.70 6.75
6.75 6.80
6.80 6.85
6.85 6.90
6.90 6.95
6.95 7.00
7.00 7.05
7.05 7.10
7.10 7.15
7.15 7.20
7.20 7.25
7.25 7.30
7.30 7.35
7.35 7.40
7.40 7.45
7.45 7.50
7.50 7.55
7.55 7.60
7.60 7.65
7.65 7.70
7.70 7.75
7.75 7.80
7.80 7.85
7.85 7.90
7.90 7.95
7.95 8.00
8.00 8.05
8.05 8.10
8.10 8.15
8.15 8.20
8.20 8.25
8.25 8.30
8.30 8.35
8.35 8.40
8.40 8.45
8.45 8.50
8.50 8.55
8.55 8.60
8.60 8.65
8.65 8.70
8.70 8.75
8.75 8.80
8.80 8.85
8.85 8.90
8.90 8.95
8.95 9.00
9.00 9.05
9.05 9.10
9.10 9.15
9.15 9.20
9.20 9.25
9.25 9.30
9.30 9.35
9.35 9.40
9.40 9.45
9.45 9.50
9.50 9.55
9.55 9.60
9.60 9.65
9.65 9.70
9.70 9.75
9.75 9.80
9.80 9.85
9.85 9.90
9.90 9.95
9.95 10.00
10.00 10.05
10.05 10.10
10.10 10.15
###Markdown
To adjust the time bins, pass a dictionary as params.
* set ntbins, the number of time bins, to calculate the time bin sizes using tmin and tmax.
* set tbin, the time bin size, to calculate the number of time bins using tmin and tmax.
###Code
params = {'ntbins': 5}
print(calcsfh_input_parameter(**params))
params = {'tmax': 9.5, 'tmin': 7.5, 'tbin': 0.1}
print(calcsfh_input_parameter(**params))
###Output
1.35 10.000 10.000 0.050 0.000 0.000 0.050
-2.30 0.10 0.10
0.35 0.000001 0.000001
1
0.10 0.05 5 -99.00 99.00 filter1,filter2
-99.00 99.00 filter1
-99.00 99.00 filter2
0 0
20
7.50 7.60
7.60 7.70
7.70 7.80
7.80 7.90
7.90 8.00
8.00 8.10
8.10 8.20
8.20 8.30
8.30 8.40
8.40 8.50
8.50 8.60
8.60 8.70
8.70 8.80
8.80 8.90
8.90 9.00
9.00 9.10
9.10 9.20
9.20 9.30
9.30 9.40
9.40 9.50
###Markdown
Set the CMD limits using the same nomenclature as found in the MATCH README file. You could also add a background file.
###Code
params = {'tmax': 9.5, 'tmin': 7.5, 'tbin': 0.1,
'vmin': 16, 'vmax': 24, 'imin': 18, 'imax': 27, 'v-imin': -0.5, 'v-imax': 2.5,
'v': 'F555W', 'i': 'F814W', 'bg_file': 'bg.dat'}
print(calcsfh_input_parameter(**params))
###Output
1.35 10.000 10.000 0.050 0.000 0.000 0.050
-2.30 0.10 0.10
0.35 0.000001 0.000001
1
0.10 0.05 5 -0.50 2.50 F555W,F814W
16.00 24.00 F555W
18.00 24.00 F814W
0 0
20
7.50 7.60
7.60 7.70
7.70 7.80
7.80 7.90
7.90 8.00
8.00 8.10
8.10 8.20
8.20 8.30
8.30 8.40
8.40 8.50
8.50 8.60
8.60 8.70
8.70 8.80
8.80 8.90
8.90 9.00
9.00 9.10
9.10 9.20
9.20 9.30
9.30 9.40
9.40 9.50
-1 1 -1bg.dat
###Markdown
To use this in your own script, do something like:
###Code
with open('match.param', 'w') as outputfile:
outputfile.write(calcsfh_input_parameter(**params))
! cat match.param
###Output
1.35 10.000 10.000 0.050 0.000 0.000 0.050
-2.30 0.10 0.10
0.35 0.000001 0.000001
1
0.10 0.05 5 -0.50 2.50 F555W,F814W
16.00 24.00 F555W
18.00 24.00 F814W
0 0
20
7.50 7.60
7.60 7.70
7.70 7.80
7.80 7.90
7.90 8.00
8.00 8.10
8.10 8.20
8.20 8.30
8.30 8.40
8.40 8.50
8.50 8.60
8.60 8.70
8.70 8.80
8.80 8.90
8.90 9.00
9.00 9.10
9.10 9.20
9.20 9.30
9.30 9.40
9.40 9.50
-1 1 -1bg.dat
###Markdown
Using different values of tbin
Set tbreak to the value(s) where a different tbin value should take over. tbin should then be an array of length len(tbreak) + 1.
* Have 6.6-9.0 at dt=0.1 and 9.0-10.15 at dt=0.05
###Code
params['tmin'] = 6.6
params['tmax'] = 10.15
params['tbreak'] = [9.0]
params['tbin'] = [0.1, 0.05]
print(calcsfh_input_parameter(**params))
###Output
1.35 10.000 10.000 0.050 0.000 0.000 0.050
-2.30 0.10 0.10
0.35 0.000001 0.000001
1
0.10 0.05 5 -0.50 2.50 F555W,F814W
16.00 24.00 F555W
18.00 24.00 F814W
0 0
49
6.60 6.70
6.70 6.80
6.80 6.90
6.90 7.00
7.00 7.10
7.10 7.20
7.20 7.30
7.30 7.40
7.40 7.50
7.50 7.60
7.60 7.70
7.70 7.80
7.80 7.90
7.90 8.00
8.00 8.10
8.10 8.20
8.20 8.30
8.30 8.40
8.40 8.50
8.50 8.60
8.60 8.70
8.70 8.80
8.80 8.90
8.90 9.00
9.00 9.05
9.05 9.10
9.10 9.15
9.15 9.20
9.20 9.25
9.25 9.30
9.30 9.35
9.35 9.40
9.40 9.45
9.45 9.50
9.50 9.55
9.55 9.60
9.60 9.65
9.65 9.70
9.70 9.75
9.75 9.80
9.80 9.85
9.85 9.90
9.90 9.95
9.95 10.00
10.00 10.05
10.05 10.10
10.10 10.15
10.15 10.20
-1 1 -1bg.dat
###Markdown
* Have 7.0-8.0 at dt=0.1, 8.0-9.0 at dt=0.05, and 9.0-10.0 at dt=0.02
###Code
params['tmin'] = 7.0
params['tmax'] = 10.0
params['tbreak'] = [8.0, 9.0]
params['tbin'] = [0.1, 0.05, 0.02]
print(calcsfh_input_parameter(**params))
calcsfh_input_parameter?
from match.scripts.fileio import calcsfh_dict
calcsfh_dict().keys()
###Output
_____no_output_____ |
Daniel.ipynb | ###Markdown
###Code
# EDA ANALYSIS
# research problem
# Figure out how we can predict which individuals are most likely to have or use a bank account.
# Your solution will help provide an indication of the state of financial inclusion in Kenya, Rwanda, Tanzania, and Uganda,
# while providing insights into some of the key demographic factors that might drive individuals’ financial outcomes.
# Import libraries
import numpy as np
import pandas as pd
from matplotlib import pyplot as plt
import seaborn as sns
%matplotlib inline
# Import dataset
# Load the first 5 records
fd = pd.read_csv('/content/Financial Dataset - 1.csv')
# shape of the data
print(fd.shape)
fd.head(5)
# Renaming columns
fd.columns = fd.columns.str.replace(" ", "_")
fd.columns = [c.lower() for c in fd.columns]
fd
# Missing data
fd.isnull().sum()
# fd
# since we are interested in whether respondents have a bank account, rows missing that value can be dropped
fd1 = fd.dropna(subset=['has_a_bank_account'],axis=0,how='all')
fd1.isnull().sum()
# # perform interpolation for the household and age columns
# fd.iloc[0:6, 6:8].head(10)
# fd.interpolate().iloc[0:6, 6:8]
# fd.isnull().sum()
# clean an invalid numeric category (6) in the education column
# (note: 'level_of_educuation' is the column's spelling in the source data)
fd1['level_of_educuation'] = fd1['level_of_educuation'].replace(6, np.nan)
# forward fill could then handle the remaining categorical gaps:
# fd1 = fd1.fillna(method='ffill')
fd1.isnull().sum()
# Overview of the data
fd1.describe()
# the mean household size is about 3 and the mean respondent age about 38 years
# the standard deviations are roughly 2 (household size) and 16 (age)
# so most respondents are working-age adults clustered around the mean age of 38
# number of unique elements in the dataset
print(fd1.nunique())
# We see that year has 6 unique values instead of the expected 3 (2016-2018), showing some anomalies
# Drop years not within 2016 -2018
value_list = ['2016', '2017', '2018']
fd2 = fd1[fd1.year.isin(value_list)]
print(fd1.shape)
print(fd2.shape)
# fd2.drop_duplicates(['uniqueid'], keep='first')
# data["Number_of_households"] < 50,000
fd_kenya = fd2['country'] == 'Kenya'
print(fd2[fd_kenya].shape)
fd_uganda = fd2['country'] == 'Uganda'
print(fd2[fd_uganda].shape)
fd_tz = fd2['country'] == 'Tanzania'
print(fd2[fd_tz].shape)
fd_rw = fd2['country'] == 'Rwanda'
print(fd2[fd_rw].shape)
fd2[fd_rw].duplicated(subset=['uniqueid']).sum()
# fd2[fd_kenya].drop_duplicates(['uniqueid'], keep=False)
# print(fd2.shape)
# fd2[fd_uganda].drop_duplicates(['uniqueid'], keep=False, inplace=True)
# print(fd2.shape)
# fd2[fd_tz].drop_duplicates(['uniqueid'], keep='last', inplace=True)
# print(fd2.shape)
# fd2[fd_rw].drop_duplicates(['uniqueid'], inplace=True)
# print(fd2.shape)
fd2.info()
# Outliers
# Univariate analysis
fd2['country'].value_counts().plot.bar(title='Freq dist of accounts per country ')
fd2['the_relathip_with_head'].value_counts().plot.bar(title='Freq dist of accounts per rshp household ')
fd2['marital_status'].value_counts().plot.bar(title='Freq dist of accounts on marital status')
fd2['level_of_educuation'].value_counts().plot.bar(title='Freq dist of accounts on level of education')
sns.boxplot(y=fd2['respondent_age'])
sns.boxplot(y=fd2['household_size'])
# coorelation matrix
f, ax = plt.subplots(figsize=(10, 8))
corr = fd2.corr()
sns.heatmap(corr,
xticklabels=corr.columns.values,
yticklabels=corr.columns.values)
fd2
# Frequency table for accounts
fd2.has_a_bank_account.value_counts()
fd2.country.value_counts()
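# illustrative follow-up (not part of the original analysis): share of
# respondents with a bank account per country, via a normalized cross-tab
pd.crosstab(fd2['country'], fd2['has_a_bank_account'], normalize='index')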
plt.hist(fd2['has_a_bank_account'], bins=10, histtype='bar', rwidth=0.9)
###Output
_____no_output_____ |
math-data-cleanup.ipynb | ###Markdown
Load in data saved from the scraping notebook
- At this point we have a csv with the text and various other information from each video lecture.
- The data still isn't labeled.
###Code
import pandas as pd

lectures1 = pd.read_csv('math2019.csv')
lectures2 = pd.read_csv('math20192.csv')
lectures = pd.concat([lectures1.reset_index(drop=True), lectures2.reset_index(drop=True)], axis=0)
lectures.playlist_id.unique()
###Output
_____no_output_____
###Markdown
Add labels to data
- we are going to use our list of playlist IDs and match them with their subject, then create a new target column based on playlist ID
###Code
#load playlist csv
playlist_ids = pd.read_csv('playlists_math.csv')
# fix a malformed entry; use .loc to avoid pandas' chained-assignment trap
playlist_ids.loc[17, 'PlaylistID'] = 'PLUl4u3cNGP61hsJNdULdudlRL493b-XZf'
playlist_ids
#we need some consolidation in terms of target subjects
#we will create a dictionary with the playlist as a key and the subject as the value
subject_keys = playlist_ids.PlaylistID
subject_values = ['Probability','Statistics','CS','Algorithms','AI','Calculus','Calculus','Linear Algebra','Diff. Eq.',
'Linear Algebra','CS','Probability','CS','Algorithms','Robotics','Math for Eng.','Statistics',
'Data Structures','Probability','NLP','CS','Statistics','Algebraic Geometry','Calculus','Calculus',
'Calculus','AI','Various']
subject_lookup = {i:j for i,j in zip(subject_keys,subject_values)}
# function to label a subject for a given video's playlist id
import re
subject_re = re.compile('(%s)' % '|'.join(subject_lookup.keys()))
def label_subjects(s, subject_lookup=subject_lookup):
def replace(match):
return subject_lookup[match.group(0)]
return subject_re.sub(replace, s)
lectures['Subject'] = [ label_subjects(i) for i in lectures.playlist_id]
lectures.head()
lectures[lectures.playlist_id == 'PL8_xPU5epJddl1dmAZWlERA0zplgD0W4E']
import matplotlib.pyplot as plt
import seaborn as sns
subject_counts = lectures.Subject.value_counts().reset_index()
len(subject_counts)
sns.barplot(x='Subject', y='index', data=subject_counts, palette='mako')
chan_cnt = lectures.channelid.value_counts().reset_index()
sns.barplot(x='channelid',y='index',data=chan_cnt, palette='mako')
lectures.isnull().sum()
# correct missing descriptions for the Harvard CS50 lectures
import numpy as np
lectures['description'] = np.where(pd.isnull(lectures.description), 'HAR_CS50', lectures.description)
lectures
###Output
_____no_output_____
###Markdown
Cleaning, tokenizing the text
- The text is pretty messy - we need to clean it up a bit. We do one cleanup for the doc2vec model and another for the tfidf model.

The intros are tricky as the '\n's are not always in the same spot, so we can't use regex. My preference is to remove the first ~300 characters of each lecture. Not all of the lectures start with a long intro like the MIT lectures; however, this approach should not affect the integrity of any one lecture's content.
###Code
lectures.head(2)
#for the doc2vec model, we wont remove the stop words
def make_d2v_data(lectures):
clean_lectures = []
#iterate over the text by lecture
for lecture in lectures:
#skip intro
lecture = lecture[295:]
#tokenize punctuation
for key, token in punt_dict.items():
lecture = lecture.replace(key, ' {} '.format(token))
#expand contractions
for key, expan in contract_dict.items():
lecture = lecture.replace(key, ' {} '.format(expan))
#append clean lecture to list of lectures
clean_lectures.append(lecture)
return clean_lectures
orig_text = pd.read_csv('all_lectures.csv')
d2v_df = pd.read_csv('all_lecture_text.csv')
new_df = pd.concat([orig_text.reset_index(drop=True),d2v_df],axis=1)
new_df.isnull().sum().sum()
new = pd.DataFrame()
new['text'] = orig_text.lecture_text
new['label'] = d2v_df.Subject
new.dropna(inplace=True)
new.head()
new.isnull().sum()
new.shape
new.to_csv('raw_text.csv',index=False)
###Output
_____no_output_____ |
Modeling/ModelingExamples.ipynb | ###Markdown
The Stingray Modeling API Explained
Some more in-depth explanations of how the Stingray modeling API works.
Who should be using this API? Basically, anyone who wants to model power spectral products with parametric functions. The purpose of this API is two-fold:
(1) provide convenient methods and classes in order to model a large range of typical data representations implemented in Stingray
(2) provide a more general framework for users to build their own models
A note on terminology: in this tutorial, we largely use _model_ to denote both the parametric model describing the underlying process that generated the data, and the statistical model used to account for uncertainties in the measurement process. The `modeling` subpackage defines a wider range of classes for typical statistical models than most standard modelling packages in X-ray astronomy, including likelihoods for Gaussian-distributed uncertainties (what astronomers call the $\chi^2$ likelihood), Poisson-distributed data (e.g. light curves) and $\chi^2$-distributed data (confusingly, *not* what astronomers call the $\chi^2$ likelihood, but the likelihood of data with $\chi^2$-distributed uncertainties, appropriate for power spectra). It also defines a superclass `LogLikelihood` that makes extending the framework to other types of data uncertainties straightforward. It supports Bayesian modelling via the `Posterior` class and its subclasses (for different types of data, equivalent to the likelihood classes) and provides support for defining priors. The class `ParameterEstimation` and its data type-specific subclasses implement a range of operations usually done with power spectra and other products, including optimization (fitting), sampling (via Markov Chain Monte Carlo), calibrating model comparison metrics (particularly likelihood ratio tests) and outlier statistics (for finding periodic signal candidates).
Overall, it is designed to be as modular as possible and extensible to new data types and problems in many places, though we explicitly do *not* aim to provide a fully general modelling framework (for example, at the moment, we have given no thought to modeling multi-variate data, though this may change in the future).
Some background
Modeling power spectra and light curves with parametric models is a fairly standard task. Stingray aims to make solving these problems as easy as possible. We aim to integrate our existing code with `astropy.modeling` for maximum compatibility. Please note, however, that we are only using the models, not the fitting interface, which is too constrained for our purposes.
###Code
%load_ext autoreload
%autoreload 2
# ignore warnings to make notebook easier to see online
# COMMENT OUT THESE LINES FOR ACTUAL ANALYSIS
import warnings
warnings.filterwarnings("ignore")
%matplotlib inline
import matplotlib.pyplot as plt
try:
import seaborn as sns
sns.set_palette("colorblind")
except ImportError:
print("Install seaborn. It help you make prettier figures!")
import numpy as np
from astropy.modeling import models
###Output
_____no_output_____
###Markdown
The models and API of `astropy.modeling.models` are explained in the [astropy documentation](http://docs.astropy.org/en/stable/modeling/) in more detail. Here's how you instantiate a simple 1-D Gaussian:
###Code
g = models.Gaussian1D()
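# a Gaussian1D has amplitude, mean and stddev parameters (defaults 1, 0, 1);
# calling g(x) evaluates the model on an array x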
# Generate fake data
np.random.seed(0)
x = np.linspace(-5., 5., 200)
y = 3 * np.exp(-0.5 * (x - 1.3)**2 / 0.8**2)
y += np.random.normal(0., 0.2, x.shape)
yerr = 0.2
plt.figure(figsize=(8,5))
plt.errorbar(x, y, yerr=yerr, fmt='ko')
###Output
_____no_output_____
###Markdown
Likelihoods and Posteriors
In general, model fitting will happen either in a frequentist (Maximum Likelihood) or a Bayesian framework. Stingray's strategy is to let the user define a posterior in both cases, but ignore the prior in the former case. Let's first make some fake data:
###Code
# define power law component
pl = models.PowerLaw1D()
# fix x_0 of power law component
pl.x_0.fixed = True
# define constant
c = models.Const1D()
# make compound model
plc = pl + c
###Output
_____no_output_____
###Markdown
We're going to pick some fairly standard parameters for our data:
###Code
# parameters for fake data.
alpha = 2.0
amplitude = 5.0
white_noise = 2.0
###Output
_____no_output_____
###Markdown
And now a frequency array:
###Code
freq = np.linspace(0.01, 10.0, int(10.0/0.01))
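# 1000 evenly spaced frequency bins between 0.01 and 10, i.e. df ~ 0.01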
###Output
_____no_output_____
###Markdown
Now we can set the parameters in the model:
###Code
from astropy.modeling.fitting import _fitter_to_model_params
_fitter_to_model_params(plc, [amplitude, alpha, white_noise])
psd_shape = plc(freq)
###Output
_____no_output_____
###Markdown
As a last step, we need to add noise by picking from a chi-square distribution with 2 degrees of freedom:
###Code
powers = psd_shape*np.random.chisquare(2, size=psd_shape.shape[0])/2.0
###Output
_____no_output_____
###Markdown
Let's plot the result:
###Code
plt.figure(figsize=(12,7))
plt.loglog(freq, powers, ds="steps-mid", label="periodogram realization")
plt.loglog(freq, psd_shape, label="power spectrum")
plt.legend()
###Output
_____no_output_____
###Markdown
Maximum Likelihood Fitting
Let's assume we've observed this periodogram from our source. We would now like to estimate the parameters. This requires the definition of a *likelihood*, which describes the probability of observing the data plotted above given some underlying model with a specific set of parameters. To say it differently, the likelihood encodes what we know about the underlying model (here a power law and a constant) and the statistical properties of the data (power spectra generally follow a chi-square distribution), and then allows us to compare data and model for various parameters under the assumed statistical uncertainties.
In order to find the best parameter set, one generally maximizes the likelihood function using an optimization algorithm. Because optimization algorithms generally *minimize* functions, in practice one minimizes the *negative* log-likelihood, which is equivalent to maximizing the likelihood itself.
Below is an implementation of the $\chi^2$ likelihood as appropriate for power spectral analysis, with comments for easier understanding. The same is also implemented in `posterior.py` in Stingray:
###Code
logmin = -1e16
class PSDLogLikelihood(object):
def __init__(self, freq, power, model, m=1):
"""
A Chi-square likelihood as appropriate for power spectral analysis.
Parameters
----------
freq : iterable
x-coordinate of the data
power : iterable
y-coordinate of the data
model: an Astropy Model instance
The model to use in the likelihood.
m : int
1/2 of the degrees of freedom, i.e. the number of powers
that were averaged to obtain the power spectrum input into
this routine.
"""
self.x = freq # the x-coordinate of the data (frequency array)
self.y = power # the y-coordinate of the data (powers)
self.model = model # an astropy.models instance
self.m = m
self.params = [k for k,l in self.model.fixed.items() if not l]
self.npar = len(self.params) # number of free parameters
def evaluate(self, pars, neg=False):
"""
Evaluate the log-likelihood.
Parameters
----------
pars : iterable
The list of parameters for which to evaluate the model.
neg : bool, default False
If True, compute the *negative* log-likelihood, otherwise
compute the *positive* log-likelihood.
Returns
-------
loglike : float
The log-likelihood of the model
"""
# raise an error if the length of the parameter array input into
# this method doesn't match the number of free parameters in the model
if np.size(pars) != self.npar:
raise Exception("Input parameters must" +
" match model parameters!")
# set parameters in self.model to the parameter set to be used for
# evaluation
_fitter_to_model_params(self.model, pars)
# compute the values of the model at the positions self.x
mean_model = self.model(self.x)
# if the power spectrum isn't averaged, compute simple exponential
# likelihood (chi-square likelihood for 2 degrees of freedom)
if self.m == 1:
loglike = -np.sum(np.log(mean_model)) - \
np.sum(self.y/mean_model)
# otherwise use chi-square distribution to compute likelihood
else:
loglike = -2.0*self.m*(np.sum(np.log(mean_model)) +
np.sum(self.y/mean_model) +
np.sum((2.0 / (2. * self.m) - 1.0) *
np.log(self.y)))
if not np.isfinite(loglike):
loglike = logmin
if neg:
return -loglike
else:
return loglike
def __call__(self, parameters, neg=False):
return self.evaluate(parameters, neg)
###Output
_____no_output_____
###Markdown
Let's make an object and see what it calculates if we put in different parameter sets. First, we have to make our sample PSD into an actual `Powerspectrum` object:
###Code
from stingray import Powerspectrum
ps = Powerspectrum()
ps.freq = freq
ps.power = powers
ps.df = ps.freq[1] - ps.freq[0]
ps.m = 1
loglike = PSDLogLikelihood(ps.freq, ps.power, plc, m=ps.m)
test_pars = [1, 5, 100]
loglike(test_pars)
test_pars = [4.0, 10, 2.5]
loglike(test_pars)
test_pars = [2.0, 5.0, 2.0]
loglike(test_pars)
###Output
_____no_output_____
###Markdown
Something close to the parameters we put in should yield the largest log-likelihood. Feel free to play around with the test parameters to verify that this is true.You can similarly import the `PSDLogLikelihood` class from `stingray.modeling` and do the same:
###Code
from stingray.modeling import PSDLogLikelihood
loglike = PSDLogLikelihood(ps.freq, ps.power, plc, m=ps.m)
loglike(test_pars)
###Output
_____no_output_____
###Markdown
To estimate the parameters, we can use an optimization routine, such as those implemented in `scipy.optimize.minimize`.
We have wrapped some code around that, to make your lives easier. We will not reproduce the full code here, just demonstrate its functionality.
Now we can instantiate the `PSDParEst` (for PSD Parameter Estimation) object. This can do more than simply optimize a single model, but we'll get to that later.
The `PSDParEst` object allows one to specify the fit method to use (however, this must be one of the optimizers in `scipy.optimize`). The parameter `max_post` allows for doing maximum-a-posteriori fits on the Bayesian posterior rather than maximum likelihood fits (see below for more details). We'll set it to `False` for now, since we haven't defined any priors:
###Code
from stingray.modeling import PSDParEst
parest = PSDParEst(ps, fitmethod="L-BFGS-B", max_post=False)
###Output
_____no_output_____
###Markdown
In order to fit a model, make an instance of the appropriate `LogLikelihood` or `Posterior` subclass, and simply call the `fit` method with that instance and the starting parameters you would like to fit.
###Code
loglike = PSDLogLikelihood(ps.freq, ps.power, plc, m=ps.m)
loglike.model.parameters
loglike.npar
starting_pars = [3.0, 1.0, 2.4]
res = parest.fit(loglike, starting_pars)
###Output
_____no_output_____
###Markdown
The result is an `OptimizationResults` object, which computes various summaries and useful quantities.
For example, here's the final value of the objective function (the negative log-likelihood) at the optimum the optimizer found:
###Code
res.result
###Output
_____no_output_____
###Markdown
**Note**: Optimizers routinely get stuck in *local* minima (corresponding to local maxima of the likelihood function). It is usually useful to run an optimizer several times with different starting parameters in order to get close to the global maximum.
Most useful are the estimates of the parameters at the maximum likelihood and their uncertainties:
###Code
print(res.p_opt)
print(res.err)
###Output
[4.72916493 2.09193061 2.10372265]
[3.78311696 0.7300253 0.55312843]
###Markdown
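Because of the local-minima caveat above, it can pay to refit from several randomized starting points and keep the best result. A minimal sketch (the uniform sampling ranges below are illustrative assumptions, not values prescribed by stingray):
###Code
# hypothetical multi-start loop: keep the fit with the smallest final
# objective value (the minimized negative log-likelihood)
best_res = None
for _ in range(10):
    t0 = [np.random.uniform(0.1, 10.0),  # power law amplitude
          np.random.uniform(0.5, 4.0),   # power law index
          np.random.uniform(0.5, 5.0)]   # white noise level
    candidate = parest.fit(loglike, t0)
    if best_res is None or candidate.result < best_res.result:
        best_res = candidate
###Output
_____no_output_____
###Markdown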
**Note**: uncertainties are estimated here via the covariance matrix between parameters, i.e. the inverse of the Hessian at the maximum. This only represents the true uncertainties for specific assumptions about the likelihood function (Gaussianity), so use with care!
It also computes the Akaike Information Criterion (AIC) and the Bayesian Information Criterion (BIC) for model comparison purposes:
###Code
print("AIC: " + str(res.aic))
print("BIC: " + str(res.bic))
###Output
AIC: 2189.789677035487
BIC: 2204.512942872433
###Markdown
Finally, it also produces the values of the mean function for the parameters at the maximum. Let's plot that and compare with the power spectrum we put in:
###Code
plt.figure(figsize=(12,8))
plt.loglog(ps.freq, psd_shape, label="true power spectrum",lw=3)
plt.loglog(ps.freq, ps.power, label="simulated data")
plt.loglog(ps.freq, res.mfit, label="best fit", lw=3)
plt.legend()
###Output
_____no_output_____
###Markdown
That looks pretty good!
You can print a summary of the fitting results by calling `print_summary`:
###Code
res.print_summary(loglike)
###Output
The best-fit model parameters plus errors are:
0) Parameter amplitude_0 :
4.72916 +/- 3.78312
[ None None]
1) Parameter x_0_0 :
1.00000 (Fixed)
2) Parameter alpha_0 :
2.09193 +/- 0.73003
[ None None]
3) Parameter amplitude_1 :
2.10372 +/- 0.55313
[ None None]
Fitting statistics:
-- number of data points: 1000
-- Deviance [-2 log L] D = 4367.579354.3
-- The Akaike Information Criterion of the model is: 2189.789677035487.
-- The Bayesian Information Criterion of the model is: 2204.512942872433.
-- The figure-of-merit function for this model is: 1079.682849.5f and the fit for 997 dof is 1.082932.3f
-- Summed Residuals S = 69267.121618.5f
-- Expected S ~ 6000.000000.5 +/- 109.544512.5
###Markdown
Likelihood Ratios
The parameter estimation code has more functionality than acting as a simple wrapper around `scipy.optimize`. For example, it allows for easy computation of likelihood ratios. Likelihood ratios are a standard way to perform comparisons between two models (though they are not always statistically meaningful, and should be used with caution!).
To demonstrate that, let's make a broken power law model:
###Code
# broken power law model
bpl = models.BrokenPowerLaw1D()
# add constant
bplc = bpl + c
bplc.param_names
# define starting parameters
bplc_start_pars = [2.0, 1.0, 3.0, 1.0, 2.5]
loglike_bplc = PSDLogLikelihood(ps.freq, ps.power, bplc, m=ps.m)
pval, plc_opt, bplc_opt = parest.compute_lrt(loglike, starting_pars, loglike_bplc, bplc_start_pars)
print("Likelihood Ratio: " + str(pval))
###Output
Likelihood Ratio: 2.2374827070098036
###Markdown
Bayesian Parameter Estimation
For Bayesian parameter estimation, we require a prior along with the likelihood defined above. Together, they form the *posterior*, the probability of the parameters given the data, which is what we generally want to compute in science.
Since there are no universally accepted priors for a model (they depend on the problem at hand and your physical knowledge about the system), they cannot be easily hard-coded in stingray. Consequently, setting priors is slightly more complex. Analogously to the `LogLikelihood` above, we can also define a `Posterior` object. Each posterior object has three methods: `logprior`, `loglikelihood` and `logposterior`. We have pre-defined some `Posterior` objects in `posterior.py` for common problems, including power spectral analysis. We start by making a `PSDPosterior` object:
###Code
from stingray.modeling import PSDPosterior
lpost = PSDPosterior(ps.freq, ps.power, plc, m=ps.m)
###Output
_____no_output_____
###Markdown
The priors are set as a dictionary of functions:
###Code
import scipy.stats
# flat prior for the power law index
p_alpha = lambda alpha: ((-1. <= alpha) & (alpha <= 5.))
# flat prior for the power law amplitude
p_amplitude = lambda amplitude: ((0.01 <= amplitude) & (amplitude <= 10.0))
# normal prior for the white noise parameter
p_whitenoise = lambda white_noise: scipy.stats.norm(2.0, 0.1).pdf(white_noise)
priors = {}
priors["alpha_0"] = p_alpha
priors["amplitude_0"] = p_amplitude
priors["amplitude_1"] = p_whitenoise
###Output
_____no_output_____
###Markdown
There's a function `set_logprior` in `stingray.modeling` that sets the prior correctly:
###Code
from stingray.modeling import set_logprior
lpost.logprior = set_logprior(lpost, priors)
###Output
_____no_output_____
###Markdown
You can also set the priors when you instantiate the posterior object:
###Code
lpost = PSDPosterior(ps.freq, ps.power, plc, priors=priors, m=ps.m)
###Output
_____no_output_____
###Markdown
Much like before with the log-likelihood, we can now also compute the log-posterior for various test parameter sets:
###Code
test_pars = [1.0, 2.0, 4.0]
print("log-prior: " + str(lpost.logprior(test_pars)))
print("log-likelihood: " + str(lpost.loglikelihood(test_pars)))
print("log-posterior: " + str(lpost(test_pars)))
###Output
log-prior: -198.61635344021062
log-likelihood: -2412.2493594640564
log-posterior: -2610.865712904267
###Markdown
When the prior is zero (so the log-prior is -infinity), it automatically gets set to a very small value in order to avoid problems when doing the optimization:
###Code
test_pars = [6, 6, 3.0]
print("log-prior: " + str(lpost.logprior(test_pars)))
print("log-likelihood: " + str(lpost.loglikelihood(test_pars)))
print("log-posterior: " + str(lpost(test_pars)))
test_pars = [5.0, 2.0, 2.0]
print("log-prior: " + str(lpost.logprior(test_pars)))
print("log-likelihood: " + str(lpost.loglikelihood(test_pars)))
print("log-posterior: " + str(lpost(test_pars)))
###Output
log-prior: 1.383646559789373
log-likelihood: -2184.6739536386162
log-posterior: -2183.290307078827
###Markdown
We can do the same parameter estimation as above, except now it's called maximum-a-posteriori instead of maximum likelihood and includes the prior (notice we set `max_post=True`):
###Code
parest = PSDParEst(ps, fitmethod='BFGS', max_post=True)
res = parest.fit(lpost, starting_pars)
print("best-fit parameters:")
for p,e in zip(res.p_opt, res.err):
print("%.4f +/- %.4f"%(p,e))
###Output
best-fit parameters:
4.8949 +/- 0.0762
2.0690 +/- 0.0636
2.0547 +/- 0.0149
###Markdown
The same outputs exist as for the Maximum Likelihood case:
###Code
res.print_summary(lpost)
###Output
The best-fit model parameters plus errors are:
0) Parameter amplitude_0 :
4.89491 +/- 0.07623
[ None None]
1) Parameter x_0_0 :
1.00000 (Fixed)
2) Parameter alpha_0 :
2.06898 +/- 0.06363
[ None None]
3) Parameter amplitude_1 :
2.05471 +/- 0.01489
[ None None]
Fitting statistics:
-- number of data points: 1000
-- Deviance [-2 log L] D = 4367.845867.3
-- The Akaike Information Criterion of the model is: 2188.688941098666.
-- The Bayesian Information Criterion of the model is: 2203.412206935612.
-- The figure-of-merit function for this model is: 1104.686605.5f and the fit for 997 dof is 1.108011.3f
-- Summed Residuals S = 75870.935552.5f
-- Expected S ~ 6000.000000.5 +/- 109.544512.5
###Markdown
Unlike in the maximum likelihood case, we can also *sample* from the posterior probability distribution. The method `sample` uses the [emcee](http://dfm.io/emcee/current/) package to do MCMC.
**Important**: Do *not* sample from the likelihood function. This is formally incorrect and can lead to incorrect inferences about the problem, because there is no guarantee that a posterior with improper (flat, infinite) priors will be bounded!
**Important**: emcee has had a major upgrade to version 3, which came with a number of API changes. To ensure compatibility with stingray, please update emcee to the latest version, if you haven't already.
Much like the optimizer, the sampling method requires a model and a set of starting parameters `t0`. Optionally, it can be useful to also input a covariance matrix, for example from the output of the optimizer.
Finally, the user should specify the number of walkers as well as the number of steps to use for both burn-in and sampling:
###Code
sample = parest.sample(lpost, res.p_opt, cov=res.cov, nwalkers=400,
niter=100, burnin=300, namestr="psd_modeling_test")
###Output
Chains too short to compute autocorrelation lengths.
-- The acceptance fraction is: 0.640200.5
R_hat for the parameters is: [0.33858822 0.00779588 0.00477259]
-- Posterior Summary of Parameters:
parameter mean sd 5% 95%
---------------------------------------------
theta[0] 4.92699673203164 0.5826084748010877 4.001167475075788 5.916405947428704
theta[1] 2.0850162824299567 0.08840420643721274 1.945198565812 2.236054242762929
theta[2] 2.059927524015745 0.06916995745141118 1.944976347964247 2.172179088048585
###Markdown
The sampling method returns an object with various attributes that are useful for further analysis, for example the acceptance fraction:
###Code
sample.acceptance
###Output
_____no_output_____
###Markdown
Or the mean and confidence intervals of the parameters:
###Code
sample.mean
sample.ci
###Output
_____no_output_____
###Markdown
The method `print_results` prints the results:
###Code
sample.print_results()
###Output
-- The acceptance fraction is: 0.640200.5
R_hat for the parameters is: [0.33858822 0.00779588 0.00477259]
-- Posterior Summary of Parameters:
parameter mean sd 5% 95%
---------------------------------------------
theta[0] 4.92699673203164 0.5826084748010877 4.001167475075788 5.916405947428704
theta[1] 2.0850162824299567 0.08840420643721274 1.945198565812 2.236054242762929
theta[2] 2.059927524015745 0.06916995745141118 1.944976347964247 2.172179088048585
###Markdown
Similarly, the method `plot_results` produces a bunch of plots:
###Code
fig = sample.plot_results(nsamples=1000, fig=None, save_plot=True,
filename="modeling_tutorial_mcmc_corner.pdf")
###Output
_____no_output_____
###Markdown
Calibrating Likelihood Ratio Tests
In order to use likelihood ratio tests for model comparison, one must compute the p-value of obtaining a likelihood ratio at least as high as that observed, given that the null hypothesis (the simpler model) is true. The distribution of likelihood ratios under that assumption will only follow an analytical distribution if
* the models are nested, i.e. the simpler model is a special case of the more complex model, *and*
* the parameter values that transform the complex model into the simple one do not lie on the boundary of parameter space.

Imagine e.g. a simple model without a QPO, and a complex model with a QPO, where in order to make the simpler model out of the more complex one you would set the QPO amplitude to zero. However, the amplitude cannot go below zero, thus the critical parameter values transforming the complex into the simple model lie on the boundary of parameter space.
If these two conditions are not given, the observed likelihood ratio must be calibrated via simulations of the simpler model. In general, one should *not* simulate from the best-fit model alone: this ignores the uncertainty in the model parameters, and thus may artificially inflate the significance of the result.
In the purely frequentist (maximum likelihood) case, one does not know the shape of the probability distribution for the parameters. A rough approximation can be obtained by assuming the likelihood surface to be a multi-variate Gaussian, with covariances given by the inverse Fisher information. One may sample from that distribution and then simulate fake data sets using the sampled parameters. Each simulated data set will be fit with both models to compute a likelihood ratio, which is then used to build a distribution of likelihood ratios from the simpler model to compare the observed likelihood ratio to.
In the Bayesian case, one may sample from the posterior for the parameters directly and then use these samples as above to create fake data sets in order to derive a posterior probability distribution for the likelihood ratios and thus a posterior predictive p-value.
For the statistical background of much of this, see [Protassov et al, 2002](http://adsabs.harvard.edu/abs/2002ApJ...571..545P).
Below, we set up code that will do exactly that, for both the frequentist and Bayesian case.
###Code
import copy
def _generate_model(lpost, pars):
"""
Helper function that generates a fake PSD similar to the
one in the data, but with different parameters.
Parameters
----------
lpost : instance of a Posterior or LogLikelihood subclass
The object containing the relevant information about the
data and the model
pars : iterable
A list of parameters to be passed to lpost.model in order
to generate a model data set.
Returns:
--------
model_data : numpy.ndarray
An array of model values for each bin in lpost.x
"""
# get the model
m = lpost.model
# reset the parameters
_fitter_to_model_params(m, pars)
# make a model spectrum
model_data = lpost.model(lpost.x)
return model_data
def _generate_psd(ps, lpost, pars):
"""
Generate a fake power spectrum from a model.
Parameters:
----------
lpost : instance of a Posterior or LogLikelihood subclass
The object containing the relevant information about the
data and the model
pars : iterable
A list of parameters to be passed to lpost.model in order
to generate a model data set.
Returns:
--------
sim_ps : stingray.Powerspectrum object
The simulated Powerspectrum object
"""
model_spectrum = _generate_model(lpost, pars)
# use chi-square distribution to get fake data
model_powers = model_spectrum*np.random.chisquare(2*ps.m,
size=model_spectrum.shape[0])/(2.*ps.m)
sim_ps = copy.copy(ps)
sim_ps.power = model_powers  # note: the Powerspectrum attribute is `power`
return sim_ps
def _compute_pvalue(obs_val, sim):
"""
Compute the p-value given an observed value of a test statistic
and some simulations of that same test statistic.
Parameters
----------
obs_val : float
The observed value of the test statistic in question
sim: iterable
A list or array of simulated values for the test statistic
Returns
-------
pval : float [0, 1]
The p-value for the test statistic given the simulations.
"""
# cast the simulations as a numpy array
sim = np.array(sim)
# find all simulations that are larger than
# the observed value
ntail = sim[sim > obs_val].shape[0]
# divide by the total number of simulations
pval = ntail/sim.shape[0]
return pval
def calibrate_lrt(ps, lpost1, t1, lpost2, t2, sample=None, neg=True, max_post=False,
nsim=1000, niter=200, nwalker=500, burnin=200, namestr="test"):
# set up the ParameterEstimation object
parest = PSDParEst(ps, fitmethod="L-BFGS-B", max_post=False)
# compute the observed likelihood ratio
lrt_obs, res1, res2 = parest.compute_lrt(lpost1, t1,
lpost2, t2,
neg=neg,
max_post=max_post)
# simulate parameter sets from the simpler model
if not max_post:
# using Maximum Likelihood, so I'm going to simulate parameters
# from a multivariate Gaussian
# set up the distribution
mvn = scipy.stats.multivariate_normal(mean=res1.p_opt, cov=res1.cov)
# sample parameters
s_all = mvn.rvs(size=nsim)
else:
if sample is None:
# sample the posterior using MCMC
sample = parest.sample(lpost1, res1.p_opt, cov=res1.cov,
nwalkers=nwalker, niter=niter,
burnin=burnin, namestr=namestr)
# pick nsim samples out of the posterior sample
s_all = sample[np.random.choice(sample.shape[0], nsim, replace=False)]
lrt_sim = np.zeros(nsim)
# now I can loop over all simulated parameter sets to generate a PSD
for i,s in enumerate(s_all):
# generate fake PSD
sim_ps = _generate_psd(ps, lpost1, s)
# make LogLikelihood objects for both:
if not max_post:
sim_lpost1 = PSDLogLikelihood(sim_ps.freq, sim_ps.power,
model=lpost1.model, m=sim_ps.m)
sim_lpost2 = PSDLogLikelihood(sim_ps.freq, sim_ps.power,
model=lpost2.model, m=sim_ps.m)
else:
# make a Posterior object
sim_lpost1 = PSDPosterior(sim_ps.freq, sim_ps.power,
lpost1.model, m=sim_ps.m)
sim_lpost1.logprior = lpost1.logprior
sim_lpost2 = PSDPosterior(sim_ps.freq, sim_ps.power,
lpost2.model, m=sim_ps.m)
sim_lpost2.logprior = lpost2.logprior
parest_sim = PSDParEst(sim_ps, max_post=max_post)
lrt_sim[i], _, _ = parest_sim.compute_lrt(sim_lpost1, t1,
sim_lpost2, t2,
neg=neg,
max_post=max_post)
# now I can compute the p-value:
pval = _compute_pvalue(lrt_obs, lrt_sim)
return pval
pval = calibrate_lrt(ps, loglike, starting_pars,
loglike_bplc, bplc_start_pars,
max_post=False, nsim=100)
print("The p-value for rejecting the simpler model is: " + str(pval))
###Output
The p-value for rejecting the simpler model is: 0.97
###Markdown
As expected, the p-value for rejecting the powerlaw model is fairly large: since we simulated from that model, we would be surprised if it generated a small p-value, causing us to reject this model (note, however, that if the null hypothesis is true, the p-value will be uniformly distributed between 0 and 1; by definition, then, you will get a p-value smaller than or equal to 0.01 in approximately one out of a hundred cases).
We can do the same with the Bayesian model, in which case the result is called a *posterior predictive p-value*, which, in turn, is often used in posterior model checking (not yet implemented!).
We have not yet defined a `PSDPosterior` object for the broken power law model, so let's do that. First, let's define some priors:
###Code
import scipy.stats
# flat prior for the power law indices
p_alpha1 = lambda alpha: ((-1. <= alpha) & (alpha <= 5.))
p_alpha2 = lambda alpha: ((-1. <= alpha) & (alpha <= 5.))
# flat prior for the break frequency
p_x_break = lambda xbreak: ((0.01 <= xbreak) & (10.0 >= xbreak))
# flat prior for the power law amplitude
p_amplitude = lambda amplitude: ((0.01 <= amplitude) & (amplitude <= 10.0))
# normal prior for the white noise parameter
p_whitenoise = lambda white_noise: scipy.stats.norm(2.0, 0.1).pdf(white_noise)
priors = {}
priors["alpha_1_0"] = p_alpha
priors["alpha_2_0"] = p_alpha
priors["amplitude_0"] = p_amplitude
priors["amplitude_1"] = p_whitenoise
priors["x_break_0"] = p_x_break
###Output
_____no_output_____
###Markdown
Now we can set up the `PSDPosterior` object:
###Code
lpost_bplc = PSDPosterior(ps.freq, ps.power, bplc, priors=priors, m=ps.m)
lpost_bplc(bplc_start_pars)
###Output
_____no_output_____
###Markdown
And do the posterior predictive p-value. Since we've already sampled from the simple model, we can pass that sample to the `calibrate_lrt` function in order to cut down on computation time (if the keyword `sample` is not given, it will automatically run MCMC):
###Code
pval = calibrate_lrt(ps, lpost, starting_pars,
lpost_bplc, bplc_start_pars,
sample=sample.samples,
max_post=True, nsim=100)
print("The posterior predictive p-value is: p = " + str(pval))
###Output
The posterior predictive p-value is: p = 1.0
###Markdown
Again, we find that the p-value does not suggest rejecting the powerlaw model.
Of course, a slightly modified version is implemented in `stingray` as a method of the `PSDParEst` class:
###Code
from stingray.modeling import PSDParEst
parest = PSDParEst(ps, fitmethod="BFGS")
pval = parest.calibrate_lrt(lpost, starting_pars, lpost_bplc, bplc_start_pars,
sample=sample.samples, nsim=100, max_post=True, seed=200)
print(pval)
###Output
0.2
###Markdown
Bayesian-ish QPO Searches
When searching for quasi-periodic oscillations (QPOs) in light curves that are not constant (for example because they are bursts or have other types of variability), one must take care that the variable background is accurately modelled (most standard tools assume that the light curve is constant). In [Vaughan et al, 2010](http://adsabs.harvard.edu/abs/2010MNRAS.402..307V), a method was introduced to search for QPOs in the presence of red noise (stochastic variability); in [Huppenkothen et al, 2013](http://adsabs.harvard.edu/abs/2013ApJ...768...87H) it was extended to magnetar bursts, and in [Inglis et al, 2015](http://adsabs.harvard.edu/abs/2015ApJ...798..108I) and [Inglis et al, 2016](http://adsabs.harvard.edu/abs/2016ApJ...833..284I) a similar approach was used to find QPOs in solar flares.
Based on a model for the broadband spectral noise, the algorithm finds the highest outlier in a test statistic based on the data-model residuals (under the assumption that if the broadband model is correct, the test statistic $T_R = \max_j(2 D_j/m_j)$ for $j$ power spectral bins with powers $D_j$ and model powers $m_j$ will be distributed following a $\chi^2$ distribution with two degrees of freedom). The observed test statistic $T_R$ is then compared to a theoretical distribution based on simulated power spectra without an outlier in order to compute a posterior predictive p-value, as above for the likelihood ratio.
Since the concept is very similar to that above, we do not show the full code here. Instead, the p-value can be calculated using the method `calibrate_highest_outlier`, which belongs to the `PSDParEst` class:
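For intuition, the outlier statistic itself is easy to compute by hand. A minimal sketch (illustrative only; it assumes `res` holds the broadband best fit from above and reuses the `lpost` and `ps` objects already defined):
###Code
# illustrative hand computation of the outlier statistic T_R = max_j(2 D_j / m_j)
_fitter_to_model_params(lpost.model, res.p_opt)  # set best-fit parameters
model_powers = lpost.model(ps.freq)              # broadband model powers m_j
t_r = np.max(2.0 * ps.power / model_powers)      # candidate outlier statistic
print(t_r)
###Output
_____no_output_____
###Markdown
The calibrated version, which also simulates the distribution of $T_R$ under the broadband model, is below: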
###Code
# compute highest outlier in the data, and the frequency and index
# where that power occurs
max_power, max_freq, max_ind = parest._compute_highest_outlier(lpost, res)
max_power
pval = parest.calibrate_highest_outlier(lpost, starting_pars, sample=sample,
max_post=True,
nsim=100, niter=200, nwalkers=500,
burnin=200, namestr="test")
pval
###Output
_____no_output_____
###Markdown
Convenience Functions
For convenience, we have implemented some simple functions to reduce the overhead of having to instantiate objects of the various classes.
Note that these convenience functions use similar approaches and guesses in all cases; this might work for some simple quicklook analysis, but when preparing publication-ready results, one should approach the analysis with more care and make sure the options chosen are appropriate for the problem at hand.
Fitting a power spectrum with some model
The code above allows for a lot of freedom in building an appropriate model for your application. However, in everyday life, one might occasionally want to do a quick fit for various applications, without having to go too much into the details. Below is a convenience function written for exactly that purpose.
Please note that while this aims to use reasonable defaults, it is unlikely to produce publication-ready results!
So let's fit a power law and a constant to some data, which we'll create below:
###Code
from stingray import Powerspectrum
m = 1
nfreq = 100000
freq = np.linspace(1, 1000, nfreq)
np.random.seed(100) # set the seed for the random number generator
noise = np.random.exponential(size=nfreq)
model = models.PowerLaw1D() + models.Const1D()
model.x_0_0.fixed = True
alpha_0 = 2.0
amplitude_0 = 100.0
amplitude_1 = 2.0
model.alpha_0 = alpha_0
model.amplitude_0 = amplitude_0
model.amplitude_1 = amplitude_1
p = model(freq)
power = noise * p
ps = Powerspectrum()
ps.freq = freq
ps.power = power
ps.m = m
ps.df = freq[1] - freq[0]
ps.norm = "leahy"
###Output
_____no_output_____
###Markdown
What does this data set look like?
###Code
plt.figure()
plt.loglog(ps.freq, ps.power, ds="steps-mid", lw=2, color="black")
###Output
_____no_output_____
###Markdown
In order to fit this, we'll write a convenience function that can take the power spectrum, a model, some starting parameters and just run with it:
###Code
from stingray.modeling import PSDLogLikelihood, PSDPosterior, PSDParEst
def fit_powerspectrum(ps, model, starting_pars, max_post=False, priors=None,
fitmethod="L-BFGS-B"):
if priors:
lpost = PSDPosterior(ps.freq, ps.power, model, priors=priors, m=ps.m)
else:
lpost = PSDLogLikelihood(ps.freq, ps.power, model, m=ps.m)
parest = PSDParEst(ps, fitmethod=fitmethod, max_post=max_post)
res = parest.fit(lpost, starting_pars, neg=True)
return parest, res
###Output
_____no_output_____
###Markdown
Let's see if it works. We've already defined our model above, but to be explicit, let's define it again:
###Code
model_to_test = models.PowerLaw1D() + models.Const1D()
model_to_test.x_0_0.fixed = True
###Output
_____no_output_____
###Markdown
Now we just need some starting parameters:
###Code
t0 = [80, 1.5, 2.5]
parest, res = fit_powerspectrum(ps, model_to_test, t0)
res.p_opt
###Output
_____no_output_____
###Markdown
Looks like it worked! Let's plot the result, too:
###Code
plt.figure()
plt.loglog(ps.freq, ps.power, ds="steps-mid", lw=2, color="black")
plt.plot(ps.freq, res.mfit, lw=3, color="red")
###Output
_____no_output_____
###Markdown
You can find the function in the `scripts` sub-module:
###Code
from stingray.modeling.scripts import fit_powerspectrum
parest, res = fit_powerspectrum(ps, model_to_test, t0)
res.p_opt
###Output
_____no_output_____
###Markdown
Fitting Lorentzians
Fitting Lorentzians to power spectra is a routine task for many astronomers, hence there is a function that can produce either Maximum Likelihood or Maximum-A-Posteriori fits of the data.
###Code
l = models.Lorentz1D
l.param_names
def fit_lorentzians(ps, nlor, starting_pars, fit_whitenoise=True, max_post=False, priors=None,
fitmethod="L-BFGS-B"):
model = models.Lorentz1D()
if nlor > 1:
for i in range(nlor-1):
model += models.Lorentz1D()
if fit_whitenoise:
model += models.Const1D()
parest = PSDParEst(ps, fitmethod=fitmethod, max_post=max_post)
lpost = PSDPosterior(ps.freq, ps.power, model, priors=priors, m=ps.m)
res = parest.fit(lpost, starting_pars, neg=True)
return parest, res
###Output
_____no_output_____
###Markdown
Let's make a dataset so we can test it!
###Code
np.random.seed(400)
nlor = 3
x_0_0 = 0.5
x_0_1 = 2.0
x_0_2 = 7.5
amplitude_0 = 150.0
amplitude_1 = 50.0
amplitude_2 = 15.0
fwhm_0 = 0.1
fwhm_1 = 1.0
fwhm_2 = 0.5
whitenoise = 2.0
model = models.Lorentz1D(amplitude_0, x_0_0, fwhm_0) + \
models.Lorentz1D(amplitude_1, x_0_1, fwhm_1) + \
models.Lorentz1D(amplitude_2, x_0_2, fwhm_2) + \
models.Const1D(whitenoise)
p = model(ps.freq)
noise = np.random.exponential(size=len(ps.freq))
power = p*noise
plt.figure()
plt.loglog(ps.freq, power, lw=1, ds="steps-mid", c="black")
plt.loglog(ps.freq, p, lw=3, color="red")
###Output
_____no_output_____
###Markdown
Let's make this into a `Powerspectrum` object:
###Code
import copy
ps_new = copy.copy(ps)
ps_new.power = power
###Output
_____no_output_____
###Markdown
So now we can fit this model with our new function, but first, we need to define the starting parameters for our fit. The starting parameters will be `[amplitude, x_0, fwhm]` for each component plus the white noise component at the end:
###Code
t0 = [150, 0.4, 0.2, 50, 2.3, 0.6, 20, 8.0, 0.4, 2.1]
parest, res = fit_lorentzians(ps_new, nlor, t0)
###Output
_____no_output_____
###Markdown
Let's look at the output:
###Code
res.p_opt
###Output
_____no_output_____
###Markdown
Cool, that seems to work! For convenience `PSDParEst` also has a plotting function:
###Code
parest.plotfits(res, save_plot=False, namestr="lorentzian_test")
###Output
_____no_output_____
###Markdown
The function exists in the library as well for ease of use:
###Code
from stingray.modeling import fit_lorentzians
parest, res = fit_lorentzians(ps_new, nlor, t0)
res.p_opt
###Output
_____no_output_____ |
01 Data Analysis and Pre-processing/Visualization/04 Plotly Tutorial for Beginners.ipynb | ###Markdown
1. Line Charts
Line Charts Example: Citation and Teaching vs World Rank of Top 100 Universities
* Import graph_objs as *go*
* Creating traces
    * x = x axis
    * y = y axis
    * mode = type of plot, like marker, line or line + markers
    * name = name of the plots
    * marker = marker is used with a dictionary
        * color = color of lines. It takes RGB (red, green, blue) and opacity (alpha)
    * text = the hover text (hover is the cursor)
* data = a list that we add traces into
* layout = a dictionary
    * title = title of layout
    * x axis = a dictionary
        * title = label of x axis
        * ticklen = length of x axis ticks
        * zeroline = showing zero line or not
* fig = includes data and layout
* iplot() = plots the figure (fig) that is created by data and layout
###Code
# prepare data frame
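# timesData is assumed to be loaded earlier in the notebook,
# e.g. timesData = pd.read_csv('timesData.csv')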
df = timesData.iloc[:100,:]
# import graph objects as "go"
import plotly.graph_objs as go
# Creating trace1
trace1 = go.Scatter(
x = df.world_rank,
y = df.citations,
mode = "lines",
name = "citations",
marker = dict(color = 'rgba(16, 112, 2, 0.8)'),
text= df.university_name)
# Creating trace2
trace2 = go.Scatter(
x = df.world_rank,
y = df.teaching,
mode = "lines+markers",
name = "teaching",
marker = dict(color = 'rgba(80, 26, 80, 0.8)'),
text= df.university_name)
data = [trace1, trace2]
layout = dict(title = 'Citation and Teaching vs World Rank of Top 100 Universities',
xaxis= dict(title= 'World Rank',ticklen= 5,zeroline= False)
)
fig = dict(data = data, layout = layout)
iplot(fig)
###Output
_____no_output_____
###Markdown
Scatter
Scatter Example: Citation vs World Rank of top 100 universities for the 2014, 2015 and 2016 years
* Import graph_objs as *go*
* Creating traces
    * x = x axis
    * y = y axis
    * mode = type of plot, like marker, line or line + markers
    * name = name of the plots
    * marker = marker is used with a dictionary
        * color = color of lines. It takes RGB (red, green, blue) and opacity (alpha)
    * text = the hover text (hover is the cursor)
* data = a list that we add traces into
* layout = a dictionary
    * title = title of layout
    * x axis = a dictionary
        * title = label of x axis
        * ticklen = length of x axis ticks
        * zeroline = showing zero line or not
    * y axis = a dictionary, same as x axis
* fig = includes data and layout
* iplot() = plots the figure (fig) that is created by data and layout
###Code
# prepare data frames
df2014 = timesData[timesData.year == 2014].iloc[:100,:]
df2015 = timesData[timesData.year == 2015].iloc[:100,:]
df2016 = timesData[timesData.year == 2016].iloc[:100,:]
import plotly.graph_objs as go
# creating trace1
trace1 =go.Scatter(
x = df2014.world_rank,
y = df2014.citations,
mode = "markers",
name = "2014",
marker = dict(color = 'rgba(255, 128, 255, 0.8)'),
text= df2014.university_name)
# creating trace2
trace2 =go.Scatter(
x = df2015.world_rank,
y = df2015.citations,
mode = "markers",
name = "2015",
marker = dict(color = 'rgba(255, 128, 2, 0.8)'),
text= df2015.university_name)
# creating trace3
trace3 =go.Scatter(
x = df2016.world_rank,
y = df2016.citations,
mode = "markers",
name = "2016",
marker = dict(color = 'rgba(0, 255, 200, 0.8)'),
text= df2016.university_name)
data = [trace1, trace2, trace3]
layout = dict(title = 'Citation vs world rank of top 100 universities with 2014, 2015 and 2016 years',
xaxis= dict(title= 'World Rank',ticklen= 5,zeroline= False),
yaxis= dict(title= 'Citation',ticklen= 5,zeroline= False)
)
fig = dict(data = data, layout = layout)
iplot(fig)
###Output
_____no_output_____
###Markdown
Bar Charts First Bar Charts Example: citations and teaching of top 10 universities in 2014 (style1) * Import graph_objs as *go* * Creating traces * x = x axis * y = y axis * mode = type of plot like marker, line or line + markers * name = name of the plots * marker = marker settings, given as a dictionary * color = color of lines. It takes RGB (red, green, blue) and opacity (alpha) * line = a dictionary; the line around the bars * color = line color around bars * text = the hover text (hover is the cursor) * data = a list that we add traces into * layout = a dictionary * barmode = bar mode of bars, like grouped * fig = it includes data and layout * iplot() = plots the figure (fig) that is created by data and layout
###Code
# prepare data frames
df2014 = timesData[timesData.year == 2014].iloc[:10,:]
import plotly.graph_objs as go
# create trace1
trace1 = go.Bar(
x = df2014.university_name,
y = df2014.citations,
name = "citations",
marker = dict(color = 'rgba(255, 174, 255, 0.5)',
line=dict(color='rgb(0,0,0)',width=1.5)),
text = df2014.country)
# create trace2
trace2 = go.Bar(
x = df2014.university_name,
y = df2014.teaching,
name = "teaching",
marker = dict(color = 'rgba(255, 255, 128, 0.5)',
line=dict(color='rgb(0,0,0)',width=1.5)),
text = df2014.country)
data = [trace1, trace2]
layout = go.Layout(barmode = "group")
fig = go.Figure(data = data, layout = layout)
iplot(fig)
###Output
_____no_output_____
###Markdown
Second Bar Charts Example: citations and teaching of top 10 universities in 2014 (style2) Actually, if you change only the barmode from *group* to *relative* in the previous example, you achieve what we do here. However, for variety I use a different syntax. * Import graph_objs as *go* * Creating traces * x = x axis * y = y axis * name = name of the plots * type = type of plot, like bar plot * data = a list that we add traces into * layout = a dictionary * xaxis = label of x axis * barmode = bar mode of bars, like grouped (previous example) or relative * title = title of layout * fig = it includes data and layout * iplot() = plots the figure (fig) that is created by data and layout
###Code
# prepare data frames
df2014 = timesData[timesData.year == 2014].iloc[:10,:]
import plotly.graph_objs as go
x = df2014.university_name
trace1 = {
'x': x,
'y': df2014.citations,
'name': 'citation',
'type': 'bar'
};
trace2 = {
'x': x,
'y': df2014.teaching,
'name': 'teaching',
'type': 'bar'
};
data = [trace1, trace2];
layout = {
'xaxis': {'title': 'Top 10 universities'},
'barmode': 'relative',
'title': 'citations and teaching of top 10 universities in 2014'
};
fig = go.Figure(data = data, layout = layout)
iplot(fig)
###Output
_____no_output_____
###Markdown
Third Bar Charts Example: Horizontal bar charts (style3), citation vs income for universities * Import graph_objs as *go* and import tools * tools: used for subplots * Creating trace1 * bar: bar plot * x = x axis * y = y axis * marker * color: color of bars * line: bar line color and width * name: name of bar * orientation: orientation, like horizontal * Creating trace2 * scatter: scatter plot * x = x axis * y = y axis * mode: scatter type, e.g. lines, lines + markers or only markers * line: properties of line * color: color of line * name: name of scatter plot * layout: axis, legend, margin, paper and plot properties
###Code
import plotly.graph_objs as go
from plotly import tools
df2016 = timesData[timesData.year == 2016].iloc[:7,:]
y_saving = [each for each in df2016.research]
y_net_worth = [float(each) for each in df2016.income]
x_saving = [each for each in df2016.university_name]
x_net_worth = [each for each in df2016.university_name]
trace0 = go.Bar(
x=y_saving,
y=x_saving,
marker=dict(color='rgba(171, 50, 96, 0.6)',line=dict(color='rgba(171, 50, 96, 1.0)',width=1)),
name='research',
orientation='h',
)
trace1 = go.Scatter(
x=y_net_worth,
y=x_net_worth,
mode='lines+markers',
line=dict(color='rgb(63, 72, 204)'),
name='income',
)
layout = dict(
title='Citations and income',
yaxis=dict(showticklabels=True, domain=[0, 0.85]),
yaxis2=dict(showline=True, showticklabels=False,
linecolor='rgba(102, 102, 102, 0.8)', linewidth=2, domain=[0, 0.85]),
xaxis=dict(zeroline=False, showline=False,
showticklabels=True, showgrid=True, domain=[0, 0.42]),
xaxis2=dict(zeroline=False, showline=False, showticklabels=True, showgrid=True,
domain=[0.47, 1], side='top', dtick=25),
legend=dict(x=0.029, y=1.038, font=dict(size=10) ),
margin=dict(l=200, r=20, t=70, b=70),
paper_bgcolor='rgb(248, 248, 255)',
plot_bgcolor='rgb(248, 248, 255)',
)
annotations = []
y_s = np.round(y_saving, decimals=2)
y_nw = np.rint(y_net_worth)
# Adding labels
for ydn, yd, xd in zip(y_nw, y_s, x_saving):
# labeling the scatter savings
annotations.append(dict(xref='x2', yref='y2', y=xd, x=ydn - 4,text='{:,}'.format(ydn),
font=dict(family='Arial', size=12, color='rgb(63, 72, 204)'), showarrow=False))
# labeling the bar net worth
annotations.append(dict(xref='x1', yref='y1', y=xd, x=yd + 3,text=str(yd),
font=dict(family='Arial', size=12, color='rgb(171, 50, 96)'), showarrow=False))
layout['annotations'] = annotations
# Creating two subplots
fig = tools.make_subplots(rows=1, cols=2,
specs=[[{}, {}]],
shared_xaxes=True, shared_yaxes=False, vertical_spacing=0.001)
fig.append_trace(trace0, 1, 1)
fig.append_trace(trace1, 1, 2)
fig['layout'].update(layout)
iplot(fig)
###Output
C:\anaconda3\envs\keras\lib\site-packages\plotly\tools.py:465: DeprecationWarning:
plotly.tools.make_subplots is deprecated, please use plotly.subplots.make_subplots instead
###Markdown
Pie Charts Pie Charts Example: student rates of the top 7 universities in 2016 (a graph_objs variant follows the example below) * fig: create figures * data: plot type * values: values of plot * labels: labels of plot * name: name of plots * hoverinfo: information shown on hover * hole: hole width * type: plot type, like pie * layout: layout of plot * title: title of layout * annotations: font, showarrow, text, x, y
###Code
# data preparation
df2016 = timesData[timesData.year == 2016].iloc[:7,:]
pie1 = df2016.num_students
pie1_list = [float(each.replace(',', '.')) for each in df2016.num_students]  # str(2,4) => str(2.4) => float(2.4) = 2.4
labels = df2016.university_name
# figure
fig = {
"data": [
{
"values": pie1_list,
"labels": labels,
"domain": {"x": [0, .5]},
"name": "Number Of Students Rates",
"hoverinfo":"label+percent+name",
"hole": .3,
"type": "pie"
},],
"layout": {
"title":"Universities Number of Students rates",
"annotations": [
{ "font": { "size": 20},
"showarrow": False,
"text": "Number of Students",
"x": 0.1,
"y": 1.1,
},
]
}
}
iplot(fig)
###Output
_____no_output_____
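###Markdown
For reference, the same donut can be built with graph_objs instead of a raw figure dict. A minimal sketch, reusing pie1_list and labels from the cell above:
###Code
# Sketch only: graph_objs version of the donut above.
fig = go.Figure(
    data=[go.Pie(values=pie1_list, labels=labels, hole=.3,
                 hoverinfo="label+percent+name",
                 name="Number Of Students Rates")],
    layout=go.Layout(title="Universities Number of Students rates"))
iplot(fig)
###Output
_____no_output_____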
###Markdown
Bubble Charts Bubble Charts Example: University world rank (first 20) vs teaching score with number of students (size) and international score (color) in 2016 (an explicit size-scaling variant follows the example below) * x = x axis * y = y axis * mode = markers (scatter) * marker = marker properties * color = third dimension of plot: international score * size = fourth dimension of plot: number of students * text: university names
###Code
# data preparation
df2016 = timesData[timesData.year == 2016].iloc[:20,:]
num_students_size = [float(each.replace(',', '.')) for each in df2016.num_students]
international_color = [float(each) for each in df2016.international]
data = [
{
'y': df2016.teaching,
'x': df2016.world_rank,
'mode': 'markers',
'marker': {
'color': international_color,
'size': num_students_size,
'showscale': True
},
"text" : df2016.university_name
}
]
iplot(data)
###Output
_____no_output_____
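###Markdown
If the raw sizes produce unreadable bubbles, plotly's marker settings can scale them explicitly. A minimal sketch (sizemode, sizeref and sizemin are standard marker attributes; the sizeref formula is plotly's recommended scaling, capping the largest bubble near 40 px):
###Code
# Sketch only: the bubble chart above with explicit area-based size scaling.
data = [
    {
        'y': df2016.teaching,
        'x': df2016.world_rank,
        'mode': 'markers',
        'marker': {
            'color': international_color,
            'size': num_students_size,
            'sizemode': 'area',
            'sizeref': 2. * max(num_students_size) / (40. ** 2),
            'sizemin': 4,
            'showscale': True
        },
        "text" : df2016.university_name
    }
]
iplot(data)
###Output
_____no_output_____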
###Markdown
Histogram Let's look at the histogram of students-staff ratio for the years 2011 and 2012 (a stacked variant follows the example below). * trace1 = first histogram * x = x axis * y = y axis * opacity = opacity of histogram * name = name of legend * marker = color of histogram * trace2 = second histogram * layout = layout * barmode = mode of histogram, like overlay. You can also change it to *stack*
###Code
# prepare data
x2011 = timesData.student_staff_ratio[timesData.year == 2011]
x2012 = timesData.student_staff_ratio[timesData.year == 2012]
trace1 = go.Histogram(
x=x2011,
opacity=0.75,
name = "2011",
marker=dict(color='rgba(171, 50, 96, 0.6)'))
trace2 = go.Histogram(
x=x2012,
opacity=0.75,
name = "2012",
marker=dict(color='rgba(12, 50, 196, 0.6)'))
data = [trace1, trace2]
layout = go.Layout(barmode='overlay',
title=' students-staff ratio in 2011 and 2012',
xaxis=dict(title='students-staff ratio'),
yaxis=dict( title='Count'),
)
fig = go.Figure(data=data, layout=layout)
iplot(fig)
###Output
_____no_output_____
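###Markdown
As mentioned above, switching barmode to *stack* stacks the two histograms instead of overlaying them. A minimal variant reusing trace1 and trace2 from the previous cell:
###Code
# Sketch only: stacked version of the histogram above.
layout = go.Layout(barmode='stack',
                   title='students-staff ratio in 2011 and 2012 (stacked)',
                   xaxis=dict(title='students-staff ratio'),
                   yaxis=dict(title='Count'),
                   )
fig = go.Figure(data=[trace1, trace2], layout=layout)
iplot(fig)
###Output
_____no_output_____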
###Markdown
Word Cloud Not a plotly chart, but worth learning for visualization. Let's look at which country is mentioned most in 2011 (a quick numeric cross-check follows the example below). * WordCloud = word cloud library that I import at the beginning of the kernel * background_color = background color * generate = generates a word cloud from the country name list (x2011)
###Code
# data prepararion
x2011 = timesData.country[timesData.year == 2011]
plt.subplots(figsize=(8,8),dpi=300)
wordcloud = WordCloud(
background_color='white',
width=512,
height=384
).generate(" ".join(x2011))
plt.imshow(wordcloud)
plt.axis('off')
plt.savefig('graph.png')
plt.show()
###Output
_____no_output_____
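###Markdown
A quick numeric cross-check of the cloud, using only the standard library:
###Code
# Sketch only: the five most frequently mentioned countries in 2011.
import collections
print(collections.Counter(x2011).most_common(5))
###Output
_____no_output_____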
###Markdown
Box Plots * Box Plots * Median (50th percentile) = middle value of the data set: sort the data and take the value in the middle. It is also called the 50th percentile, meaning 50% of the data is less than the median. * 25th percentile = quartile 1 (Q1), the lower quartile * 75th percentile = quartile 3 (Q3), the upper quartile * height of box = IQR = interquartile range = Q3-Q1 * Whiskers = 1.5 * IQR from Q1 and Q3 * Outliers = points more than 1.5 * IQR beyond Q1 or Q3, commonly. A numeric check of these statistics follows the example below. * trace = box * y = data we want to visualize with the box plot * marker = color
###Code
# data preparation
x2015 = timesData[timesData.year == 2015]
trace0 = go.Box(
y=x2015.total_score,
name = 'total score of universities in 2015',
marker = dict(
color = 'rgb(12, 12, 140)',
)
)
trace1 = go.Box(
y=x2015.research,
name = 'research of universities in 2015',
marker = dict(
color = 'rgb(12, 128, 128)',
)
)
data = [trace0, trace1]
iplot(data)
###Output
_____no_output_____
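###Markdown
A numeric check of the box-plot statistics defined above. A minimal sketch, assuming pandas (pd) and numpy (np) are already imported, and coercing non-numeric scores to NaN:
###Code
# Sketch only: compute Q1, median, Q3, IQR and whisker reach for total_score.
scores = pd.to_numeric(x2015.total_score, errors='coerce').dropna()
q1, med, q3 = np.percentile(scores, [25, 50, 75])
iqr = q3 - q1
print('Q1:', q1, 'median:', med, 'Q3:', q3, 'IQR:', iqr)
print('whiskers reach from', q1 - 1.5 * iqr, 'to', q3 + 1.5 * iqr)
###Output
_____no_output_____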
###Markdown
Scatter Matrix Plots Scatter Matrix = it helps us to see covariance and relations between more than 2 features * import figure factory as ff * create_scatterplotmatrix = creates the scatter plot * data2015 = prepared data. It includes research, international and total scores with index from 1 to 401 * colormap = color map of scatter plot * colormap_type = color type of scatter plot * height and width
###Code
import plotly.figure_factory as ff
dataframe = timesData[timesData.year == 2015]
data2015 = dataframe.loc[:,["research","international", "total_score"]]
data2015["index"] = np.arange(1,len(data2015)+1)
# scatter matrix
fig = ff.create_scatterplotmatrix(data2015, diag='box', index='index', colormap='Portland',
colormap_type='cat',
height=700, width=700)
iplot(fig)
###Output
_____no_output_____
###Markdown
Inset Plots
###Code
# first line plot
trace1 = go.Scatter(
x=dataframe.world_rank,
y=dataframe.teaching,
name = "teaching",
marker = dict(color = 'rgba(16, 112, 2, 0.8)'),
)
# second line plot
trace2 = go.Scatter(
x=dataframe.world_rank,
y=dataframe.income,
xaxis='x2',
yaxis='y2',
name = "income",
marker = dict(color = 'rgba(160, 112, 20, 0.8)'),
)
data = [trace1, trace2]
layout = go.Layout(
xaxis2=dict(
domain=[0.6, 0.95],
anchor='y2',
),
yaxis2=dict(
domain=[0.6, 0.95],
anchor='x2',
),
title = 'Income and Teaching vs World Rank of Universities'
)
fig = go.Figure(data=data, layout=layout)
iplot(fig)
###Output
_____no_output_____
###Markdown
3D Scatter Plot with Colorscaling 3D Scatter: Sometimes 2D is not enough to understand data, so adding one more dimension increases the intelligibility of the data. We will even add color, which acts as a 4th dimension (a colorscale variant follows the example below). * go.Scatter3d: create 3d scatter plot * x,y,z: axes of plots * mode: marker, that is scatter * size: marker size * color: values mapped onto the colorscale * colorscale: effectively the 4th dimension
###Code
# create trace 1 that is 3d scatter
trace1 = go.Scatter3d(
x=dataframe.world_rank,
y=dataframe.research,
z=dataframe.citations,
mode='markers',
marker=dict(
size=10,
color='rgb(255,0,0)', # set color to an array/list of desired values
)
)
data = [trace1]
layout = go.Layout(
margin=dict(
l=0,
r=0,
b=0,
t=0
)
)
fig = go.Figure(data=data, layout=layout)
iplot(fig)
###Output
_____no_output_____
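###Markdown
To make color a real 4th dimension, pass a numeric column instead of a fixed color and name a colorscale. A minimal sketch (assumes total_score can be coerced to numeric with pandas):
###Code
# Sketch only: 3d scatter with total_score mapped onto the Viridis colorscale.
trace1 = go.Scatter3d(
    x=dataframe.world_rank,
    y=dataframe.research,
    z=dataframe.citations,
    mode='markers',
    marker=dict(
        size=5,
        color=pd.to_numeric(dataframe.total_score, errors='coerce'),  # 4th dimension
        colorscale='Viridis',
        showscale=True
    )
)
iplot(go.Figure(data=[trace1]))
###Output
_____no_output_____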
###Markdown
Multiple Subplots Multiple Subplots: While comparing more than one feature, multiple subplots can be useful.
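The deprecation warning earlier pointed at plotly.subplots; a minimal make_subplots sketch of the same 2x2 grid (assuming plotly >= 4) comes first, and the original hand-built axis-domain version follows.
###Code
# Sketch only: the 2x2 grid via plotly.subplots.make_subplots.
from plotly.subplots import make_subplots
fig = make_subplots(rows=2, cols=2,
                    subplot_titles=('research', 'citations', 'income', 'total_score'))
fig.add_trace(go.Scatter(x=dataframe.world_rank, y=dataframe.research, name='research'), row=1, col=1)
fig.add_trace(go.Scatter(x=dataframe.world_rank, y=dataframe.citations, name='citations'), row=1, col=2)
fig.add_trace(go.Scatter(x=dataframe.world_rank, y=dataframe.income, name='income'), row=2, col=1)
fig.add_trace(go.Scatter(x=dataframe.world_rank, y=dataframe.total_score, name='total_score'), row=2, col=2)
iplot(fig)
###Output
_____no_output_____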
###Code
trace1 = go.Scatter(
x=dataframe.world_rank,
y=dataframe.research,
name = "research"
)
trace2 = go.Scatter(
x=dataframe.world_rank,
y=dataframe.citations,
xaxis='x2',
yaxis='y2',
name = "citations"
)
trace3 = go.Scatter(
x=dataframe.world_rank,
y=dataframe.income,
xaxis='x3',
yaxis='y3',
name = "income"
)
trace4 = go.Scatter(
x=dataframe.world_rank,
y=dataframe.total_score,
xaxis='x4',
yaxis='y4',
name = "total_score"
)
data = [trace1, trace2, trace3, trace4]
layout = go.Layout(
xaxis=dict(
domain=[0, 0.45]
),
yaxis=dict(
domain=[0, 0.45]
),
xaxis2=dict(
domain=[0.55, 1]
),
xaxis3=dict(
domain=[0, 0.45],
anchor='y3'
),
xaxis4=dict(
domain=[0.55, 1],
anchor='y4'
),
yaxis2=dict(
domain=[0, 0.45],
anchor='x2'
),
yaxis3=dict(
domain=[0.55, 1]
),
yaxis4=dict(
domain=[0.55, 1],
anchor='x4'
),
title = 'Research, citation, income and total score VS World Rank of Universities'
)
fig = go.Figure(data=data, layout=layout)
iplot(fig)
###Output
_____no_output_____ |
notebooks/exploratory/rdnfn-1-geopandas.ipynb | ###Markdown
Comparison between Biotope and Vegetation data Compares the two different shapefiles found in the Chernobyl data set.
###Code
import geopandas as gpd
import pathlib
import matplotlib.pyplot as plt
# from src.constants import GWS_DATA_DIR
GWS_DATA_DIR = pathlib.Path("/gws/nopw/j04/ai4er/guided-team-challenge/2021/biodiversity")
# Getting biotope data
bio_path = GWS_DATA_DIR / "chernobyl_habitat_data" / "Biotope_EUNIS_ver1.shp"
bio_data = gpd.read_file(bio_path)
# getting vegetation data
veg_path = GWS_DATA_DIR / "chernobyl_habitat_data" / "Vegetation_mape.shp"
veg_data = gpd.read_file(veg_path)
fig, (ax1,ax2) = plt.subplots(1,2, figsize=(10,4))
ax1.set_title("bio_data")
bio_data.plot(ax=ax1)
ax2.set_title("veg_data")
veg_data.plot(ax=ax2)
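# (Sketch, not part of the original run) Before comparing the two layers spatially,
# confirm they share a coordinate reference system; reproject one if they differ.
print(bio_data.crs, veg_data.crs)
# bio_data = bio_data.to_crs(veg_data.crs)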
import folium
# TODO: fix coordinates to actual location
m2 = folium.Map([51.386998452, 30.092666296],
zoom_start=8,
tiles='cartodbpositron')
# This block adds the data provided by Tom and Adham
# This adds a number for each category for color coding
bio_data['Eunis_name_num'] = bio_data.Eunis_name.astype('category').cat.codes.astype('int64')
# Adding the colored polygons for both datasets
bio_choropleth = folium.Choropleth(bio_data, data=bio_data, key_on='feature.properties.OBJECTID',
columns=['OBJECTID','Eunis_name_num'], fill_color= 'YlOrBr',
name="bio_data")
bio_choropleth.add_to(m2)
# Adding the labels
folium.features.GeoJsonPopup(fields=['Eunis_name'], labels=True ).add_to(bio_choropleth.geojson)
veg_data['index'] = veg_data.index
veg_choropleth = folium.Choropleth(veg_data, data=veg_data, key_on='feature.properties.index',
columns=['index','Vegetation'], fill_color='YlOrBr',
name="veg_data")
veg_choropleth.add_to(m2)
# Adding more layers (satellite and openstreetmap)
folium.TileLayer(tiles='OpenStreetMap').add_to(m2)
folium.TileLayer(
tiles = 'https://server.arcgisonline.com/ArcGIS/rest/services/World_Imagery/MapServer/tile/{z}/{y}/{x}',
attr = 'Esri',
name = 'Esri Satellite',
overlay = False,
control = True
).add_to(m2)
# Adding geojson files of exclusion zone from Simon Mathis
exclusion_json_path = GWS_DATA_DIR / "chernobyl_exclusion_zone_v1.geojson"
exc_data = gpd.read_file(exclusion_json_path)
def get_style_function(color = '#ff0000'):
return lambda x: {'fillColor': color, 'color': color}
colors = ['#000000','#ffff99','#ff9933','#990000','#ff0000','#000000']
for index, row in exc_data.iterrows():
folium.GeoJson(row['geometry'], name=row['name'], style_function=get_style_function(colors[index])).add_to(m2)
# Adding layer control legend
# (needs to be after all layers added)
folium.LayerControl().add_to(m2)
m2
# Btw this is probably where EUNIS comes from:
# https://eunis.eea.europa.eu/
display("Bio_data", bio_data.head(3))
display("Veg_data", veg_data.head(3))
# Get a list of the Eunis labels
bio_data.Eunis_name.unique().tolist()
#print("Number of polygons:", len(bio_data))
len(veg_data)
len(bio_data)
merged_data = veg_data.merge(bio_data, on='AREA') # validate="one_to_one")
print(len(merged_data))
merged_data.head()
# Number of double area occurances
len(veg_data) - len(veg_data.AREA.value_counts())
len(bio_data) - len(bio_data.AREA.value_counts())
###Output
_____no_output_____ |
notebooks/RNAseq_SNV_WF_DEV.ipynb | ###Markdown
Create example bash calls from workflow Works best with a non-restarted WF
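The cells in this notebook assume an authenticated sevenbridges-python client named api; a minimal setup sketch (assumed, not part of the original notebook) is shown first.
###Code
# Sketch only: client setup assumed by the cells below.
import re
import sys
import pdb
import concurrent.futures
import sevenbridges as sbg

# Assumes credentials live in the default ~/.sevenbridges/credentials profile.
api = sbg.Api(config=sbg.Config(profile='default'))
###Output
_____no_output_____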
###Code
project = "d3b-bixu/rs-vpf5jbc3-cov-irt-controlled-access-study"
task_id = "9aeaeaf3-0e59-4c92-a709-c6bd37431294"
out_file = open("/Users/brownm28/Documents/2020-Apr-8_RNAseq_snv_dev/2020-06-22_VEP.tsv", "w")
# task_id = "3c20cc8e-18d7-43f2-bc2c-4a76d38a88f8"
task = api.tasks.get(task_id)
jobs = {}
temp = {}
for job in task.get_execution_details().jobs:
if job.status == "COMPLETED":
check = job.name.split('_')
cmd = job.command_line
        if cmd is None:
# pdb.set_trace()
cmd = "embedded script or task retry"
sys.stderr.write("WARN: Job " + job.name + " had null cmd\n")
if check[-1] == "s":
key = "_".join(check[:-2])
if key not in temp:
jobs[job.start_time] = {}
jobs[job.start_time][key] = cmd
temp[key] = 1
else:
temp[key] += 1
else:
jobs[job.start_time] = {}
jobs[job.start_time][job.name] = cmd
out_file.write("Step\tType\tNum scatter\tCommand\n")
for rtime in sorted(jobs.keys()):
for key in jobs[rtime]:
rtype = "run step"
sct = "NA"
if key in temp and temp[key] > 1:
rtype = "scatter"
sct = str(temp[key])
cmds = jobs[rtime][key].split('\n')
for cmd in cmds:
out_file.write(key + "\t" + rtype + "\t" + sct + "\t" + cmd + "\n")
out_file.close()
###Output
_____no_output_____
###Markdown
Convert tsv to markdown table
###Code
import sys
# max desired col width
max_w = 200
tsv_in = open("/Users/brownm28/Documents/2020-Apr-8_RNAseq_snv_dev/2020-06-22_VEP.tsv")
out_md = open("/Users/brownm28/Documents/2020-Apr-8_RNAseq_snv_dev/2020-06-22_VEP.md", "w")
data = []
max_len = []
for line in tsv_in:
info = line.rstrip('\n').split('\t')
data.append(info)
if len(max_len) == 0:
for item in info:
max_len.append(len(item))
else:
for i in range(len(max_len)):
if len(info[i]) > max_w:
max_len[i] = max_w
elif len(info[i]) > max_len[i]:
max_len[i] = len(info[i])
# print header first
d_ct = []
for i in range(len(data[0])):
d_ct.append(len(data[0][i]))
out_md.write(" | " + data[0][i] + "".join([" "] * max_len[i]))
d_ct[i] += max_len[i]
out_md.write(" |\n")
for i in range(len(data[0])):
out_md.write(" | " + "".join(["-"] * d_ct[i]))
out_md.write(" |\n")
# pdb.set_trace()
for i in range(1, len(data), 1):
for j in range(len(data[i])):
d_ct = len(data[i][j]) + 2
out_md.write(" | " + data[i][j] + "".join([" "] * max_len[j]))
d_ct += max_len[j]
out_md.write(" |\n")
out_md.close()
###Output
_____no_output_____
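###Markdown
For reference, the third-party tabulate package can produce the same kind of table. A minimal sketch (assumes pip install tabulate; reuses the tsv written above):
###Code
# Sketch only: tabulate-based markdown table from the same tsv.
from tabulate import tabulate
with open("/Users/brownm28/Documents/2020-Apr-8_RNAseq_snv_dev/2020-06-22_VEP.tsv") as fh:
    rows = [line.rstrip('\n').split('\t') for line in fh]
print(tabulate(rows[1:], headers=rows[0], tablefmt='github'))
###Output
_____no_output_____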
###Markdown
Get run times Get run times by step
###Code
def get_job_run_time(task, phrase):
data = []
if re.search(phrase, task.name):
try:
for job in task.get_execution_details().jobs:
if job.status != "COMPLETED":
sys.stderr.write("Skipping job likely killed due to spot instance kill for " + job.name + " from task " + task.id + "\n")
else:
                    data.append([job.name, str((job.end_time-job.start_time).total_seconds()/3600)])
# pdb.set_trace()
hold=1
            return task.id, task.name, str(task.price.amount), str((task.end_time - task.start_time).total_seconds()/3600), data
except Exception as e:
return [e, task.id]
else:
return []
project = "d3b-bixu/rs-vpf5jbc3-cov-irt-controlled-access-study"
phrase = "VEP R100 ANNOTATE"
tasks = api.tasks.query(project=project, status="COMPLETED").all()
actual_out = open("/Users/brownm28/Documents/2020-Apr-8_RNAseq_snv_dev/cost_est/VEP_cov-irt-actual_cost.txt", "w")
actual_out.write("Task name\tTask ID\tCost\tRun Time in hours\n")
step_run = open("/Users/brownm28/Documents/2020-Apr-8_RNAseq_snv_dev/cost_est/VEP_cov-irt_step_run_times.txt", "w")
step_run.write("Run step\tRun time in hours\n")
# for task in tasks:
# result = get_job_run_time(task, phrase)
# if len(result) > 0:
# pdb.set_trace()
# actual_out.write("\t".join(result[0:4]) + "\n")
# for step in result[4]:
# step_run.write("\t".join(step) + "\n")
x = 0
m = 100
with concurrent.futures.ThreadPoolExecutor(16) as executor:
results = {executor.submit(get_job_run_time, task, phrase): task for task in tasks}
for result in concurrent.futures.as_completed(results):
if len(result.result()) > 2:
if x % m == 0:
sys.stderr.write("Processed " + str(x) + " valid tasks\n")
actual_out.write("\t".join(result.result()[0:4]) + "\n")
for step in result.result()[4]:
step_run.write("\t".join(step) + "\n")
x += 1
elif len(result.result()) == 2:
sys.stderr.write(str(result.result()[0]) + "\tFailed processing task ID " + result.result()[1] + "\n")
exit(1)
actual_out.close()
step_run.close()
###Output
Skipping job likely killed due to spot instance kill for vep-1oo-annotate from task fc616f63-b3b1-4c5e-af7b-ff7320eee644
Skipping job likely killed due to spot instance kill for vep-1oo-annotate from task f5090a20-6c22-4797-be41-7d358a7db164
Skipping job likely killed due to spot instance kill for vep-1oo-annotate from task 8a02e36f-e613-4d0d-a0f3-f1038463cca1
Skipping job likely killed due to spot instance kill for vep-1oo-annotate from task 2622d457-209a-466b-84db-1abdb677d7e6
Skipping job likely killed due to spot instance kill for vep-1oo-annotate from task cd1d9077-c5a2-4433-bc16-b74e44f02c05
Skipping job likely killed due to spot instance kill for vep-1oo-annotate from task b43b42bf-6cb2-4d08-84b3-9cd1e30d875b
Skipping job likely killed due to spot instance kill for vep-1oo-annotate from task ead6d714-32dd-4975-9272-182f5b857ca5
Skipping job likely killed due to spot instance kill for vep-1oo-annotate from task 92e3dfad-a4d7-4a31-8d01-48009bff1e76
Skipping job likely killed due to spot instance kill for vep-1oo-annotate from task f88487a0-5dc5-4169-b454-6aafbacf4f77
Skipping job likely killed due to spot instance kill for vep-1oo-annotate from task 1fb00414-01c8-4355-886b-d22f9723f3ac
Processed 0 valid tasks
Processed 100 valid tasks
Processed 200 valid tasks
Processed 300 valid tasks
Processed 400 valid tasks
Processed 500 valid tasks
Processed 600 valid tasks
Processed 700 valid tasks
###Markdown
Tag source files
###Code
def tag_file(info, header):
try:
meta = info.rstrip('\n').split('\t')
f_obj = api.files.get(meta[0])
metadata = {}
for i in range(3, len(header), 1):
metadata[header[i]] = meta[i]
f_obj.metadata = metadata
f_obj.save()
except Exception as e:
sys.stderr.write(str(e) + "\n")
sys.stderr.write("Could not process " + info)
exit(1)
project = "d3b-bixu/rs-vpf5jbc3-cov-irt-controlled-access-study"
manifest = open("/Users/brownm28/Documents/2020-Apr-8_RNAseq_snv_dev/manifests/covwc_to_tag.txt")
head = next(manifest)
header = head.rstrip("\n").split("\t")
x = 1
m = 250
with concurrent.futures.ThreadPoolExecutor(16) as executor:
results = {executor.submit(tag_file, line, header): line for line in manifest}
for result in concurrent.futures.as_completed(results):
if x % m == 0:
sys.stderr.write('Processed ' + str(x) + ' files\n')
            sys.stderr.flush()
        x += 1
###Output
No relevant changes were detected in order to update the resource on the server.
No relevant changes were detected in order to update the resource on the server.
Could not process 5eebb671e4b0a6d31133e684 COVWC-20200312-P2-E02-P.all-reads_Aligned.sortedByCoord.out.bam d3b-bixu/rs-vpf5jbc3-cov-irt-controlled-access-study COVWC-20200312-P2-E02-P Positive
No relevant changes were detected in order to update the resource on the server.
Could not process 5eebb672e4b0a6d31133e6a7 COVWC-20200312-P2-E02-P.all-reads_ReadsPerGene.out.tab d3b-bixu/rs-vpf5jbc3-cov-irt-controlled-access-study COVWC-20200312-P2-E02-P Positive
No relevant changes were detected in order to update the resource on the server.
Could not process 5eebb672e4b0a6d31133e693 COVWC-20200312-P2-E02-P.all-reads_Chimeric.out.junction d3b-bixu/rs-vpf5jbc3-cov-irt-controlled-access-study COVWC-20200312-P2-E02-P Positive
No relevant changes were detected in order to update the resource on the server.
Could not process 5eebb673e4b0a6d31133e6e2 COVWC-20200313-P4-C01-P.all-reads_Log.final.out d3b-bixu/rs-vpf5jbc3-cov-irt-controlled-access-study COVWC-20200313-P4-C01-P Positive
Could not process 5eebb672e4b0a6d31133e6a2 COVWC-20200312-P2-E02-P.all-reads_Log.progress.out d3b-bixu/rs-vpf5jbc3-cov-irt-controlled-access-study COVWC-20200312-P2-E02-P Positive
No relevant changes were detected in order to update the resource on the server.
Could not process 5eebb673e4b0a6d31133e6d3 COVWC-20200313-P4-C01-P.all-reads_Aligned.sortedByCoord.out.bam.bai d3b-bixu/rs-vpf5jbc3-cov-irt-controlled-access-study COVWC-20200313-P4-C01-P Positive
No relevant changes were detected in order to update the resource on the server.
Could not process 5eebb672e4b0a6d31133e698 COVWC-20200312-P2-E02-P.all-reads_Log.final.out d3b-bixu/rs-vpf5jbc3-cov-irt-controlled-access-study COVWC-20200312-P2-E02-P Positive
No relevant changes were detected in order to update the resource on the server.
No relevant changes were detected in order to update the resource on the server.
Could not process 5eebb672e4b0a6d31133e69d COVWC-20200312-P2-E02-P.all-reads_Log.out d3b-bixu/rs-vpf5jbc3-cov-irt-controlled-access-study COVWC-20200312-P2-E02-P Positive
No relevant changes were detected in order to update the resource on the server.
Could not process 5eebb671e4b0a6d31133e689 COVWC-20200312-P2-E02-P.all-reads_Aligned.sortedByCoord.out.bam.bai d3b-bixu/rs-vpf5jbc3-cov-irt-controlled-access-study COVWC-20200312-P2-E02-P Positive
Could not process 5eebb673e4b0a6d31133e6e7 COVWC-20200313-P4-C01-P.all-reads_Log.out d3b-bixu/rs-vpf5jbc3-cov-irt-controlled-access-study COVWC-20200313-P4-C01-P Positive
No relevant changes were detected in order to update the resource on the server.
Could not process 5eebb674e4b0a6d31133e6f6 COVWC-20200313-P4-C01-P.all-reads_SJ.out.tab d3b-bixu/rs-vpf5jbc3-cov-irt-controlled-access-study COVWC-20200313-P4-C01-P Positive
No relevant changes were detected in order to update the resource on the server.
Could not process 5eebb674e4b0a6d31133e6f1 COVWC-20200313-P4-C01-P.all-reads_ReadsPerGene.out.tab d3b-bixu/rs-vpf5jbc3-cov-irt-controlled-access-study COVWC-20200313-P4-C01-P Positive
No relevant changes were detected in order to update the resource on the server.
Could not process 5eebb673e4b0a6d31133e6dd COVWC-20200313-P4-C01-P.all-reads_Chimeric.out.junction d3b-bixu/rs-vpf5jbc3-cov-irt-controlled-access-study COVWC-20200313-P4-C01-P Positive
No relevant changes were detected in order to update the resource on the server.
No relevant changes were detected in order to update the resource on the server.
Could not process 5eebb673e4b0a6d31133e6d8 COVWC-20200313-P4-C01-P.all-reads_Aligned.toTranscriptome.out.bam d3b-bixu/rs-vpf5jbc3-cov-irt-controlled-access-study COVWC-20200313-P4-C01-P Positive
Could not process 5eebb674e4b0a6d31133e6ec COVWC-20200313-P4-C01-P.all-reads_Log.progress.out d3b-bixu/rs-vpf5jbc3-cov-irt-controlled-access-study COVWC-20200313-P4-C01-P Positive
No relevant changes were detected in order to update the resource on the server.
Could not process 5eebb673e4b0a6d31133e6ce COVWC-20200313-P4-C01-P.all-reads_Aligned.sortedByCoord.out.bam d3b-bixu/rs-vpf5jbc3-cov-irt-controlled-access-study COVWC-20200313-P4-C01-P Positive
No relevant changes were detected in order to update the resource on the server.
Could not process 5eebb672e4b0a6d31133e6ac COVWC-20200312-P2-E02-P.all-reads_SJ.out.tab d3b-bixu/rs-vpf5jbc3-cov-irt-controlled-access-study COVWC-20200312-P2-E02-P Positive
No relevant changes were detected in order to update the resource on the server.
No relevant changes were detected in order to update the resource on the server.
Could not process 5eebb676e4b0a6d31133e740 COVWC-20200313-P4-D01-P.all-reads_SJ.out.tab d3b-bixu/rs-vpf5jbc3-cov-irt-controlled-access-study COVWC-20200313-P4-D01-P Positive
No relevant changes were detected in order to update the resource on the server.
Could not process 5eebb671e4b0a6d31133e68e COVWC-20200312-P2-E02-P.all-reads_Aligned.toTranscriptome.out.bam d3b-bixu/rs-vpf5jbc3-cov-irt-controlled-access-study COVWC-20200312-P2-E02-P Positive
No relevant changes were detected in order to update the resource on the server.
Could not process 5eebb676e4b0a6d31133e72c COVWC-20200313-P4-D01-P.all-reads_Log.final.out d3b-bixu/rs-vpf5jbc3-cov-irt-controlled-access-study COVWC-20200313-P4-D01-P Positive
Could not process 5eebb675e4b0a6d31133e722 COVWC-20200313-P4-D01-P.all-reads_Aligned.toTranscriptome.out.bam d3b-bixu/rs-vpf5jbc3-cov-irt-controlled-access-study COVWC-20200313-P4-D01-P Positive
No relevant changes were detected in order to update the resource on the server.
Could not process 5eebb677e4b0a6d31133e762 COVWC-20200313-P4-F01-N.all-reads_Aligned.sortedByCoord.out.bam d3b-bixu/rs-vpf5jbc3-cov-irt-controlled-access-study COVWC-20200313-P4-F01-N Negative
No relevant changes were detected in order to update the resource on the server.
No relevant changes were detected in order to update the resource on the server.
No relevant changes were detected in order to update the resource on the server.
Could not process 5eebb678e4b0a6d31133e771 COVWC-20200313-P4-F01-N.all-reads_Chimeric.out.junction d3b-bixu/rs-vpf5jbc3-cov-irt-controlled-access-study COVWC-20200313-P4-F01-N Negative
No relevant changes were detected in order to update the resource on the server.
No relevant changes were detected in order to update the resource on the server.
Could not process 5eebb678e4b0a6d31133e77b COVWC-20200313-P4-F01-N.all-reads_Log.out d3b-bixu/rs-vpf5jbc3-cov-irt-controlled-access-study COVWC-20200313-P4-F01-N Negative
Could not process 5eebb678e4b0a6d31133e776 COVWC-20200313-P4-F01-N.all-reads_Log.final.out d3b-bixu/rs-vpf5jbc3-cov-irt-controlled-access-study COVWC-20200313-P4-F01-N Negative
Could not process 5eebb677e4b0a6d31133e76c COVWC-20200313-P4-F01-N.all-reads_Aligned.toTranscriptome.out.bam d3b-bixu/rs-vpf5jbc3-cov-irt-controlled-access-study COVWC-20200313-P4-F01-N Negative
Could not process 5eebb677e4b0a6d31133e767 COVWC-20200313-P4-F01-N.all-reads_Aligned.sortedByCoord.out.bam.bai d3b-bixu/rs-vpf5jbc3-cov-irt-controlled-access-study COVWC-20200313-P4-F01-N Negative
No relevant changes were detected in order to update the resource on the server.
No relevant changes were detected in order to update the resource on the server.
Could not process 5eebb676e4b0a6d31133e731 COVWC-20200313-P4-D01-P.all-reads_Log.out d3b-bixu/rs-vpf5jbc3-cov-irt-controlled-access-study COVWC-20200313-P4-D01-P Positive
Could not process 5eebb675e4b0a6d31133e718 COVWC-20200313-P4-D01-P.all-reads_Aligned.sortedByCoord.out.bam d3b-bixu/rs-vpf5jbc3-cov-irt-controlled-access-study COVWC-20200313-P4-D01-P Positive
No relevant changes were detected in order to update the resource on the server.
Could not process 5eebb678e4b0a6d31133e78a COVWC-20200313-P4-F01-N.all-reads_SJ.out.tab d3b-bixu/rs-vpf5jbc3-cov-irt-controlled-access-study COVWC-20200313-P4-F01-N Negative
No relevant changes were detected in order to update the resource on the server.
Could not process 5eebb67ae4b0a6d31133e7c7 COVWC-20200313-P4-G01-N.all-reads_Log.out d3b-bixu/rs-vpf5jbc3-cov-irt-controlled-access-study COVWC-20200313-P4-G01-N Negative
No relevant changes were detected in order to update the resource on the server.
Could not process 5eebb679e4b0a6d31133e7ae COVWC-20200313-P4-G01-N.all-reads_Aligned.sortedByCoord.out.bam d3b-bixu/rs-vpf5jbc3-cov-irt-controlled-access-study COVWC-20200313-P4-G01-N Negative
No relevant changes were detected in order to update the resource on the server.
Could not process 5eebb679e4b0a6d31133e7bd COVWC-20200313-P4-G01-N.all-reads_Chimeric.out.junction d3b-bixu/rs-vpf5jbc3-cov-irt-controlled-access-study COVWC-20200313-P4-G01-N Negative
No relevant changes were detected in order to update the resource on the server.
Could not process 5eebb67ae4b0a6d31133e7c2 COVWC-20200313-P4-G01-N.all-reads_Log.final.out d3b-bixu/rs-vpf5jbc3-cov-irt-controlled-access-study COVWC-20200313-P4-G01-N Negative
No relevant changes were detected in order to update the resource on the server.
Could not process 5eebb67ae4b0a6d31133e7cc COVWC-20200313-P4-G01-N.all-reads_Log.progress.out d3b-bixu/rs-vpf5jbc3-cov-irt-controlled-access-study COVWC-20200313-P4-G01-N Negative
No relevant changes were detected in order to update the resource on the server.
No relevant changes were detected in order to update the resource on the server.
No relevant changes were detected in order to update the resource on the server.
Could not process 5eebb676e4b0a6d31133e727 COVWC-20200313-P4-D01-P.all-reads_Chimeric.out.junction d3b-bixu/rs-vpf5jbc3-cov-irt-controlled-access-study COVWC-20200313-P4-D01-P Positive
No relevant changes were detected in order to update the resource on the server.
No relevant changes were detected in order to update the resource on the server.
Could not process 5eebb679e4b0a6d31133e7b8 COVWC-20200313-P4-G01-N.all-reads_Aligned.toTranscriptome.out.bam d3b-bixu/rs-vpf5jbc3-cov-irt-controlled-access-study COVWC-20200313-P4-G01-N Negative
Could not process 5eebb679e4b0a6d31133e7b3 COVWC-20200313-P4-G01-N.all-reads_Aligned.sortedByCoord.out.bam.bai d3b-bixu/rs-vpf5jbc3-cov-irt-controlled-access-study COVWC-20200313-P4-G01-N Negative
Could not process 5eebb676e4b0a6d31133e736 COVWC-20200313-P4-D01-P.all-reads_Log.progress.out d3b-bixu/rs-vpf5jbc3-cov-irt-controlled-access-study COVWC-20200313-P4-D01-P Positive
Could not process 5eebb678e4b0a6d31133e780 COVWC-20200313-P4-F01-N.all-reads_Log.progress.out d3b-bixu/rs-vpf5jbc3-cov-irt-controlled-access-study COVWC-20200313-P4-F01-N Negative
No relevant changes were detected in order to update the resource on the server.
No relevant changes were detected in order to update the resource on the server.
Could not process 5eebb678e4b0a6d31133e785 COVWC-20200313-P4-F01-N.all-reads_ReadsPerGene.out.tab d3b-bixu/rs-vpf5jbc3-cov-irt-controlled-access-study COVWC-20200313-P4-F01-N Negative
Could not process 5eebb676e4b0a6d31133e73b COVWC-20200313-P4-D01-P.all-reads_ReadsPerGene.out.tab d3b-bixu/rs-vpf5jbc3-cov-irt-controlled-access-study COVWC-20200313-P4-D01-P Positive
No relevant changes were detected in order to update the resource on the server.
Could not process 5eebb675e4b0a6d31133e71d COVWC-20200313-P4-D01-P.all-reads_Aligned.sortedByCoord.out.bam.bai d3b-bixu/rs-vpf5jbc3-cov-irt-controlled-access-study COVWC-20200313-P4-D01-P Positive
No relevant changes were detected in order to update the resource on the server.
Could not process 5eebb67ae4b0a6d31133e7d6 COVWC-20200313-P4-G01-N.all-reads_SJ.out.tab d3b-bixu/rs-vpf5jbc3-cov-irt-controlled-access-study COVWC-20200313-P4-G01-N Negative
No relevant changes were detected in order to update the resource on the server.
Could not process 5eebb67ce4b0a6d31133e80c COVWC-20200313-P4-H01-N.all-reads_Log.final.out d3b-bixu/rs-vpf5jbc3-cov-irt-controlled-access-study COVWC-20200313-P4-H01-N Negative
No relevant changes were detected in order to update the resource on the server.
Could not process 5eebb67ce4b0a6d31133e816 COVWC-20200313-P4-H01-N.all-reads_Log.progress.out d3b-bixu/rs-vpf5jbc3-cov-irt-controlled-access-study COVWC-20200313-P4-H01-N Negative
No relevant changes were detected in order to update the resource on the server.
No relevant changes were detected in order to update the resource on the server.
Could not process 5eebb67be4b0a6d31133e802 COVWC-20200313-P4-H01-N.all-reads_Aligned.toTranscriptome.out.bam d3b-bixu/rs-vpf5jbc3-cov-irt-controlled-access-study COVWC-20200313-P4-H01-N Negative
Could not process 5eebb67be4b0a6d31133e7fd COVWC-20200313-P4-H01-N.all-reads_Aligned.sortedByCoord.out.bam.bai d3b-bixu/rs-vpf5jbc3-cov-irt-controlled-access-study COVWC-20200313-P4-H01-N Negative
No relevant changes were detected in order to update the resource on the server.
Could not process 5eebb67ce4b0a6d31133e81b COVWC-20200313-P4-H01-N.all-reads_ReadsPerGene.out.tab d3b-bixu/rs-vpf5jbc3-cov-irt-controlled-access-study COVWC-20200313-P4-H01-N Negative
No relevant changes were detected in order to update the resource on the server.
Could not process 5eebb67ce4b0a6d31133e820 COVWC-20200313-P4-H01-N.all-reads_SJ.out.tab d3b-bixu/rs-vpf5jbc3-cov-irt-controlled-access-study COVWC-20200313-P4-H01-N Negative
No relevant changes were detected in order to update the resource on the server.
Could not process 5eebb67ce4b0a6d31133e807 COVWC-20200313-P4-H01-N.all-reads_Chimeric.out.junction d3b-bixu/rs-vpf5jbc3-cov-irt-controlled-access-study COVWC-20200313-P4-H01-N Negative
No relevant changes were detected in order to update the resource on the server.
No relevant changes were detected in order to update the resource on the server.
Could not process 5eebb67ce4b0a6d31133e811 COVWC-20200313-P4-H01-N.all-reads_Log.out d3b-bixu/rs-vpf5jbc3-cov-irt-controlled-access-study COVWC-20200313-P4-H01-N Negative
Could not process 5eebb67be4b0a6d31133e7f8 COVWC-20200313-P4-H01-N.all-reads_Aligned.sortedByCoord.out.bam d3b-bixu/rs-vpf5jbc3-cov-irt-controlled-access-study COVWC-20200313-P4-H01-N Negative
###Markdown
Set up GATK4 RNAseq WF Tasks
###Code
def get_gatk_refs(api, project):
try:
ref_dict = {}
ref_dict['reference_fasta'] = api.files.get('5eebc5d1e4b0a6d311357eb9')
ref_dict['reference_dict'] = api.files.get('5eecb14ae4b0efd899f474da')
known_sites = []
known_sites.append(api.files.get('5eecd8c3e4b0efd899f4ae44'))
known_sites.append(api.files.get('5eecd846e4b0efd899f4ae27'))
known_sites.append(api.files.get('5eecd846e4b0efd899f4ae26'))
known_sites.append(api.files.get('5eecd846e4b0efd899f4ae21'))
ref_dict['knownsites'] = known_sites
ref_dict['call_bed_file'] = api.files.get('5eebdeece4b0efd899f43eaa')
ref_dict['dbsnp_vcf'] = api.files.get('5eecd846e4b0efd899f4ae24')
ref_dict['tool_name'] = 'STAR_GATK4'
except Exception as e:
sys.stderr.write(str(e) + "\nFailed to get REFS\n")
exit(1)
return ref_dict
def draft_task(in_file):
try:
input_dict = {}
for key in ref_obj:
input_dict[key] = ref_obj[key]
info = in_file.rstrip('\n').split('\t')
input_dict['STAR_sorted_genomic_bam'] = api.files.get(info[0])
task_name = "GATK RNAseq SNV: " + info[3]
task = api.tasks.create(name=task_name, project=project, app=app_name, inputs=input_dict, run=False)
task.inputs['output_basename'] = task.id
task.save()
except Exception as e:
sys.stderr.write(str(e) + "\nfailed to set up task for " + in_file)
exit(1)
project = 'd3b-bixu/rs-vpf5jbc3-cov-irt-controlled-access-study'
app_name = project + "/d3b-gatk-rnaseq-snv-wf"
manifest = open("/Users/brownm28/Documents/2020-Apr-8_RNAseq_snv_dev/manifests/bams_for_gatk_to_run.tsv")
head = next(manifest)
ref_obj = get_gatk_refs(api, project)
x = 1
m = 250
with concurrent.futures.ThreadPoolExecutor(16) as executor:
results = {executor.submit(draft_task, line ): line for line in manifest}
for result in concurrent.futures.as_completed(results):
if x % m == 0:
sys.stderr.write('Processed ' + str(x) + ' tasks\n')
            sys.stderr.flush()
        x += 1
###Output
_____no_output_____
###Markdown
Run VEP
###Code
def get_vep_refs(api, project):
try:
ref_dict = {}
ref_dict['reference'] = api.files.get('5eebc5d1e4b0a6d311357eb9')
ref_dict['cache'] = api.files.get('5eed0f54e4b0efd899f4afda')
ref_dict['merged_cache'] = True
ref_dict['tool_name'] = 'STAR_GATK4'
except Exception as e:
sys.stderr.write(str(e) + "\nFailed to get REFS\n")
exit(1)
return ref_dict
def draft_vep_task(in_file):
try:
input_dict = {}
for key in ref_obj:
input_dict[key] = ref_obj[key]
info = in_file.rstrip('\n').split(',')
input_dict['input_vcf'] = api.files.get(info[0])
task_name = "VEP R100 ANNOTATE: " + info[3]
task = api.tasks.create(name=task_name, project=project, app=app_name, inputs=input_dict, run=False)
task.inputs['output_basename'] = task.id
task.save()
except Exception as e:
sys.stderr.write(str(e) + "\nfailed to set up task for " + in_file)
exit(1)
project = 'd3b-bixu/rs-vpf5jbc3-cov-irt-controlled-access-study'
app_name = project + "/vep-1oo-annotate"
manifest = open("/Users/brownm28/Documents/2020-Apr-8_RNAseq_snv_dev/manifests/vcf_to_annotate-manifest.csv")
head = next(manifest)
ref_obj = get_vep_refs(api, project)
x = 1
m = 250
with concurrent.futures.ThreadPoolExecutor(16) as executor:
results = {executor.submit(draft_vep_task, line ): line for line in manifest}
for result in concurrent.futures.as_completed(results):
if x % m == 0:
sys.stderr.write('Processed ' + str(x) + ' tasks\n')
            sys.stderr.flush()
        x += 1
###Output
_____no_output_____
###Markdown
Copy metadata to outputs
###Code
def add_metadata_to_outputs(task, phrase, in_key):
if re.search(phrase, task.name):
sys.stderr.write('Valid task found ' + task.name + '\n')
metadata = task.inputs[in_key].metadata
for out_key in task.outputs:
# pdb.set_trace()
try:
if type(task.outputs[out_key]) is not list:
file_obj = api.files.get(task.outputs[out_key].id)
for key in metadata:
file_obj.metadata[key] = metadata[key]
file_obj.save()
else:
for output in task.outputs[out_key]:
if type(output) is not list:
file_obj = api.files.get(output.id)
for key in metadata:
file_obj.metadata[key] = metadata[key]
file_obj.save()
else:
for item in output:
if item is not None:
file_obj = api.files.get(item.id)
for key in metadata:
file_obj.metadata[key] = metadata[key]
file_obj.save()
except Exception as e:
print(e)
print("Skipping " + out_key + " for " + task.name + " due to error")
prefix = 'VEP R100 ANNOTATE'
project = "d3b-bixu/rs-vpf5jbc3-cov-irt-controlled-access-study"
key = 'input_vcf'
print("You sure tag outputs with task prefix: " + prefix + "? Type \"YASS\" if so")
check = input()
if check == "YASS":
tasks = api.tasks.query(project=project, status="COMPLETED").all()
for task in tasks:
add_metadata_to_outputs(task, prefix, key)
else:
sys.stderr.write("User did not type YASS, skipping\n")
###Output
You sure tag outputs with task prefix: VEP R100 ANNOTATE? Type "YASS" if so
YASS
###Markdown
Get task outputs
###Code
def write_to_manifest(out_fh, file_obj, out_key, task_name):
out_fh.write(",".join([file_obj.id, file_obj.name, out_key, task_name]) + "\n")
project = "d3b-bixu/rs-vpf5jbc3-cov-irt-controlled-access-study"
tasks = api.tasks.query(project=project, status="COMPLETED").all()
out = open('/Users/brownm28/Documents/2020-Apr-8_RNAseq_snv_dev/manifests/test_vep_out.txt', 'w')
out.write("id,name,output_category,task_name\n")
phrase = "VEP R100 ANNOTATE: COVHA-20200311-P1-A01-CP"
for task in tasks:
if re.search(phrase, task.name):
sys.stderr.write('Processing task: ' + task.name + "\n")
for out_key in task.outputs:
# try:
if type(task.outputs[out_key]) is not list:
file_obj = task.outputs[out_key]
write_to_manifest(out, file_obj, out_key, task.name)
if task.outputs[out_key].secondary_files is not None:
write_to_manifest(out, task.outputs[out_key].secondary_files[0], out_key, task.name)
else:
for i in range(len(task.outputs[out_key])):
if type(task.outputs[out_key][i]) is not list:
write_to_manifest(out, task.outputs[out_key][i], out_key, task.name)
if task.outputs[out_key][i].secondary_files is not None:
write_to_manifest(out, task.outputs[out_key][i].secondary_files[0], out_key, task.name)
else:
for j in range(len(task.outputs[out_key][i])):
if task.outputs[out_key][i][j] is not None:
write_to_manifest(out, task.outputs[out_key][i][j], out_key, task.name)
if task.outputs[out_key][i][j].secondary_files is not None:
write_to_manifest(out, task.outputs[out_key][i][j].secondary_files[0], out_key, task.name)
# except Exception as e:
# print(e)
# print("Skipping " + out_key + " for " + task.name + " due to error")
out.close()
###Output
Processing task: VEP R100 ANNOTATE: COVHA-20200311-P1-A01-CP
###Markdown
Abort unresponsive and restart tasks
###Code
import datetime
import pytz
project = "d3b-bixu/rs-vpf5jbc3-cov-irt-controlled-access-study"
tasks = api.tasks.query(project=project, status="RUNNING")
current = datetime.datetime.now()
tz = pytz.timezone('America/New_York')
prefix = "GATK RNAseq SNV"
task_abort = open("/Users/brownm28/Documents/2020-Apr-8_RNAseq_snv_dev/TASK_RUN/aborted_and_restarted2.log", 'w')
task_abort.write("Task ID\tTask name\tNew Task ID")
for task in tasks:
if re.search(prefix, task.name):
for job in task.get_execution_details().jobs:
if job.name == "preprocess_rnaseq_bam_sambamba_md_sorted":
if job.status == "RUNNING":
                    diff = (current-pytz.utc.localize(job.start_time, is_dst=None).astimezone(tz).replace(tzinfo=None)).total_seconds()/3600
if diff > 2:
task_abort.write(task.id + "\t" + task.name)
in_dict = {}
sys.stderr.write("Aborting " + task.id + "\t" + task.name + "\n" )
task.abort()
new_task = task.clone(run=False)
new_task.inputs['output_basename'] = new_task.id
new_task.save()
task_abort.write("\t" + new_task.id + "\n")
task_abort.flush()
else:
break
else:
break
task_abort.close()
###Output
Aborting 2b233b53-4e04-4691-ba01-114e806a27b6 GATK RNAseq SNV: COVHA-20200403-P2-B06-N
Aborting 26c0855b-3116-48fa-9f6f-680b3eadd3ef GATK RNAseq SNV: COVHA-20200403-P2-E04-N
Aborting 44d400a2-4d1f-464e-93cf-32668e5da436 GATK RNAseq SNV: COVHA-20200314-P7-C08-P
###Markdown
Restart failed tasks
###Code
project = "d3b-bixu/rs-vpf5jbc3-cov-irt-controlled-access-study"
tasks = api.tasks.query(project=project, status="FAILED")
prefix = "GATK RNAseq SNV"
task_restart = open("/Users/brownm28/Documents/2020-Apr-8_RNAseq_snv_dev/TASK_RUN/failed_and_restarted2.log", 'w')
task_restart.write("Task ID\tTask name\tNew Task ID\n")
run_list = ["GATK RNAseq SNV: COVHA-20200315-P9-F02-N",
"GATK RNAseq SNV: COVHA-20200316-P12-F01-P","GATK RNAseq SNV: COVHA-20200403-P1-C06-P"]
for task in tasks:
if re.search(prefix, task.name) and task.name in run_list:
task_restart.write(task.id + "\t" + task.name)
new_task = task.clone(run=False)
new_task.inputs['output_basename'] = new_task.id
new_task.save()
task_restart.write("\t" + new_task.id + "\n")
task_restart.flush()
task_restart.close()
###Output
_____no_output_____
###Markdown
Get Failed list
###Code
project = "d3b-bixu/rs-vpf5jbc3-cov-irt-controlled-access-study"
tasks = api.tasks.query(project=project, status="FAILED")
prefix = "GATK RNAseq SNV"
task_failed = open("/Users/brownm28/Documents/2020-Apr-8_RNAseq_snv_dev/TASK_RUN/failed.log", 'w')
task_failed.write("Task ID\tTask name\n")
for task in tasks:
if re.search(prefix, task.name):
task_failed.write(task.id + "\t" + task.name + "\n")
task_failed.close()
###Output
_____no_output_____
###Markdown
Remove outputs from failed and aborted tasks
###Code
project = "d3b-bixu/rs-vpf5jbc3-cov-irt-controlled-access-study"
tasks = api.tasks.query(project=project, status="ABORTED")
prefix = "GATK RNAseq SNV"
print("You sure remove outputs from failed tasks with prefix: " + prefix + "? Type \"YASS\" if so")
check = input()
if check == "YASS":
for task in tasks:
if re.search(prefix, task.name):
for key in task.outputs:
if task.outputs[key] is not None:
sys.stderr.write("Found files to remove from failed task: " + task.id + " " + task.name + "\n")
try:
if task.outputs[key].secondary_files is not None:
sys.stderr.write("Removing secondary files\n")
for i in range(0, len(task.outputs[key].secondary_files), 1):
task.outputs[key].secondary_files[i].delete()
except Exception as e:
sys.stderr.write(str(e) + "\nFile with key " + key + " probably does not have secondaryFiles, skipping\n")
try:
task.outputs[key].delete()
except Exception as e:
sys.stderr.write(str(e) + "\nFile with key " + key + " was probably deleted before, skipping\n")
sys.stderr.write("Finished processing " + task.id + "\n")
###Output
You sure remove outputs from failed tasks with prefix: GATK RNAseq SNV? Type "YASS" if so
YASS
###Markdown
Rename _\d_ files
###Code
manifest = open("/Users/brownm28/Documents/2020-Apr-8_RNAseq_snv_dev/manifests/to_rename-manifest.csv")
head = next(manifest)
print("You sure you want to rename the files in that manifest? Type \"YASS\" if so")
check = input()
if check == "YASS":
for line in manifest:
info = line.split(',')
cur = api.files.get(info[0])
new_name = cur.name[3:]
sys.stderr.write("Renaming file with ID " + cur.id + " " + cur.name + " to " + new_name + "\n")
cur.name = new_name
cur.save()
###Output
You sure you want to rename the files in that manifest? Type "YASS" if so
YASS
###Markdown
Load app into project
###Code
# need to have converted app to json first using rabix!
import json
project = "d3b-bixu/rs-vpf5jbc3-cov-irt-controlled-access-study"
f = open('/Users/brownm28/Documents/2020-Apr-8_RNAseq_snv_dev/kfdrc_annoFuse_wf.json', 'r')
app_raw = f.read()
app = json.loads(app_raw)
app_id = "kfdrc-annofuse-wf"
# Create the Workflows
a_id = (project + "/" + app_id)
my_app_first = api.apps.install_app(id = a_id, raw = app)
###Output
_____no_output_____
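###Markdown
To update an app that is already installed rather than create a new one, sevenbridges-python also offers create_revision; the revision number must be one past the current latest. A minimal sketch:
###Code
# Sketch only: push a new revision of an existing app (create_revision signature assumed
# from the sevenbridges-python docs).
current = api.apps.get(id=a_id)
new_rev = api.apps.create_revision(id=a_id, revision=current.revision + 1, raw=app)
###Output
_____no_output_____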
###Markdown
Create example bash calls from workflow Works best with a non-restarted WF
###Code
project = "d3b-bixu/rs-vpf5jbc3-cov-irt-controlled-access-study"
task_id = "00210a5f-77ec-4d07-9b1d-c08e5497e24c"
out_file = open("/Users/brownm28/Documents/2020-Apr-8_RNAseq_snv_dev/2020-08-26_gatk4_rpt.tsv", "w")
# task_id = "3c20cc8e-18d7-43f2-bc2c-4a76d38a88f8"
task = api.tasks.get(task_id)
jobs = {}
temp = {}
for job in task.get_execution_details().jobs:
if job.status == "COMPLETED":
check = job.name.split('_')
cmd = job.command_line
        if cmd is None:
# pdb.set_trace()
cmd = "embedded script or task retry"
sys.stderr.write("WARN: Job " + job.name + " had null cmd\n")
if check[-1] == "s":
key = "_".join(check[:-2])
if key not in temp:
jobs[job.start_time] = {}
jobs[job.start_time][key] = cmd
temp[key] = 1
else:
temp[key] += 1
else:
jobs[job.start_time] = {}
jobs[job.start_time][job.name] = cmd
out_file.write("Step\tType\tNum scatter\tCommand\n")
for rtime in sorted(jobs.keys()):
for key in jobs[rtime]:
rtype = "run step"
sct = "NA"
if key in temp and temp[key] > 1:
rtype = "scatter"
sct = str(temp[key])
cmds = jobs[rtime][key].split('\n')
for cmd in cmds:
out_file.write(key + "\t" + rtype + "\t" + sct + "\t" + cmd + "\n")
out_file.close()
###Output
_____no_output_____
###Markdown
Convert tsv to markdown table
###Code
import sys
# max desired col width
max_w = 200
tsv_in = open("/Users/brownm28/Documents/2020-Apr-8_RNAseq_snv_dev/2020-08-26_gatk4_rpt.tsv")
out_md = open("/Users/brownm28/Documents/2020-Apr-8_RNAseq_snv_dev/2020-08-26_gatk4_rpt.md", "w")
data = []
max_len = []
for line in tsv_in:
info = line.rstrip('\n').split('\t')
data.append(info)
if len(max_len) == 0:
for item in info:
max_len.append(len(item))
else:
for i in range(len(max_len)):
if len(info[i]) > max_w:
max_len[i] = max_w
elif len(info[i]) > max_len[i]:
max_len[i] = len(info[i])
# print header first
d_ct = []
for i in range(len(data[0])):
d_ct.append(len(data[0][i]))
out_md.write(" | " + data[0][i] + "".join([" "] * max_len[i]))
d_ct[i] += max_len[i]
out_md.write(" |\n")
for i in range(len(data[0])):
out_md.write(" | " + "".join(["-"] * d_ct[i]))
out_md.write(" |\n")
# pdb.set_trace()
for i in range(1, len(data), 1):
for j in range(len(data[i])):
d_ct = len(data[i][j]) + 2
out_md.write(" | " + data[i][j] + "".join([" "] * max_len[j]))
d_ct += max_len[j]
out_md.write(" |\n")
out_md.close()
###Output
_____no_output_____
###Markdown
Get run times Get run times by step
###Code
def get_job_run_time(task, phrase):
data = []
if re.search(phrase, task.name):
try:
for job in task.get_execution_details().jobs:
if job.status != "COMPLETED":
sys.stderr.write("Skipping job likely killed due to spot instance kill for " + job.name + " from task " + task.id + "\n")
else:
                    data.append([job.name, str((job.end_time-job.start_time).total_seconds()/3600)])
# pdb.set_trace()
hold=1
            return task.id, task.name, str(task.price.amount), str((task.end_time - task.start_time).total_seconds()/3600), data
except Exception as e:
return [e, task.id]
else:
return []
project = "d3b-bixu/rs-vpf5jbc3-cov-irt-controlled-access-study"
phrase = "GATK RNAseq SNV RPT"
tasks = api.tasks.query(project=project, status="COMPLETED").all()
actual_out = open("/Users/brownm28/Documents/2020-Apr-8_RNAseq_snv_dev/cost_est/gatk4_rpt_cov-irt-actual_cost.txt", "w")
actual_out.write("Task name\tTask ID\tCost\tRun Time in hours\n")
step_run = open("/Users/brownm28/Documents/2020-Apr-8_RNAseq_snv_dev/cost_est/gatk4_rpt_cov-irt_step_run_times.txt", "w")
step_run.write("Run step\tRun time in hours\n")
# for task in tasks:
# result = get_job_run_time(task, phrase)
# if len(result) > 0:
# pdb.set_trace()
# actual_out.write("\t".join(result[0:4]) + "\n")
# for step in result[4]:
# step_run.write("\t".join(step) + "\n")
x = 1
m = 100
with concurrent.futures.ThreadPoolExecutor(16) as executor:
results = {executor.submit(get_job_run_time, task, phrase): task for task in tasks}
for result in concurrent.futures.as_completed(results):
if len(result.result()) > 2:
if x % m == 0:
sys.stderr.write("Processed " + str(x) + " valid tasks\n")
actual_out.write("\t".join(result.result()[0:4]) + "\n")
for step in result.result()[4]:
step_run.write("\t".join(step) + "\n")
x += 1
elif len(result.result()) == 2:
sys.stderr.write(str(result.result()[0]) + "\tFailed processing task ID " + result.result()[1] + "\n")
exit(1)
actual_out.close()
step_run.close()
###Output
_____no_output_____
###Markdown
Tag source files
###Code
def tag_file(info, header):
try:
meta = info.rstrip('\n').split('\t')
f_obj = api.files.get(meta[0])
metadata = {}
for i in range(3, len(header), 1):
metadata[header[i]] = meta[i]
f_obj.metadata = metadata
f_obj.save()
except Exception as e:
sys.stderr.write(str(e) + "\n")
sys.stderr.write("Could not process " + info)
exit(1)
project = "d3b-bixu/rs-vpf5jbc3-cov-irt-controlled-access-study"
manifest = open("/Users/brownm28/Documents/2020-Apr-8_RNAseq_snv_dev/rsem_amnifest_to_tag.tsv")
head = next(manifest)
header = head.rstrip("\n").split("\t")
x = 1
m = 250
with concurrent.futures.ThreadPoolExecutor(16) as executor:
results = {executor.submit(tag_file, line, header): line for line in manifest}
for result in concurrent.futures.as_completed(results):
if x % m == 0:
sys.stderr.write('Processed ' + str(x) + ' files\n')
            sys.stderr.flush()
        x += 1
###Output
_____no_output_____
###Markdown
Set up GATK4 RNAseq WF Tasks
###Code
def get_gatk_refs(api, project):
try:
ref_dict = {}
ref_dict['reference_fasta'] = api.files.get('5f185f0de4b09d9af8ae456e')
ref_dict['reference_dict'] = api.files.get('5f185f09e4b09d9af8ae4569')
known_sites = []
known_sites.append(api.files.get('5f161613e4b0efd84a0fd4b8'))
known_sites.append(api.files.get('5f1615e3e4b0efd84a0fd4a9'))
ref_dict['knownsites'] = known_sites
ref_dict['call_bed_file'] = api.files.get('5f186055e4b09d9af8ae4585')
ref_dict['dbsnp_vcf'] = api.files.get('5f161572e4b0efd84a0fd49f')
ref_dict['tool_name'] = 'STAR_GATK4'
except Exception as e:
sys.stderr.write(str(e) + "\nFailed to get REFS\n")
exit(1)
return ref_dict
def draft_task(in_file):
try:
input_dict = {}
for key in ref_obj:
input_dict[key] = ref_obj[key]
info = in_file.rstrip('\n').split('\t')
input_dict['STAR_sorted_genomic_bam'] = api.files.get(info[0])
task_name = "GATK RNAseq SNV RPT: " + info[3]
task = api.tasks.create(name=task_name, project=project, app=app_name, inputs=input_dict, run=False)
task.inputs['output_basename'] = task.id
task.save()
except Exception as e:
sys.stderr.write(str(e) + "\nfailed to set up task for " + in_file)
exit(1)
project = 'd3b-bixu/rs-vpf5jbc3-cov-irt-controlled-access-study'
app_name = project + "/d3b-gatk-rnaseq-snv-wf"
manifest = open("/Users/brownm28/Documents/2020-Apr-8_RNAseq_snv_dev/2020-08-10_bam_list.tsv")
head = next(manifest)
ref_obj = get_gatk_refs(api, project)
x = 1
m = 250
with concurrent.futures.ThreadPoolExecutor(16) as executor:
results = {executor.submit(draft_task, line ): line for line in manifest}
for result in concurrent.futures.as_completed(results):
if x % m == 0:
sys.stderr.write('Processed ' + str(x) + ' tasks\n')
            sys.stderr.flush()
        x += 1
###Output
_____no_output_____
###Markdown
Run VEP
###Code
def get_vep_refs(api, project):
try:
ref_dict = {}
ref_dict['reference'] = api.files.get('5f185f0de4b09d9af8ae456e')
# un comment for using cache
# ref_dict['cache'] = api.files.get('5eed0f54e4b0efd899f4afda')
# ref_dict['merged_cache'] = True
ref_dict['bgzipped_gtf'] = api.files.get('5f3550c2e4b0efd8002a853b')
ref_dict['tool_name'] = 'STAR_GATK4'
except Exception as e:
sys.stderr.write(str(e) + "\nFailed to get REFS\n")
exit(1)
return ref_dict
def draft_vep_task(in_file):
try:
input_dict = {}
for key in ref_obj:
input_dict[key] = ref_obj[key]
info = in_file.rstrip('\n').split(',')
input_dict['input_vcf'] = api.files.get(info[0])
task_name = "VEP R100 GTF ANNOTATE RPT: " + info[sidx]
task = api.tasks.create(name=task_name, project=project, app=app_name, inputs=input_dict, run=False)
task.inputs['output_basename'] = task.id
task.save()
except Exception as e:
sys.stderr.write(str(e) + "\nfailed to set up task for " + in_file)
exit(1)
project = 'd3b-bixu/rs-vpf5jbc3-cov-irt-controlled-access-study'
app_name = project + "/vep-1oo-annotate"
manifest = open("/Users/brownm28/Documents/2020-Apr-8_RNAseq_snv_dev/2020-08_manifests/vcf-manifest.csv")
head = next(manifest)
header = head.rstrip('\n').split(',')
sidx = header.index('sample_id')
ref_obj = get_vep_refs(api, project)
x = 1
m = 250
with concurrent.futures.ThreadPoolExecutor(16) as executor:
results = {executor.submit(draft_vep_task, line ): line for line in manifest}
for result in concurrent.futures.as_completed(results):
if x % m == 0:
sys.stderr.write('Processed ' + str(x) + ' tasks\n')
sys.stderr.flush()
x += 1
###Output
_____no_output_____
###Markdown
Run STAR Fusion
###Code
def get_sf_refs(api, project):
try:
ref_dict = {}
ref_dict['genome_tar'] = api.files.get('5f19e9cee4b0a6d31720b606')
ref_dict['genome_untar_path'] = 'ctat_genome_lib_build_dir'
except Exception as e:
sys.stderr.write(str(e) + "\nFailed to get REFS\n")
exit(1)
return ref_dict
def draft_sf_task(in_file):
try:
input_dict = {}
for key in ref_obj:
input_dict[key] = ref_obj[key]
info = in_file.rstrip('\n').split(',')
input_dict['Chimeric_junction'] = api.files.get(info[0])
task_name = "STAR FUSION: " + info[sidx]
task = api.tasks.create(name=task_name, project=project, app=app_name, inputs=input_dict, run=False)
task.inputs['SampleID'] = task.id
task.save()
except Exception as e:
sys.stderr.write(str(e) + "\nfailed to set up task for " + in_file)
exit(1)
project = 'd3b-bixu/rs-vpf5jbc3-cov-irt-controlled-access-study'
app_name = project + "/star-fusion-covirt"
manifest = open("/Users/brownm28/Documents/2020-Apr-8_RNAseq_snv_dev/2020-08_manifests/chim_junction-manifest.csv")
head = next(manifest)
header = head.rstrip('\n').split(',')
sidx = header.index('sample_id')
ref_obj = get_sf_refs(api, project)
x = 1
m = 250
with concurrent.futures.ThreadPoolExecutor(16) as executor:
results = {executor.submit(draft_sf_task, line ): line for line in manifest}
for result in concurrent.futures.as_completed(results):
if x % m == 0:
sys.stderr.write('Processed ' + str(x) + ' tasks\n')
sys.stderr.flush()
x += 1
###Output
_____no_output_____
###Markdown
Run arriba
###Code
def get_arriba_refs(api, project):
try:
ref_dict = {}
ref_dict['reference_fasta'] = api.files.get('5f185f0de4b09d9af8ae456e')
ref_dict['gtf_anno'] = api.files.get('5f186055e4b09d9af8ae4585')
except Exception as e:
sys.stderr.write(str(e) + "\nFailed to get REFS\n")
exit(1)
return ref_dict
def draft_arriba_task(samp_id):
try:
input_dict = {}
for key in ref_obj:
input_dict[key] = ref_obj[key]
for key in inputs[samp_id]:
input_dict[key] = api.files.get(inputs[samp_id][key])
task_name = "ARRIBA FUSION: " + samp_id
task = api.tasks.create(name=task_name, project=project, app=app_name, inputs=input_dict, run=False)
task.inputs['outFileNamePrefix'] = task.id
task.save()
except Exception as e:
        sys.stderr.write(str(e) + "\nfailed to set up task for " + samp_id)
exit(1)
project = 'd3b-bixu/rs-vpf5jbc3-cov-irt-controlled-access-study'
app_name = project + "/arriba-fusion"
inputs = {}
# process two manifests, chimeric_sam, genome bam + bai,
chim_sam = open("/Users/brownm28/Documents/2020-Apr-8_RNAseq_snv_dev/2020-08_manifests/chimeric_sam-manifest.csv")
head = next(chim_sam)
header = head.rstrip('\n').split(',')
sidx = header.index('sample_id')
for line in chim_sam:
info = line.rstrip('\n').split(',')
inputs[info[sidx]] = {}
inputs[info[sidx]]['chimeric_sam_out'] = info[0]
chim_sam.close()
ba_manifest = open("/Users/brownm28/Documents/2020-Apr-8_RNAseq_snv_dev/2020-08_manifests/bam_bai.csv")
head = next(ba_manifest)
header = head.rstrip('\n').split(',')
sidx = header.index('sample_id')
for line in ba_manifest:
info = line.rstrip('\n').split(',')
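    # the last three characters distinguish 'bam' from 'bai' files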
suffix = info[1][-3:]
inputs[info[sidx]][('genome_aligned_' + suffix)] = info[0]
ba_manifest.close()
ref_obj = get_arriba_refs(api, project)
x = 1
m = 250
with concurrent.futures.ThreadPoolExecutor(16) as executor:
results = {executor.submit(draft_arriba_task, samp_id ): samp_id for samp_id in inputs}
for result in concurrent.futures.as_completed(results):
if x % m == 0:
sys.stderr.write('Processed ' + str(x) + ' tasks\n')
sys.stderr.flush()
x += 1
# for samp_id in inputs:
# draft_arriba_task(samp_id)
###Output
_____no_output_____
###Markdown
Run annoFuse
###Code
def get_af_refs(api, project):
try:
ref_dict = {}
ref_dict['FusionGenome'] = api.files.get('5f19e9cee4b0a6d31720b606')
ref_dict['genome_untar_path'] = 'ctat_genome_lib_build_dir'
except Exception as e:
sys.stderr.write(str(e) + "\nFailed to get REFS\n")
exit(1)
return ref_dict
def draft_annofuse_task(samp_id):
try:
input_dict = {}
for key in ref_obj:
input_dict[key] = ref_obj[key]
for key in inputs[samp_id]:
input_dict[key] = api.files.get(inputs[samp_id][key])
input_dict['sample_name'] = samp_id
task_name = "annoFuse: " + samp_id
task = api.tasks.create(name=task_name, project=project, app=app_name, inputs=input_dict, run=False)
task.inputs['output_basename'] = task.id
task.save()
except Exception as e:
        sys.stderr.write(str(e) + "\nfailed to set up task for " + samp_id)
exit(1)
project = 'd3b-bixu/rs-vpf5jbc3-cov-irt-controlled-access-study'
app_name = project + "/kfdrc-annofuse-wf"
inputs = {}
# process two manifests, fusion files, rsem,
rsem_files = open("/Users/brownm28/Documents/2020-Apr-8_RNAseq_snv_dev/2020-08-20_rsem_annfuse_manifest.tsv")
head = next(rsem_files)
header = head.rstrip('\n').split('\t')
sidx = header.index('sample_id')
for line in rsem_files:
info = line.rstrip('\n').split('\t')
inputs[info[sidx]] = {}
inputs[info[sidx]]['rsem_expr_file'] = info[0]
rsem_files.close()
fusions = open("/Users/brownm28/Documents/2020-Apr-8_RNAseq_snv_dev/2020-08-20_fusion_results-manifest.csv")
head = next(fusions)
header = head.rstrip('\n').split(',')
sidx = header.index('sample_id')
for line in fusions:
info = line.rstrip('\n').split(',')
key = 'arriba_output_file'
if re.search("STAR", info[1]):
key = 'star_fusion_output_file'
inputs[info[sidx]][key] = info[0]
fusions.close()
ref_obj = get_af_refs(api, project)
x = 1
m = 250
with concurrent.futures.ThreadPoolExecutor(16) as executor:
results = {executor.submit(draft_annofuse_task, samp_id ): samp_id for samp_id in inputs}
for result in concurrent.futures.as_completed(results):
if x % m == 0:
sys.stderr.write('Processed ' + str(x) + ' tasks\n')
sys.stderr.flush()
x += 1
###Output
_____no_output_____
###Markdown
Copy metadata to outputs
###Code
def add_metadata_to_outputs(task, phrase, in_key):
if re.search(phrase, task.name):
sys.stderr.write('Valid task found ' + task.name + '\n')
metadata = {}
for key in task.inputs[in_key].metadata:
metadata[key] = task.inputs[in_key].metadata[key]
for out_key in task.outputs:
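            # outputs may be a single file, a list of files, or a nested list;
            # handle each shape and copy metadata onto secondary files as well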
# pdb.set_trace()
try:
if type(task.outputs[out_key]) is not list:
file_obj = api.files.get(task.outputs[out_key].id)
file_obj.metadata = metadata
file_obj.save()
try:
if task.outputs[out_key].secondary_files is not None:
file_obj = api.files.get(task.outputs[out_key].secondary_files[0].id)
file_obj.metadata = metadata
file_obj.save()
except Exception as e:
sys.stderr.write(str(e) + "\nError processing secondary file for " + out_key + " in " + task.id + " skipping\n")
else:
for output in task.outputs[out_key]:
if type(output) is not list:
file_obj = api.files.get(output.id)
file_obj.metadata = metadata
file_obj.save()
else:
for item in output:
if item is not None:
file_obj = api.files.get(item.id)
file_obj.metadata = metadata
file_obj.save()
except Exception as e:
print(e)
print("Skipping " + out_key + " for " + task.name + " due to error")
prefix = 'VEP R100 GTF ANNOTATE RPT'
project = "d3b-bixu/rs-vpf5jbc3-cov-irt-controlled-access-study"
key = 'input_vcf'
print("You sure you want to transfer input metadata to outputs with task prefix: " + prefix + "? Type \"YASS\" if so")
check = input()
if check == "YASS":
tasks = api.tasks.query(project=project, status="COMPLETED").all()
#for task in tasks:
# add_metadata_to_outputs(task, prefix, key)
x = 1
m = 250
    with concurrent.futures.ThreadPoolExecutor(16) as executor:
        results = {executor.submit(add_metadata_to_outputs, task, prefix, key): task for task in tasks}
        for result in concurrent.futures.as_completed(results):
            if x % m == 0:
                sys.stderr.write('Processed ' + str(x) + ' tasks\n')
                sys.stderr.flush()
            x += 1
else:
sys.stderr.write("User did not type YASS, skipping\n")
###Output
_____no_output_____
###Markdown
Add tags to task outputs
###Code
def add_tags_to_outputs(task, phrase, tags):
if re.search(phrase, task.name):
sys.stderr.write('Valid task found ' + task.name + '\n')
for out_key in task.outputs:
# pdb.set_trace()
try:
if type(task.outputs[out_key]) is not list:
file_obj = api.files.get(task.outputs[out_key].id)
file_obj.tags = tags
file_obj.save()
try:
if task.outputs[out_key].secondary_files is not None:
file_obj = api.files.get(task.outputs[out_key].secondary_files[0].id)
file_obj.tags = tags
file_obj.save()
except Exception as e:
sys.stderr.write(str(e) + "\nError processing secondary file for " + out_key + " in " + task.id + " skipping\n")
else:
for output in task.outputs[out_key]:
if type(output) is not list:
file_obj = api.files.get(output.id)
file_obj.tags = tags
file_obj.save()
else:
for item in output:
if item is not None:
file_obj = api.files.get(item.id)
file_obj.tags = tags
file_obj.save()
except Exception as e:
print(e)
print("Skipping " + out_key + " for " + task.name + " due to error")
prefix = 'VEP R100 GTF ANNOTATE'
project = "d3b-bixu/rs-vpf5jbc3-cov-irt-controlled-access-study"
tags = ['GATK4', 'VEP', 'R100', 'GTF-ANNOTATED']
print("You sure tag outputs with task prefix: " + prefix + "? Type \"YASS\" if so")
check = input()
if check == "YASS":
tasks = api.tasks.query(project=project, status="COMPLETED").all()
    #for task in tasks:
    #    add_tags_to_outputs(task, prefix, tags)
    x = 1
    m = 250
    with concurrent.futures.ThreadPoolExecutor(16) as executor:
        results = {executor.submit(add_tags_to_outputs, task, prefix, tags): task for task in tasks}
        for result in concurrent.futures.as_completed(results):
            if x % m == 0:
                sys.stderr.write('Processed ' + str(x) + ' tasks\n')
                sys.stderr.flush()
            x += 1
else:
sys.stderr.write("User did not type YASS, skipping\n")
###Output
_____no_output_____
###Markdown
Get task outputs
###Code
def write_to_manifest(out_fh, file_obj, out_key, task_name):
out_fh.write(",".join([file_obj.id, file_obj.name, out_key, task_name]) + "\n")
project = "d3b-bixu/rs-vpf5jbc3-cov-irt-controlled-access-study"
tasks = api.tasks.query(project=project, status="COMPLETED").all()
out = open('/Users/brownm28/Documents/2020-Apr-8_RNAseq_snv_dev/manifests/test_vep_out.txt', 'w')
out.write("id,name,output_category,task_name\n")
phrase = "VEP R100 ANNOTATE: COVHA-20200311-P1-A01-CP"
for task in tasks:
if re.search(phrase, task.name):
sys.stderr.write('Processing task: ' + task.name + "\n")
for out_key in task.outputs:
if type(task.outputs[out_key]) is not list:
file_obj = task.outputs[out_key]
write_to_manifest(out, file_obj, out_key, task.name)
if task.outputs[out_key].secondary_files is not None:
write_to_manifest(out, task.outputs[out_key].secondary_files[0], out_key, task.name)
else:
for i in range(len(task.outputs[out_key])):
if type(task.outputs[out_key][i]) is not list:
write_to_manifest(out, task.outputs[out_key][i], out_key, task.name)
if task.outputs[out_key][i].secondary_files is not None:
write_to_manifest(out, task.outputs[out_key][i].secondary_files[0], out_key, task.name)
else:
for j in range(len(task.outputs[out_key][i])):
if task.outputs[out_key][i][j] is not None:
write_to_manifest(out, task.outputs[out_key][i][j], out_key, task.name)
if task.outputs[out_key][i][j].secondary_files is not None:
write_to_manifest(out, task.outputs[out_key][i][j].secondary_files[0], out_key, task.name)
out.close()
###Output
_____no_output_____
###Markdown
Abort unresponsive and restart tasks
###Code
import datetime
import pytz
project = "d3b-bixu/rs-vpf5jbc3-cov-irt-controlled-access-study"
tasks = api.tasks.query(project=project, status="RUNNING")
current = datetime.datetime.now()
tz = pytz.timezone('America/New_York')
prefix = "GATK RNAseq SNV"
task_abort = open("/Users/brownm28/Documents/2020-Apr-8_RNAseq_snv_dev/TASK_RUN/aborted_and_restarted2.log", 'w')
task_abort.write("Task ID\tTask name\tNew Task ID")
for task in tasks:
if re.search(prefix, task.name):
for job in task.get_execution_details().jobs:
if job.name == "preprocess_rnaseq_bam_sambamba_md_sorted":
if job.status == "RUNNING":
                    # use total_seconds() so runs longer than a day are not wrapped by .seconds
                    diff = (current - pytz.utc.localize(job.start_time, is_dst=None).astimezone(tz).replace(tzinfo=None)).total_seconds() / 3600
if diff > 2:
task_abort.write(task.id + "\t" + task.name)
in_dict = {}
sys.stderr.write("Aborting " + task.id + "\t" + task.name + "\n" )
task.abort()
new_task = task.clone(run=False)
new_task.inputs['output_basename'] = new_task.id
new_task.save()
task_abort.write("\t" + new_task.id + "\n")
task_abort.flush()
else:
break
else:
break
task_abort.close()
###Output
_____no_output_____
###Markdown
Restart failed tasks
###Code
project = "d3b-bixu/rs-vpf5jbc3-cov-irt-controlled-access-study"
tasks = api.tasks.query(project=project, status="FAILED")
prefix = "GATK RNAseq SNV"
task_restart = open("/Users/brownm28/Documents/2020-Apr-8_RNAseq_snv_dev/TASK_RUN/failed_and_restarted2.log", 'w')
task_restart.write("Task ID\tTask name\tNew Task ID\n")
run_list = ["GATK RNAseq SNV: COVHA-20200315-P9-F02-N",
"GATK RNAseq SNV: COVHA-20200316-P12-F01-P","GATK RNAseq SNV: COVHA-20200403-P1-C06-P"]
for task in tasks:
if re.search(prefix, task.name) and task.name in run_list:
task_restart.write(task.id + "\t" + task.name)
new_task = task.clone(run=False)
new_task.inputs['output_basename'] = new_task.id
new_task.save()
task_restart.write("\t" + new_task.id + "\n")
task_restart.flush()
task_restart.close()
###Output
_____no_output_____
###Markdown
Get Failed list
###Code
project = "d3b-bixu/rs-vpf5jbc3-cov-irt-controlled-access-study"
tasks = api.tasks.query(project=project, status="FAILED")
prefix = "GATK RNAseq SNV"
task_failed = open("/Users/brownm28/Documents/2020-Apr-8_RNAseq_snv_dev/TASK_RUN/failed.log", 'w')
task_failed.write("Task ID\tTask name\n")
for task in tasks:
if re.search(prefix, task.name):
task_failed.write(task.id + "\t" + task.name + "\n")
task_failed.close()
###Output
_____no_output_____
###Markdown
Remove outputs from failed and aborted tasks
###Code
project = "d3b-bixu/rs-vpf5jbc3-cov-irt-controlled-access-study"
tasks = api.tasks.query(project=project, status="ABORTED")
prefix = "GATK RNAseq SNV"
print("You sure remove outputs from failed tasks with prefix: " + prefix + "? Type \"YASS\" if so")
check = input()
if check == "YASS":
for task in tasks:
if re.search(prefix, task.name):
for key in task.outputs:
if task.outputs[key] is not None:
sys.stderr.write("Found files to remove from failed task: " + task.id + " " + task.name + "\n")
try:
if task.outputs[key].secondary_files is not None:
sys.stderr.write("Removing secondary files\n")
for i in range(0, len(task.outputs[key].secondary_files), 1):
task.outputs[key].secondary_files[i].delete()
except Exception as e:
sys.stderr.write(str(e) + "\nFile with key " + key + " probably does not have secondaryFiles, skipping\n")
try:
task.outputs[key].delete()
except Exception as e:
sys.stderr.write(str(e) + "\nFile with key " + key + " was probably deleted before, skipping\n")
sys.stderr.write("Finished processing " + task.id + "\n")
###Output
_____no_output_____
###Markdown
Rename _\d_ files
###Code
manifest = open("/Users/brownm28/Documents/2020-Apr-8_RNAseq_snv_dev/to_rename.txt")
head = next(manifest)
print("You sure you want to rename the files in that manifest? Type \"YASS\" if so")
check = input()
sep = "\t"
if check == "YASS":
for line in manifest:
info = line.split(sep)
cur = api.files.get(info[0])
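        # strip the first three characters (the '_\d_' prefix) from the file name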
new_name = cur.name[3:]
sys.stderr.write("Renaming file with ID " + cur.id + " " + cur.name + " to " + new_name + "\n")
cur.name = new_name
cur.save()
###Output
_____no_output_____
###Markdown
Tag outputs by task seq id
###Code
project = "d3b-bixu/rs-vpf5jbc3-cov-irt-controlled-access-study"
tasks = api.tasks.query(project=project, status="COMPLETED").all()
manifest = open('/Users/brownm28/Documents/2020-Apr-8_RNAseq_snv_dev/manifests/2020-06-23_UPDATED_HUMAN_MANIFEST.txt')
head = next(manifest)
header = head.rstrip('\n').split("\t")
phrase = "GATK RNAseq SNV RPT"
meta_dict = {}
for entry in manifest:
info = entry.rstrip('\n').split('\t')
meta_dict[info[0]] = {}
for i in range(0, len(header), 1):
meta_dict[info[0]][header[i]] = info[i]
manifest.close()
for task in tasks:
if re.search(phrase, task.name):
sys.stderr.write('Processing task: ' + task.name + "\n")
parts = task.name.split()
# Mason lab changed dashes to underscore
seq_id = parts[-1].replace('-','_' )
if seq_id in meta_dict:
for out_key in task.outputs:
if type(task.outputs[out_key]) is not list:
try:
file_obj = api.files.get(task.outputs[out_key].id)
file_obj.metadata = meta_dict[seq_id]
file_obj.save()
except Exception as e:
sys.stderr.write(str(e) + "\n" + file_obj.name + " probably already tagged, skipping\n" )
try:
if task.outputs[out_key].secondary_files is not None:
file_obj = api.files.get(task.outputs[out_key].secondary_files[0].id)
file_obj.metadata = meta_dict[seq_id]
file_obj.save()
except Exception as e:
sys.stderr.write(str(e) + "\nError processing secondary file for " + out_key + " in " + task.id + " skipping\n")
else:
sys.stderr.write("Not in manifest: " + task.name + " " + task.id + "\n")
###Output
_____no_output_____
###Markdown
Delete files by name
###Code
manifest = open("/Users/brownm28/Documents/2020-Apr-8_RNAseq_snv_dev/delme_files.txt")
project = "d3b-bixu/rs-vpf5jbc3-cov-irt-controlled-access-study"
head = next(manifest)
print("You sure you want to delete the files in that manifest? Type \"YASS\" if so")
check = input()
max_j = 25
ct = 0
found = 0
fnames = []
if check == "YASS":
for line in manifest:
fnames.append(line.rstrip('\n'))
ct +=1
sys.stderr.write("Searching for " + str(ct) + " files to delete\n")
total = len(fnames)
for i in range(0, total, max_j):
uset = i + max_j
if uset > total:
uset = total
flist = api.files.query(project=project, names=fnames[i:uset])
for fobj in flist:
sys.stderr.write("Deleting " + fobj.name + " with ID " + fobj.id)
fobj.delete()
found += 1
sys.stderr.write("Deleted " + str(found) + " files\n")
###Output
_____no_output_____ |
examples/user_guide/Specifying_Meshes.ipynb | ###Markdown
This notebook demonstrates one way to use the Bokeh/HoloViews [Drawing Tools](Drawing_Tools.ipynb) and the EarthSim [Annotators](Annotators.ipynb) to define polygons and refine points to specify how to generate a ``FiligreeMesh`` irregular triangular grid covering an area of a map. This mesh can then be used as an input to a simulator that will use the indicated level of detail in each region of a map.
###Code
import panel as pn
import holoviews as hv
import geoviews as gv
import cartopy.crs as ccrs
from earthsim.annotators import PolyAndPointAnnotator
from earthsim.filigree import FiligreeMesh, FiligreeMeshDashboard
hv.extension('bokeh')
%opts Polygons (color='red' alpha=0.5 selection_alpha=0.8 nonselection_alpha=0.2)
%opts Points (size=10 nonselection_alpha=0.5) [tools=['hover']] RGB [width=900 height=600]
###Output
_____no_output_____
###Markdown
Simple workflow

1. Edit the existing polygon or delete it and draw one or more polygons of your own
2. Draw one or more refine points within this region, adding a numeric size for each one by editing the 'Size' column in the subsequent table.
###Code
bounds = (-10130073.550868405, 3789592.5934560597, -10107809.875348726, 3815932.0009413)
annot = PolyAndPointAnnotator(polys=[hv.Bounds(bounds)])
annot.panel()
###Output
_____no_output_____
###Markdown
The ``FiligreeMesh`` class accepts a ``GeoAnnotator`` and adds the polygons and refine points drawn using it to an underlying filigree.FiligreeMesh. Once the polygons and points are added we can create a constant size function and declare the mesh size and then run and view the resultant mesh:
###Code
mesh = FiligreeMesh(draw_helper=annot)
mesh.mesh.create_constant_size_function(500, 5)
mesh.mesh.set_outside_mesh_size(500)
mesh.view()
###Output
_____no_output_____
###Markdown
Here sizes should be in meters. Note that as of this writing, if you select size values that, when combined with the location of your point, extend beyond the boundaries of the polygon, Filigree will ignore that point, which can be confusing.

Dashboard

Instead of splitting the above workflow across two notebook cells, we can instead organize it as a single plot, which computes the mesh whenever we press a button.
###Code
annot = PolyAndPointAnnotator(extent=(-110, 42, -109, 43))
dashboard = FiligreeMeshDashboard(draw_helper=annot)
dashboard.mesh.create_constant_size_function(500, 5)
dashboard.mesh.set_outside_mesh_size(500)
dashboard.panel()
###Output
_____no_output_____
###Markdown
This notebook demonstrates one way to use the Bokeh/HoloViews [Drawing Tools](Drawing_Tools.ipynb) and the EarthSim [Annotators](Annotators.ipynb) to define polygons and refine points to specify how to generate a ``FiligreeMesh`` irregular triangular grid covering an area of a map. This mesh can then be used as an input to a simulator that will use the indicated level of detail in each region of a map.
###Code
import holoviews as hv
import geoviews as gv
import cartopy.crs as ccrs
import parambokeh
from earthsim.annotators import PolyAndPointAnnotator
from earthsim.filigree import FiligreeMesh, FiligreeMeshDashboard
hv.extension('bokeh')
###Output
_____no_output_____
###Markdown
Simple workflow

1. Edit the existing polygon or delete it and draw one or more polygons of your own
2. Draw one or more refine points within this region, adding a numeric size for each one by editing the 'Size' column in the subsequent table.
###Code
%%opts Polygons (color='red' alpha=0.5 selection_alpha=0.8 nonselection_alpha=0.2)
%%opts Points (size=10 nonselection_alpha=0.5)
bounds = (-10130073.550868405, 3789592.5934560597, -10107809.875348726, 3815932.0009413)
annot = PolyAndPointAnnotator(polys=[hv.Bounds(bounds)])
annot.view()
###Output
_____no_output_____
###Markdown
The ``FiligreeMesh`` class accepts a ``GeoAnnotator`` and adds the polygons and refine points drawn using it to an underlying filigree.FiligreeMesh. Once the polygons and points are added we can create a constant size function and declare the mesh size and then run and view the resultant mesh:
###Code
%%opts RGB [width=900 height=600]
%%opts Points (size=10 color='blue') [tools=['hover']]
mesh = FiligreeMesh(draw_helper=annot)
mesh.mesh.create_constant_size_function(500, 5)
mesh.mesh.set_outside_mesh_size(500)
mesh.view()
###Output
_____no_output_____
###Markdown
Here sizes should be in meters. Note that as of this writing, if you select size values that, when combined with the location of your point, extend beyond the boundaries of the polygon, Filigree will ignore that point, which can be confusing.

Dashboard

Instead of splitting the above workflow across two notebook cells, we can instead organize it as a single plot, which computes the mesh whenever we press a button.
###Code
%%opts Polygons (color='red' alpha=0.5 selection_alpha=0.8 nonselection_alpha=0.2)
%%opts Points (size=10 nonselection_alpha=0.5)
annot = PolyAndPointAnnotator()
dashboard = FiligreeMeshDashboard(draw_helper=annot)
dashboard.mesh.create_constant_size_function(500, 5)
dashboard.mesh.set_outside_mesh_size(500)
parambokeh.Widgets(dashboard)
dashboard.view()
###Output
_____no_output_____
###Markdown
This notebook demonstrates one way to use the Bokeh/HoloViews [Drawing Tools](Drawing_Tools.ipynb) and the EarthSim [Annotators](Annotators.ipynb) to define polygons and refine points to specify how to generate a ``FiligreeMesh`` irregular triangular grid covering an area of a map. This mesh can then be used as an input to a simulator that will use the indicated level of detail in each region of a map.
###Code
import panel as pn
import holoviews as hv
import geoviews as gv
import cartopy.crs as ccrs
from earthsim.annotators import PolyAndPointAnnotator
from earthsim.filigree import FiligreeMesh, FiligreeMeshDashboard
hv.extension('bokeh')
###Output
_____no_output_____
###Markdown
Simple workflow

1. Edit the existing polygon or delete it and draw one or more polygons of your own
2. Draw one or more refine points within this region, adding a numeric size for each one by editing the 'Size' column in the subsequent table.
###Code
%%opts Polygons (color='red' alpha=0.5 selection_alpha=0.8 nonselection_alpha=0.2)
%%opts Points (size=10 nonselection_alpha=0.5)
bounds = (-10130073.550868405, 3789592.5934560597, -10107809.875348726, 3815932.0009413)
annot = PolyAndPointAnnotator(polys=[hv.Bounds(bounds)])
annot.view()
###Output
_____no_output_____
###Markdown
The ``FiligreeMesh`` class accepts a ``GeoAnnotator`` and adds the polygons and refine points drawn using it to an underlying filigree.FiligreeMesh. Once the polygons and points are added we can create a constant size function and declare the mesh size and then run and view the resultant mesh:
###Code
%%opts RGB [width=900 height=600]
%%opts Points (size=10 color='blue') [tools=['hover']]
mesh = FiligreeMesh(draw_helper=annot)
mesh.mesh.create_constant_size_function(500, 5)
mesh.mesh.set_outside_mesh_size(500)
mesh.view()
###Output
_____no_output_____
###Markdown
Here sizes should be in meters. Note that as of this writing, if you select size values that, when combined with the location of your point, extend beyond the boundaries of the polygon, Filigree will ignore that point, which can be confusing.

Dashboard

Instead of splitting the above workflow across two notebook cells, we can instead organize it as a single plot, which computes the mesh whenever we press a button.
###Code
%%opts Polygons (color='red' alpha=0.5 selection_alpha=0.8 nonselection_alpha=0.2)
%%opts Points (size=10 nonselection_alpha=0.5)
annot = PolyAndPointAnnotator()
dashboard = FiligreeMeshDashboard(draw_helper=annot)
dashboard.mesh.create_constant_size_function(500, 5)
dashboard.mesh.set_outside_mesh_size(500)
pn.Row(dashboard.param, dashboard.view())
###Output
_____no_output_____ |
notebooks/Temperature Conversion.ipynb | ###Markdown
Fahrenheit to Celsius===========
###Code
Fahrenheit = 32.0
Celsius = (Fahrenheit - 32) * 5.0/9.0
print("Temperature: {F} Fahrenheit = {C} Celsius".format(F=Fahrenheit, C=Celsius))
###Output
Temperature: 32.0 Fahrenheit = 0.0 Celsius
###Markdown
Celsius to Fahrenheit===========
###Code
Celsius = 100.0
Fahrenheit = 9.0/5.0 * Celsius + 32
print("Temperature: {C} Celsius = {F} Fahrenheit".format(F=Fahrenheit, C=Celsius))
###Output
Temperature: 100.0 Celsius = 212.0 Fahrenheit
###Markdown
Plot Example=======
###Code
%matplotlib inline
import matplotlib.pyplot as plt
def C2F(C):
return 9.0/5.0 * C + 32
C2F(100)
x = [C2F(c) for c in range(101)]
x[0:10]
plt.title("Temperature Conversion")
plt.xlabel("Celsius")
plt.ylabel("Fahrenheit")
plt.plot(x)
###Output
_____no_output_____ |
Challenge_September_2020.ipynb | ###Markdown
Ingham Medical Physics Coding Challenge - September 2020

This Jupyter notebook describes the coding challenge for the Radiotherapy Computer Scientist position within the Ingham Institute Medical Physics Group hiring in September 2020. The goal of this challenge is to train a model to predict outcomes for cancer patients and present the results.

Data

This task makes use of data obtained from The Cancer Imaging Archive: Head-Neck-Radiomics-HN1 (https://wiki.cancerimagingarchive.net/display/Public/Head-Neck-Radiomics-HN1), which is available under the Attribution-NonCommercial 3.0 Unported licence. This dataset includes clinical data and computed tomography (CT) from 137 head and neck squamous cell carcinoma (HNSCC) patients treated with radiotherapy. Structures within the CT images have also been manually delineated by an experienced radiation oncologist. Two CSV files are provided alongside this notebook in the **data** directory:

HN_ClinicalData.csv

This sheet contains the clinical data of the patients included within the Head-Neck-Radiomics-HN1 dataset. It provides information such as the patient's age, stage of disease and various outcomes. Additionally, these patients have also been randomly split into a **train** and **test** set (see the dataset column).

HN_Radiomics.csv

Radiomic features have been generated using the patient's image data available in the Head-Neck-Radiomics-HN1 dataset. The **pyradiomics** library was used to extract first-order and shape features from the patients' CT scans. Features are computed per structure (region of interest). A structure of particular significance for radiotherapy is the Gross Tumour Volume (GTV). This describes the position and extent of the tissue identified as tumour (see https://www.ncbi.nlm.nih.gov/pmc/articles/PMC1434601/ for more information). Note that patients may have more than one GTV; these are named GTV-1, GTV-2, ... GTV-*n*, where *n* is the number of tumour volumes for that patient.

Task

Using the data found in the two CSV files, train a model which can predict an outcome for a patient. A common outcome to predict would be the overall survival for the patient (found in the column *overall_survival_in_days* within the clinical data). Some different outcomes are also available within this clinical data, such as *recurrence_metastatic_free_survival_in_days*, *local_recurrence_in_days* and *distant_metastases_in_days*. Make use of the clinical data and radiomic features to attempt to predict these outcomes.

Hint: The GTV will probably be the most useful structure to help you predict this, since it describes the cancerous tissue. Since multiple GTVs are available for many patients, you will need to think about a good way to combine these rows for those patients. There are also many radiomic features available; think about selecting a subset of these to train your model which you think might be useful to predict a particular outcome for a patient.

Train the model using the patients in the **train** dataset (dataset column in the clinical data). Then test your model using the patients in the **test** dataset. Think about different algorithms you might want to try for your model. Doing a regression to predict the outcome might be difficult to get good results, so you could try assigning patients to a "good" or "bad" outcome class and turn this into a classification problem. Finally, generate one or more plots which show how well your model is performing to predict a certain outcome.
Note

The aim of this challenge is not to build a model with excellent results, so don't worry if your model isn't performing all that well. This is a cutting-edge topic of active research and is not easy to solve. What we want to see is how you approach a problem like this, how you present your results and your overall coding style.

Submission

In this Jupyter notebook some Python code is provided to get you started with the challenge. The libraries you'll need are defined in the accompanying *requirements.txt* file. To complete the challenge, you can extend this notebook with your code. If you prefer, you can provide your solution in a separate file (or files) as well. If you would prefer to complete this task in a different programming language, no problem! Feel free to use R, MATLAB or anything else you feel is appropriate.

The suggested way to submit your result to this challenge is to fork this GitHub repository and commit your results to your fork. Once complete, just send us a link ([email protected]) to your forked repository. This will mean your submission is publicly visible. If you would prefer to keep your submission private, this is also no problem. You will just need to duplicate this repository (https://docs.github.com/en/github/creating-cloning-and-archiving-repositories/duplicating-a-repository), then add **@pchlap** as a user to your private repository so that we can see your results.

**Due Date:** September 30th @ 11.59pm AEST.

If you have any questions, post them as an issue on this GitHub repository or directly email [email protected].

Resources

- **pyradiomics** features: https://pyradiomics.readthedocs.io/en/latest/features.html
- **pandas**: https://pandas.pydata.org/docs/
- **scikit-learn**: https://scikit-learn.org/stable/user_guide.html
- **seaborn**: https://seaborn.pydata.org/index.html

Good luck!
###Code
from pathlib import Path
# Define paths to our data
data_path = Path("data")
radiomics_path = data_path.joinpath("HN_Radiomics.csv")
clinical_data_path = data_path.joinpath("HN_ClinicalData.csv")
import pandas as pd
# Load the data
df_clinical_data = pd.read_csv(clinical_data_path)
df_radiomics = pd.read_csv(radiomics_path)
###Output
_____no_output_____
###Markdown
Extract and combine specific features

This cell demonstrates how you might extract radiomic features (VoxelVolume and SurfaceArea) for all GTVs. Since there can be multiple GTVs per patient, these are combined by summing the values for each patient here. You'll probably want to extend this to extract more features. Think about how you would combine other features; in other cases computing the mean value might be more appropriate, or perhaps you don't want to combine them at all (a commented sketch is included in the cell below). Also, take a look at what else is available in the clinical data; perhaps you'd like to use some of these features as well (patient age or cancer stage).
###Code
df_gtv_radiomics = df_radiomics[df_radiomics["Structure"].str.startswith("GTV")]
df_gtv_radiomics = df_gtv_radiomics.groupby("id")[["VoxelVolume", "SurfaceArea"]].sum()
# TODO: Extract more/different features
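# A sketch of one alternative (assumption: the exact first-order column names,
# e.g. 'Mean' or 'Energy', depend on the pyradiomics output): average intensity
# features across a patient's GTVs instead of summing them.
# df_gtv_mean = (df_radiomics[df_radiomics["Structure"].str.startswith("GTV")]
#                .groupby("id")[["Mean", "Energy"]].mean())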
###Output
_____no_output_____
###Markdown
Merge feature(s) with clinical data

This cell combines the features with the clinical data in a DataFrame.
###Code
df = df_clinical_data.merge(df_gtv_radiomics, on="id")
###Output
_____no_output_____
###Markdown
Plot our data

Here we plot the features we just extracted against the patient outcome (overall survival in days).
###Code
import seaborn as sns
pair_grid = sns.PairGrid(df, y_vars=["overall_survival_in_days"], x_vars=["VoxelVolume", "SurfaceArea"], height=6, hue="dataset")
ax = pair_grid.map(sns.scatterplot)
ax = pair_grid.add_legend()
###Output
_____no_output_____
###Markdown
Fit your model

Using the data you have prepared above, fit a model to see if you can predict the outcome of the patients. If you're not sure where to start, try using a linear regression... Regression not working well? Try turning this into a classification problem and see if you can instead predict a "good" or a "bad" outcome. Experiment with different algorithms for your model. There are many available in the sklearn library, but feel free to use something different if you prefer.
###Code
from sklearn.linear_model import LinearRegression
X_train = df[df["dataset"]=="train"][["VoxelVolume", "SurfaceArea"]]
X_test = df[df["dataset"]=="test"][["VoxelVolume", "SurfaceArea"]]
y_train = df[df["dataset"]=="train"]["overall_survival_in_days"]
y_test = df[df["dataset"]=="test"]["overall_survival_in_days"]
# TODO: Fit model...
###Output
_____no_output_____
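###Markdown
A minimal sketch of fitting and scoring the regression (illustrative only, using the two features extracted above):
###Code
# Fit a plain linear regression and report the mean absolute error on the
# held-out test patients (a rough baseline, not a tuned solution)
from sklearn.metrics import mean_absolute_error
model = LinearRegression()
model.fit(X_train, y_train)
y_pred = model.predict(X_test)
print("Test MAE (days):", mean_absolute_error(y_test, y_pred))
###Output
_____no_output_____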
###Markdown
Plot Results

Visualize the performance of your model with some plots. Try to be creative and think about some unique ways to allow others to explore your results. (A minimal scatter sketch is included in the cell below.)
###Code
# TODO: Plot results...
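# A sketch (assumes the fitted `model` and `y_pred` from the previous cell):
# scatter predicted vs. actual survival with the identity line for reference
import matplotlib.pyplot as plt
plt.scatter(y_test, y_pred)
lims = [y_test.min(), y_test.max()]
plt.plot(lims, lims, "k--")
plt.xlabel("Actual overall survival (days)")
plt.ylabel("Predicted overall survival (days)")
plt.show()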
###Output
_____no_output_____ |
dev_setups_postgres.ipynb | ###Markdown
Dev Setups -- Connecting Python and SQL

The purpose of this Jupyter notebook is to demonstrate the usefulness of connecting Python to a relational database by using a Python toolkit called SQLAlchemy.

***Note! The commands below were written for Python 2. Small adjustments will need to be made to some (i.e. print statements) in Python 3.***

***First off, what is a relational database?***

Basically, it is a way to store data such that information can be retrieved from it. MySQL and PostgreSQL are examples of relational databases. For the purposes of an Insight project, you can use either one.

Why would you use a relational database instead of a csv or two? **A few reasons:**

- They scale easily
- They are easy to query
- It's possible to do transactions in those cases where you need to write to a database, not just read from it
- Everyone in industry uses them, so you should get familiar with them, too.

***What does a relational database look like?***

**Let's set up PostgreSQL**

We can take a look. First we need to set up a few things. The first thing we want to do is to get a PostgreSQL server up and running. Go to http://postgresapp.com/ and follow the three steps listed in the Quick Installation Guide. (If you aren't running a Mac, you can download PostgreSQL at http://www.postgresql.org/) -- you can also use homebrew, but your path will change below.

**If you're on a Mac, you might need to add psql to PATH.** Edit your .bash_profile in your home directory. Since you already installed Anaconda, it should look something like:

```export PATH="/Users/YOUR_USER_NAME/anaconda/bin:$PATH"```

**Right below the line added by Anaconda you can add this line:**

```export PATH="/Applications/Postgres.app/Contents/Versions/latest/bin:$PATH"```

**Save and reload the bash profile:**

```$ source .bash_profile```

**The only user right now for PSQL is 'postgres'; you can make your database and enter it with that username:**

```$ createdb birth_db -U postgres```

```$ psql birth_db```

**If you want to make a new user for this database you can make one now. Note: username in the line below must match your Mac/Linux username:**

```CREATE USER username SUPERUSER PASSWORD 'yourpassword'```

**Exit out of PSQL (\q) and test logging in through this user:**

```$ psql birth_db -h localhost -U username```

```$ \c``` (once in PSQL, to check how you're logged in)

We'll come back to PostgreSQL in a moment. First, we'll set up SQLAlchemy. To get started we need to install two packages into the environment that might not be installed. Run the cell below or enter the commands (without !) into the command line. Note that if you did an Anaconda installation, sqlalchemy_utils is only available through pip, and if you didn't install pip into your environment (dev_setups_conda-part1.html) you will run into problems. Also, you need to install psycopg2 using conda, otherwise you will probably run into different problems. If you mainly installed packages using pip, change the next commands to reflect that. In Jupyter you can run code in the command line with the "!" special character, as you'll see in the next cell. We do this here for ease, but it's generally considered poor practice.
###Code
!pip install sqlalchemy_utils
!conda install psycopg2 -y
## Python packages - you may have to pip install sqlalchemy, sqlalchemy_utils, and psycopg2.
from sqlalchemy import create_engine
from sqlalchemy_utils import database_exists, create_database
import psycopg2
import pandas as pd
###Output
_____no_output_____
###Markdown
(Optional) If Postgres isn't launched on startup

**To have launchd start postgresql at login:**

```ln -sfv /usr/local/opt/postgresql/*.plist ~/Library/LaunchAgents```

**Then to load postgresql now:**

```launchctl load ~/Library/LaunchAgents/homebrew.mxcl.postgresql.plist```

**Or, if you don't want/need launchctl, you can just run:**

```postgres -D /usr/local/var/postgres```

in the command line, and also look at [this page](http://postgresguide.com/) for more details.

Interfacing with PSQL through Python

Update your username and password in the cell below. Then run each cell.
###Code
#In Python: Define your username and password used above. I've defined the database name (we're
#using a dataset on births, so I call it birth_db).
dbname = 'birth_db'
username = 'brittany'
pswd = '1test2'
## 'engine' is a connection to a database
## Here, we're using postgres, but sqlalchemy can connect to other things too.
engine = create_engine('postgresql://%s:%s@localhost/%s'%(username,pswd,dbname))
print('postgresql://%s:%s@localhost/%s'%(username,pswd,dbname))
print(engine.url)
# Replace localhost with IP address if accessing a remote server
## create a database (if it doesn't exist)
if not database_exists(engine.url):
create_database(engine.url)
print(database_exists(engine.url))
print(engine.url)
###Output
True
postgresql://brittany:1test2@localhost/birth_db
###Markdown
Getting some data Time to get some data, head over to https://drive.google.com/open?id=1YlN9vG2qY1DdtC9ni4ItPhoYm7GHTNfu and download the births2012_downsampled.csv.
###Code
# load a database from the included CSV
# edit the string below to account for where you saved the csv.
csv_path = 'births2012_downsampled.csv'
birth_data = pd.read_csv(csv_path)
## insert data into database from Python (proof of concept - this won't be useful for big data, of course)
## df is any pandas dataframe
birth_data.to_sql('birth_data_table', engine, if_exists='replace')
###Output
_____no_output_____
###Markdown
The above line (to_sql) is doing a lot of heavy lifting. It's reading a dataframe, it's creating a table, and adding the data to the table. So **SQLAlchemy is quite useful!**

How this works outside of Python: **open up the PostgreSQL app and click on the "Open psql" button in the bottom right corner,** or alternatively type ```$ psql birth_db -h localhost -U username``` into the command line.

**Type the following into the terminal that opens up:**

`$ \c birth_db`

**You should see something like the following:**

`$ You are now connected to database "birth_db" as user "username".`

**Then try the following query:**

`$ SELECT * FROM birth_data_table;`

You can see the table we created! But it's kinda ugly and hard to read (type 'q' in the terminal to end long output).

**You can try a few other sample queries. Before you type in each one, ask yourself what you think the output will look like:**

`SELECT * FROM birth_data_table WHERE infant_sex='M';`

`SELECT COUNT(infant_sex) FROM birth_data_table WHERE infant_sex='M';`

`SELECT COUNT(gestation_weeks), infant_sex FROM birth_data_table WHERE infant_sex = 'M' GROUP BY gestation_weeks, infant_sex;`

`SELECT gestation_weeks, COUNT(gestation_weeks) FROM birth_data_table WHERE infant_sex = 'M' GROUP BY gestation_weeks;`
###Code
## Now try the same queries, but in python!
# connect:
con = None
con = psycopg2.connect(database = dbname, user = username, host='localhost', password=pswd)
# query:
sql_query = """
SELECT * FROM birth_data_table WHERE delivery_method='Cesarean';
"""
birth_data_from_sql = pd.read_sql_query(sql_query,con)
birth_data_from_sql.head()
###Output
_____no_output_____
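###Markdown
The other sample queries above work the same way from Python. For instance, the last GROUP BY example (a sketch reusing the open connection):
###Code
# Count male births per gestation week, mirroring the psql example above
groupby_query = """
SELECT gestation_weeks, COUNT(gestation_weeks) FROM birth_data_table WHERE infant_sex = 'M' GROUP BY gestation_weeks;
"""
pd.read_sql_query(groupby_query, con).head()
###Output
_____no_output_____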
###Markdown
Is one method of querying the data faster than the other? Probably not for the amount of data you can fit on your machine.
###Code
import time
t0 = time.time()
birth_data_from_sql = pd.read_sql_query(sql_query,con)
t1 = time.time()
total = t1-t0
print('total time take: ' + str(total) + ' seconds')
birth_data = pd.read_csv(csv_path)
t0 = time.time()
birth_data=birth_data.loc[(birth_data['delivery_method'] == 'Cesarean')]
t1 = time.time()
total = t1-t0
print('total time take: ' + str(total) + ' seconds')
###Output
total time take: 0.0013017654418945312 seconds
|
python/organization/twitter.ipynb | ###Markdown
Organization Activity on Twitter

The parameters in the cell below can be adjusted to explore other organizations and time frames.

How to explore other organizations?

The ***organization_id*** is an internal identifier that connects the different social media accounts. You can [use this other notebook](../organizations.ipynb?autorun=true) to get the identifiers of other organizations. ***Alternatively***, you can directly use the [organizations API](http://mediamonitoring.gesis.org/api/organizations/swagger/), or access it with the [SMM Wrapper](https://pypi.org/project/smm-wrapper/).

A. Set Up parameters
###Code
# Parameters:
organization_id = 440
from_date = '2017-09-01'
to_date = '2018-12-31'
aggregation = 'week'
###Output
_____no_output_____
###Markdown
B. Using the SMM Organization API
###Code
# create an instance of the smm wrapper
from smm_wrapper import SMMOrganizations
smm = SMMOrganizations()
# using the api to get the tweets and replies
tweets = smm.api.tweets_by(_id=organization_id, from_date=from_date, to_date=to_date, aggregate_by=aggregation)
replies = smm.api.replies_to(_id=organization_id, from_date=from_date, to_date=to_date, aggregate_by=aggregation)
###Output
_____no_output_____
###Markdown
C. Plotting
###Code
import plotly
from plotly import graph_objs as go
plotly.offline.init_notebook_mode(connected=True)
plotly.offline.iplot({
"data": [go.Scatter(x=tweets['labels'], y=tweets['values'], name='Tweets', line_shape='spline'),
go.Scatter(x=replies['labels'], y=replies['values'], name='Replies', line_shape='spline')],
"layout": go.Layout(title='Tweets and replies', yaxis=dict(title='N'))
})
###Output
_____no_output_____ |
notebooks/exploratory/kernel-one-hot-encoding.ipynb | ###Markdown
One-Hot Kernel Encoding

Based on [Structured Variationally Auto-encoded Optimization (Lu et al., 2018)](http://proceedings.mlr.press/v80/lu18c/lu18c.pdf)

Suppose we have a set of base kernels $\mathcal{B} = \{A, B, C\}$ and a set of operations $\mathcal{O} = \{+, \times, Stop\}$. Applying each base kernel to each of the $D$ input dimensions gives

$\hat{B} = \{A_1, A_2, ..., A_D, B_1, B_2, ..., B_D, C_1, C_2, ..., C_D\}$

\begin{bmatrix} A_1 & B_1 & C_1 \\ A_2 & B_2 & C_2 \\ \vdots & \vdots & \vdots \\ A_D & B_D & C_D \end{bmatrix}

We will one-hot encode vectors for both kernels and operations, so we need $|\mathcal{B}|D$ bits to represent a kernel applied to a single dimension. Any expression $S$ is transformed into a binary vector by recurrently attaching the one-hot vectors of each kernel and operation. When the operation is Stop, the vector is completed with zeros. For example, let $D=8$ and let $N_{max}$ be the maximum number of operations. Then $A_1 + B_2 * C_8$ Stop becomes

100000000000000000000000 100 000000000100000000000000 010 000000000000000000000001 001 000

Kernel Encoding: $ABC$

$A_1: 1000 0000 0000 0000 0000 0000$
$A_2: 0100 0000 0000 0000 0000 0000$
$B_1: 0000 0000 1000 0000 0000 0000$
$B_2: 0000 0000 0100 0000 0000 0000$
$C_1: 0000 0000 0000 0000 1000 0000$
$C_2: 0000 0000 0000 0000 0100 0000$
$C_8: 0000 0000 0000 0000 0000 0001$
###Code
# one-hot encode operations (kept as ints so they can be zero-padded when printed)
add = 0b100
mult = 0b010
stop = 0b001
kernel_families = ['A', 'B', 'C']
D = 8  # number of dimensions; the worked example above uses D = 8

def encode_kernel(family, dim):
    # Place the 1-bit so that A1 maps to the leftmost bit and C_D to the
    # rightmost, matching the encoding table above.
    idx = kernel_families.index(family)
    shift = (len(kernel_families) - idx) * D - dim
    return 0b1 << shift

n_bits = len(kernel_families) * D
for family in kernel_families:
    for d in range(1, D + 1):
        kern_encoding = encode_kernel(family, d)
        print(family + str(d) + ':', format(kern_encoding, '0' + str(n_bits) + 'b'))
    print('')

A1 = encode_kernel('A', 1)
B2 = encode_kernel('B', 2)
C8 = encode_kernel('C', 8)
kern_bits = lambda k: format(k, '0' + str(n_bits) + 'b')
op_bits = lambda op: format(op, '03b')
print('A1 + B2 * C8 Stop =',
      kern_bits(A1) + op_bits(add) + kern_bits(B2) + op_bits(mult) +
      kern_bits(C8) + op_bits(stop) + op_bits(0b000))
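# --- Sketch (an assumption, not in the original notebook): encode a whole
# expression by concatenating kernel and operation one-hot strings, then
# completing the vector with zeros after Stop, as in the worked example. ---
def encode_expression(steps, total_len):
    # steps: list of ((family, dim), op) pairs
    vec = ''
    for (family, dim), op in steps:
        vec += format(encode_kernel(family, dim), '0' + str(n_bits) + 'b')
        vec += format(op, '03b')
    return vec.ljust(total_len, '0')  # zero padding after Stop

print(encode_expression([(('A', 1), add), (('B', 2), mult), (('C', 8), stop)],
                        total_len=3 * (n_bits + 3) + 3))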
add
###Output
_____no_output_____ |
templates/Notebook_Template_With_TableContents.ipynb | ###Markdown
![alt text](https://github.com/callysto/callysto-sample-notebooks/blob/master/notebooks/images/Callysto_Notebook-Banner_Top_06.06.18.jpg?raw=true)
###Code
%%html
<h1 align='center'>Title</h1>
<h4 align='center'>Grade $\mid$ Topic $\mid$ Notebook Author</h4>
import matplotlib.pyplot as plt
import ipywidgets
from ipywidgets import widgets, interact, interact_manual, Button, Layout
from IPython.display import Javascript, display
def table_of_cont(boolean_val):
if boolean_val == True:
fig = plt.figure(figsize=(20,18))
table_of_contents = ["Table of Contents","Introduction","Subtitle I", \
"Subtitle II","Conclusion","References"]
number_of_items = len(table_of_contents)
ax = fig.add_subplot(331)
ax.axis("Off")
ax.invert_yaxis()
for i in range(number_of_items):
if i==0:
ax.text(0,i/5,table_of_contents[i],fontsize=25)
else:
ax.text(0,i/5,table_of_contents[i],fontsize=18)
ax1 = fig.add_subplot(332)
ax1.axis("Off")
ax2 = fig.add_subplot(333)
ax2.axis("Off")
plt.show()
long_name = {'description_width': 'initial'}
show_table_button = widgets.Button(
value=True,
description='Show Table of Contents',
disabled=False,
button_style='info', # 'success', 'info', 'warning', 'danger' or ''
tooltip='Description',
style=long_name,
icon='check'
)
def run_current(ev):
display(Javascript('IPython.notebook.execute_cell_range(IPython.notebook.get_selected_index()+0,IPython.notebook.get_selected_index()+1)'))
ai_button_show = widgets.Button(button_style='info',description="Show Table of Contents", layout=Layout(width='25%', height='30px') )
ai_button_hide = widgets.Button(button_style='info',description="Hide Table of Contents", layout=Layout(width='25%', height='30px') )
button_ctr = 0
button_ctr += 1
if(button_ctr % 2 == 0):
display(ai_button_hide)
ai_button_hide.on_click( run_current )
val = True
table_of_cont(val)
else:
display(ai_button_show)
ai_button_show.on_click( run_current )
val = False
table_of_cont(val)
%%html
<h2 align='center'>Introduction</h2>
<h5 align='center'>
Motivate and introduce the content and context of your notebook. Why are you creating this notebook? What will you teach? Why should the reader care?</h5>
%%html
<h2 align='center'>Background</h2>
<h5 align='center'>Include the background information about your content. Include examples and explanation to help guide your notebooks.
</h5>
%%html
<h2 align='center'>Examples</h2>
<h5 align='center'>Use Python or your own explanations to provide examples and interactivity to help teach and showcase the concepts to the student.</h5>
%matplotlib notebook
from matplotlib import pyplot as plt
from ipywidgets import widgets,Layout
from IPython.display import Javascript
import numpy as np
def run_cells(ev):
display(Javascript('IPython.notebook.execute_cell_range(IPython.notebook.get_selected_index(),IPython.notebook.get_selected_index()+1)'))
mean_exercise_button = widgets.Button( button_style='info',description="Create Canvas", layout=Layout(width='20%', height='30px') )
# On button click, execute the next cell
class LineBuilder:
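    # Collects mouse clicks on the axes and redraws the line through all
    # clicked points (standard matplotlib event-handling pattern)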
def __init__(self, line):
self.line = line
self.xs = list(line.get_xdata())
self.ys = list(line.get_ydata())
self.cid = line.figure.canvas.mpl_connect('button_press_event', self)
def __call__(self, event):
print('click', event)
if event.inaxes!=self.line.axes: return
self.xs.append(event.xdata)
self.ys.append(event.ydata)
self.line.set_data(self.xs, self.ys)
self.line.figure.canvas.draw()
fig = plt.figure()
ax = fig.add_subplot(111)
ax.set_title('Interactive Plot: Click to build line segments')
line, = ax.plot([0], [0]) # empty line
linebuilder = LineBuilder(line)
plt.show()
display(mean_exercise_button)
mean_exercise_button.on_click( run_cells )
%%html
<h3 align='center'>Additional content I</h3>
Other supporting material, videos, or links.
%%html
<h3 align='center'>Exercises I</h3>
<h5>Question 1</h5>
Print question here
from ipywidgets import interact_manual,widgets
s = {'description_width': 'initial'}
@interact(answer =widgets.Select(
options=["Select option","Option 1",\
"Option 3","Option 2",\
"Option 4"],
value='Select option',
description="Sample Question",
disabled=False,
style=s
))
def reflective_angle_question(answer):
if answer=="Select option":
print("Click on the correct answer.\n")
elif answer=="Option 1":
print("Correct!\nReiterate main point.")
elif answer != "Option 1" or answer != "Select Option":
print("Provide feedback that points to correct question")
%%html
<h5>Question 2</h5>
Print question here
from ipywidgets import interact_manual,widgets
s = {'description_width': 'initial'}
@interact(answer =widgets.Select(
options=["Select option","Option 1",\
"Option 3","Option 2",\
"Option 4"],
value='Select option',
description="Sample Question",
disabled=False,
style=s
))
def reflective_angle_question(answer):
if answer=="Select option":
print("Click on the correct answer.\n")
elif answer=="Option 1":
print("Correct!\nReiterate main point.")
elif answer != "Option 1" or answer != "Select Option":
print("Provide feedback that points to correct question")
%%html
<h2 align='center'>Conclusion</h2>
<h5 align='center'>Summarize your notebook. Reiterate the lesson and important takeaways.
</h5>
%%html
<h2 align='center'>References</h2>
###Output
_____no_output_____ |
data_cleaning/05-exercise-inconsistent-data-entry.ipynb | ###Markdown
**This notebook is an exercise in the [Data Cleaning](https://www.kaggle.com/learn/data-cleaning) course. You can reference the tutorial at [this link](https://www.kaggle.com/alexisbcook/inconsistent-data-entry).**

---

In this exercise, you'll apply what you learned in the **Inconsistent data entry** tutorial.

Setup

The questions below will give you feedback on your work. Run the following cell to set up the feedback system.
###Code
from learntools.core import binder
binder.bind(globals())
from learntools.data_cleaning.ex5 import *
print("Setup Complete")
###Output
_____no_output_____
###Markdown
Get our environment set up

The first thing we'll need to do is load in the libraries and dataset we'll be using. We use the same dataset from the tutorial.
###Code
# modules we'll use
import pandas as pd
import numpy as np
# helpful modules
import fuzzywuzzy
from fuzzywuzzy import process
import chardet
# read in all our data
professors = pd.read_csv("../input/pakistan-intellectual-capital/pakistan_intellectual_capital.csv")
# set seed for reproducibility
np.random.seed(0)
###Output
_____no_output_____
###Markdown
Next, we'll redo all of the work that we did in the tutorial.
###Code
# convert to lower case
professors['Country'] = professors['Country'].str.lower()
# remove trailing white spaces
professors['Country'] = professors['Country'].str.strip()
# get the top 10 closest matches to "south korea"
countries = professors['Country'].unique()
matches = fuzzywuzzy.process.extract("south korea", countries, limit=10, scorer=fuzzywuzzy.fuzz.token_sort_ratio)
def replace_matches_in_column(df, column, string_to_match, min_ratio = 47):
# get a list of unique strings
strings = df[column].unique()
# get the top 10 closest matches to our input string
matches = fuzzywuzzy.process.extract(string_to_match, strings,
limit=10, scorer=fuzzywuzzy.fuzz.token_sort_ratio)
    # only keep matches with a ratio >= min_ratio
    close_matches = [match[0] for match in matches if match[1] >= min_ratio]
# get the rows of all the close matches in our dataframe
rows_with_matches = df[column].isin(close_matches)
# replace all rows with close matches with the input matches
df.loc[rows_with_matches, column] = string_to_match
# let us know the function's done
print("All done!")
replace_matches_in_column(df=professors, column='Country', string_to_match="south korea")
countries = professors['Country'].unique()
###Output
_____no_output_____
###Markdown
1) Examine another column

Write code below to take a look at all the unique values in the "Graduated from" column.
###Code
# TODO: Your code here
professors['Graduated from'].unique()
###Output
_____no_output_____
###Markdown
Do you notice any inconsistencies in the data? Can any of the inconsistencies in the data be fixed by removing white spaces at the beginning and end of cells?Once you have answered these questions, run the code cell below to get credit for your work.
###Code
# Check your answer (Run this code cell to receive credit!)
q1.check()
# Line below will give you a hint
#q1.hint()
###Output
_____no_output_____
###Markdown
2) Do some text pre-processing

Convert every entry in the "Graduated from" column in the `professors` DataFrame to remove white spaces at the beginning and end of cells.
###Code
# TODO: Your code here
professors['Graduated from'] = professors['Graduated from'].str.strip()
# Check your answer
q2.check()
# Lines below will give you a hint or solution code
#q2.hint()
#q2.solution()
###Output
_____no_output_____
###Markdown
3) Continue working with countries

In the tutorial, we focused on cleaning up inconsistencies in the "Country" column. Run the code cell below to view the list of unique values that we ended with.
###Code
# get all the unique values in the 'Country' column
countries = professors['Country'].unique()
# sort them alphabetically and then take a closer look
countries.sort()
countries
###Output
_____no_output_____
###Markdown
Take another look at the "Country" column and see if there's any more data cleaning we need to do.It looks like 'usa' and 'usofa' should be the same country. Correct the "Country" column in the dataframe to replace 'usofa' with 'usa'.**Use the most recent version of the DataFrame (with the whitespaces at the beginning and end of cells removed) from question 2.**
###Code
# TODO: Your code here!
matches = fuzzywuzzy.process.extract('usa', countries, limit=10, scorer=fuzzywuzzy.fuzz.token_sort_ratio)
print(matches)
replace_matches_in_column(df=professors, column='Country', string_to_match="usa", min_ratio=64)
# Check your answer
q3.check()
# Lines below will give you a hint or solution code
#q3.hint()
#q3.solution()
###Output
_____no_output_____ |
Pandas/panda_marging_joining_conca.ipynb | ###Markdown
concatenation
###Code
import pandas as pd
# df1, df2 and df3 are assumed to come from earlier (omitted) cells;
# minimal placeholder frames so this cell runs stand-alone:
df1 = pd.DataFrame({'A': ['A0', 'A1'], 'B': ['B0', 'B1']}, index=[0, 1])
df2 = pd.DataFrame({'A': ['A2', 'A3'], 'B': ['B2', 'B3']}, index=[0, 1])
df3 = pd.DataFrame({'A': ['A4', 'A5'], 'B': ['B4', 'B5']}, index=[0, 1])
pd.concat([df1, df2, df3], axis=1)
left = pd.DataFrame({'key': ['K0', 'K1', 'K2', 'K3'],
'A': ['A0', 'A1', 'A2', 'A3'],
'B': ['B0', 'B1', 'B2', 'B3']})
right = pd.DataFrame({'key': ['K0', 'K1', 'K2', 'K3'],
'C': ['C0', 'C1', 'C2', 'C3'],
'D': ['D0', 'D1', 'D2', 'D3']})
left
right
###Output
_____no_output_____
###Markdown
merging
###Code
pd.merge(left,right,how='inner',on='key')
left1 = pd.DataFrame({'key': ['K0', 'K1', 'K2', 'K3'],
'key1': ['K0', 'K1', 'K2', 'K3'],
'A': ['A0', 'A1', 'A2', 'A3'],
'B': ['B0', 'B1', 'B2', 'B3']})
right1 = pd.DataFrame({'key': ['K0', 'K1', 'K2', 'K3'],
'key1': ['K0', 'K1', 'K2', 'K3'],
'C': ['C0', 'C1', 'C2', 'C3'],
'D': ['D0', 'D1', 'D2', 'D3']})
right1
left1
pd.merge(right1,left1,on=['key','key1'])
###Output
_____no_output_____
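###Markdown
merge defaults to an inner join. With the frames above every key matches, so the `how` argument makes no difference; with partially overlapping keys, how='outer' keeps the unmatched rows and fills the gaps with NaN (a hypothetical sketch):
###Code
a = pd.DataFrame({'key': ['K0', 'K1'], 'A': ['A0', 'A1']})
b = pd.DataFrame({'key': ['K1', 'K2'], 'C': ['C1', 'C2']})
pd.merge(a, b, how='outer', on='key')  # K0 and K2 survive, with NaN in the missing columns
###Output
_____no_output_____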
###Markdown
joining
###Code
left = pd.DataFrame({'A': ['A0', 'A1', 'A2'],
'B': ['B0', 'B1', 'B2']},
index=['k0', 'k1', 'k2'])
right = pd.DataFrame({'C': ['C0', 'C1', 'C2'],
'D': ['D0', 'D1', 'D2']},
index=['k0', 'k2', 'k3'])
left
right
left.join(right)
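# join is a left join on the index by default: 'k1' gets NaN for C and D,
# and 'k3' from `right` is dropped. how='outer' keeps both non-matching labels:
left.join(right, how='outer')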
###Output
_____no_output_____ |
Data_Analytics.ipynb | ###Markdown
General Log File
###Code
# Load "general log" file...
general_log = pd.read_json('data/general_log.json')
# ...and show a sample from this data
general_log.sample(10, random_state=RND_SEED)
# Deepening 'user_host' field
print(general_log['user_host'].describe())
# Show all the hosts (the trailing ';' suppresses the cell's final value, a list of Nones)
[print(x) for x in general_log['user_host'].unique()];
###Output
guest[guest] @ [185.9.209.177]
rdsadmin[rdsadmin] @ localhost [127.0.0.1]
rdsadmin[rdsadmin] @ localhost []
[rdsadmin] @ localhost []
[guest] @ [185.9.209.177]
guest[guest] @ [172.31.36.183]
[guest] @ [172.31.36.183]
###Markdown
Seven unique user strings in total have interacted with the DB (non-query events included), with `rdsadmin` appearing many times. In any case there are only two categories, "guest" and "rdsadmin".
###Code
# Deepening 'server_id' field
general_log['server_id'].astype(str).describe()
###Output
_____no_output_____
###Markdown
The server is just one. Comments on the "general_log" file:
- `event_time` shows when the event happened;
- `user_host` is the user (name and address) that caused the event; it may be chosen as a predictor;
- `thread_id` is the thread assigned to the associated process on the server, probably not useful for prediction;
- `server_id` is the server identification name; it looks like it is always the same, so not useful;
- `command_type`: we are interested in the `Query` command type, which is the majority; not a useful field for prediction by itself;
- `argument`: when the event is a Query, it contains a SQL expression, and we should extract more features from it.

Let's see the "slow log" file...

Slow Log File
###Code
# Load "slow log" file...
slow_log = pd.read_json('data/slow_log.json')
# ...and show a sample from this data
slow_log.sample(10, random_state=RND_SEED)
# Deepening 'user_host' field (now in the dataset)
slow_log['user_host'].describe()
slow_log['user_host'].unique()
###Output
_____no_output_____
###Markdown
We should aggregate this into just two categories: `guest` and `admin`.
###Code
# Deepening 'sql_text' field
slow_log['sql_text'].describe()
###Output
_____no_output_____
###Markdown
The `SELECT 1` query is executed more often than the others; let's see whether it has a fixed execution time or whether it varies depending on when it is executed...
###Code
# Query time conversion
def query_time_converer(df):
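    # collapse the parsed query_time (hours, minutes, seconds, µs, ns) into total microseconds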
time = df['query_time'].dt.hour * 3600
time += df['query_time'].dt.minute * 60
time += df['query_time'].dt.second
time += df['query_time'].dt.microsecond * 1e-6
time += df['query_time'].dt.nanosecond * 1e-9
time /= 1e-6 # To microseconds
df['query_time'] = time
# Extract all SELECT 1 timings
select_timings = slow_log[['sql_text', 'query_time']].loc[slow_log['sql_text'] == 'SELECT 1'].copy()
query_time_converer(select_timings)
# Mean execution time of SELECT 1
print('Mean execution time of SELECT 1:', round(select_timings['query_time'].mean(), 2), 'microseconds')
# Standard deviation for execution time of SELECT 1
print('Standard deviation :', round(select_timings['query_time'].std(), 2), 'microseconds')
###Output
Mean execution time of SELECT 1: 1009.65 microseconds
Standard deviation : 3838.58 microseconds
###Markdown
Execution time of such a simple query has a relatively low mean but a high standard deviation, meaning it can vary widely around the mean depending on when it is executed. Comments on the "slow_log" file:
- `start_time` is the time when the timer starts; it may be used as an ordering key when treating the events as a time series;
- `user_host` is the same as in the general log seen before;
- `query_time` is the total time the query takes; this should be the target variable for the model (it needs some conversion);
- `lock_time` is the time the query spends locking resources in the DB, presumably a sub-interval of the query time (?);
- `rows_sent` is the number of rows returned by the query; using it as a predictor may cause data leakage;
- `rows_examined`, similar to `rows_sent`, shows the rows the query iterates over before returning (perhaps while checking some condition); using it for prediction may also lead to data leakage (the model performs well on test, then degenerates in production);
- `db` is the DB name; since all queries are generated I think this field is not important;
- `last_insert_id` is the id of the last row inserted or updated in a table by this query (?);
- `insert_id` is the id of the first row inserted or updated in a table by this query (?);
- `server_id` is the same as in the general log seen before;
- `sql_text` is the same as the `argument` field in the general log when the event is a Query type; it contains the SQL query code;
- `thread_id` is the same as in the general log seen before.

The first impression is that, for the task of predicting query execution time, the "slow_log" file alone is enough, since it contains all the useful information to extract features and train a model. Not only slow queries are recorded but also those with 0 execution time. The number of samples is a bit low, so cross-validation may be necessary for a reliable evaluation. Exploiting `rows_sent` and `rows_examined` is tempting, but these values are known only after the query executes, so in a real context we receive a query with no knowledge of how many rows will be touched until execution ends.

Dataset feature selection/engineering
###Code
# Define the raw dataset...
dataset = slow_log[['start_time', 'user_host', 'sql_text', 'query_time']].copy()
# ...and show a sample
dataset.sample(10, random_state=RND_SEED)
# Sort values by time
dataset.sort_values(by='start_time', inplace=True, ignore_index=True)
dataset.head(5)
# Aggregating user hosts in two categories: guest and admin
dataset['user_host'] = dataset['user_host'].apply(lambda h: 'admin' if 'admin' in h else 'guest')
dataset.head(5)
# Query execution time conversion
query_time_converer(dataset)
dataset.head(5)
###Output
_____no_output_____
###Markdown
Feature extraction
###Code
# All queries to lowercase
dataset['sql_text'] = dataset['sql_text'].apply(lambda q: str.lower(q))
dataset.sample(3, random_state=RND_SEED)
# Categorical features: Tables accessed by query
# Table names has been grabbed from DDL File
cat_patterns = {'use customer': r'customer',
'use lineitem': r'lineitem',
'use nation': r'nation',
'use orders': r'orders',
'use part': r'part',
'use partsupp': r'partsupp',
'use region': r'region',
'use supplier': r'supplier'}
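# note: these are plain substring searches, so r'part' also fires on 'partsupp';
# word boundaries (e.g. r'\bpart\b') would be needed to match exact table names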
for cname, pt in cat_patterns.items():
dataset[cname.replace(' ', '_')] = dataset['sql_text'].apply(lambda q: 'yes' if re.search(pt, q) else 'no')
# Numeric feature: Query number of chars
dataset['charlen'] = dataset['sql_text'].apply(lambda q: len(q))
# Numeric feature: Query number of tokens (this take some seconds)
dataset['num_tokens'] = dataset['sql_text'].apply(lambda q: len(sqlparse.parse(q)[0].tokens))
# Numeric features: Count nested queries or repetition
patterns = {'num functions': r'[a-z]+\(.*\)',
'num select': r'\s?select\s',
'num from': r'\s?from\s',
'num where': r'\s?where\s',
'num join': r'join',
'num order by': r'\sorder by\s'}
for cname, pt in patterns.items():
dataset[cname.replace(' ', '_')] = dataset['sql_text'].apply(lambda q: len(re.findall(pt, q)) )
# Show a dataset sample
dataset.sample(100, random_state=RND_SEED)
###Output
_____no_output_____
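###Markdown
A quick sanity check of the regex counters on a toy query (hypothetical SQL, not taken from the log):
###Code
toy_query = 'select * from orders where o_totalprice > 100 order by o_orderdate'
# count how many times each pattern fires on the toy query
{cname: len(re.findall(pt, toy_query)) for cname, pt in patterns.items()}
###Output
_____no_output_____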
###Markdown
Plotting
###Code
# Plot settings
sns.set_color_codes('bright') # Color palette
sns.set(font_scale=1.2) # Font size
sns.set_style("white") # White background with lines
# Plot relations between numerical features in data, univariate distribution on diagonal.
pplot = sns.pairplot(data=dataset[['query_time', 'charlen', 'num_tokens', 'num_functions', 'num_where']],
diag_kind='kde')
# Plot user_host by query times distribution
ax = sns.catplot(x='user_host', y='query_time', data=dataset, height=5, aspect=2)
###Output
_____no_output_____
###Markdown
It's clear that some guest users are slowing down the DB with particular queries; some queries perform far worse than others.
###Code
ax = sns.catplot(x='num_select', y='query_time', hue='num_where', data=dataset, height=5, aspect=2)
ax = sns.catplot(x='num_select', y='query_time', hue='num_from', data=dataset, height=5, aspect=2)
ax = sns.catplot(x='num_where', y='query_time', hue='num_functions', data=dataset, height=5, aspect=2)
ax = sns.catplot(x='num_functions', y='query_time', data=dataset, height=5, aspect=2)
ax = sns.catplot(x='num_join', y='query_time', data=dataset, height=5, aspect=2)
###Output
_____no_output_____
###Markdown
Save dataset for model training
###Code
dataset.to_csv('queries_dataset.csv', index=False)
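# index=False keeps the row index out of the file, so the dataset can be reloaded
# with pd.read_csv('queries_dataset.csv') without a spurious 'Unnamed: 0' column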
###Output
_____no_output_____
###Markdown
We learn the relationship between voltage and current for a long nichrome wire.
Data preparation
###Code
import pandas as pd
import numpy as np
filepath = './dataset/short_line.csv'
short_df = pd.read_csv(filepath)
short_Voltage = np.array(short_df['Voltage'])
short_Ampare = np.array([0.04, 0.08, 0.11, 0.15])
short_train_input = short_Voltage.reshape(-1, 1)
short_train_target = short_Ampare
###Output
_____no_output_____
###Markdown
Visualizing the experimental data
###Code
import matplotlib.pyplot as plt
plt.scatter(short_Voltage, short_Ampare)
plt.xlabel('Voltage')
plt.ylabel('Ampere')
plt.show()
###Output
_____no_output_____
###Markdown
When we visualize the experimental data, we can see that it is very linear. We will now train on this data and visualize the result in order to analyze the relationship between voltage and current.
Learning the line that represents the relationship between voltage and current for the long nichrome wire
Data analysis
Since the scatter plot above is very linear, we will fit the data with the linear function shown below.
[figure: linear001.png — a screenshot of a linear function equation; base64 image data omitted]
We will train on the data using this linear function.
###Code
from sklearn.linear_model import LinearRegression
short_lr = LinearRegression()
short_lr.fit(short_train_input, short_train_target)
###Output
_____no_output_____
###Markdown
Plotting the learned line
###Code
print(short_lr.coef_, short_lr.intercept_)
plt.scatter(short_Voltage, short_Ampare, color='red')
plt.plot([0, 6], [0*short_lr.coef_ + short_lr.intercept_, 6*short_lr.coef_+short_lr.intercept_], 'r')
plt.xlabel('Voltage')
plt.ylabel('Ampere')
plt.show()
###Output
_____no_output_____
###Markdown
We have successfully learned the relationship between voltage and current for the short nichrome wire. Since the fitted line is the graph of a linear function, it shows that the current is proportional to the voltage. Next we learn the relationship between voltage and current for the long nichrome wire. Data preparation
###Code
filepath = './dataset/long_line.csv'
long_df = pd.read_csv(filepath)
long_Voltage = np.array(long_df['Voltage'])
long_Ampare = np.array([0.08, 0.15, 0.22, 0.30])
long_train_input = long_Voltage.reshape(-1, 1)
long_train_target = long_Ampare
###Output
_____no_output_____
###Markdown
Visualizing the experimental data
###Code
plt.scatter(long_train_input, long_train_target)
plt.xlabel('Voltage')
plt.ylabel('Ampere')
plt.show()
###Output
_____no_output_____
###Markdown
Likewise, this scatter plot is also very linear, so we will again train the model using a linear function. Training the model
###Code
long_lr = LinearRegression()
long_lr.fit(long_train_input, long_train_target)
###Output
_____no_output_____
###Markdown
Visualizing the learned line
###Code
print(long_lr.coef_, long_lr.intercept_)
plt.title('Voltage and Ampere Analysis')
plt.scatter(short_Voltage, short_Ampare, color='red')
plt.scatter(long_Voltage, long_Ampare)
plt.plot([0, 6], [0*short_lr.coef_ + short_lr.intercept_, 6*short_lr.coef_+short_lr.intercept_], 'r')
plt.plot([0, 6], [0*long_lr.coef_ + long_lr.intercept_, 6*long_lr.coef_+long_lr.intercept_])
plt.xlabel('Voltage')
plt.ylabel('Ampere')
plt.show()
###Output
_____no_output_____
###Markdown
From the graph above we can see that in an electric circuit the current is proportional to the voltage, and that at a fixed voltage the current is inversely proportional to the length of the nichrome wire. Predicting the data: we predict the current when the voltage is 20 V.
###Code
long_lr.predict([[20]])
short_lr.predict([[20]])
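# Illustrative check (a sketch): by Ohm's law I = V/R, so each fitted slope
# approximates 1/R (the intercepts are assumed to be ~0). The longer wire
# should yield the larger resistance.
R_short = 1 / short_lr.coef_[0]
R_long = 1 / long_lr.coef_[0]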
###Output
_____no_output_____
###Markdown
Visualizing the fitted lines and the predictions
###Code
plt.title('Voltage and Ampere Analysis')
plt.scatter(short_Voltage, short_Ampare, color='red')
plt.scatter(long_Voltage, long_Ampare)
plt.scatter(20, 0.97833333, marker="^")
plt.scatter(20, 0.485, marker="^")
plt.plot([0, 20], [0*short_lr.coef_ + short_lr.intercept_, 20*short_lr.coef_+short_lr.intercept_], 'r')
plt.plot([0, 20], [0*long_lr.coef_ + long_lr.intercept_, 20*long_lr.coef_+long_lr.intercept_])
plt.xlabel('Voltage')
plt.ylabel('Ampere')
plt.show()
###Output
_____no_output_____ |
prepare_data/explore_and_convert_FDDB.ipynb | ###Markdown
The purpose of this script is to explore the images/annotations of the FDDB dataset. It also converts the face ellipses into face bounding boxes and writes the annotations in JSON format.
###Code
# imports used throughout this notebook
import json
import os
import random
import shutil

import cv2
import numpy as np
import pandas as pd
from PIL import Image, ImageDraw
from tqdm import tqdm

IMAGES_DIR = os.path.expanduser('~/datasets/fddb/originalPics/')
BOXES_DIR = os.path.expanduser('~/datasets/fddb/FDDB-folds/')
RESULT_DIR = os.path.expanduser('data/fddb/val/')
###Output
_____no_output_____
###Markdown
Read data
###Code
# collect paths to all images
all_paths = []
for path, subdirs, files in tqdm(os.walk(IMAGES_DIR)):
for name in files:
all_paths.append(os.path.join(path, name))
metadata = pd.DataFrame(all_paths, columns=['full_path'])
# strip root folder
metadata['path'] = metadata.full_path.apply(lambda x: os.path.relpath(x, IMAGES_DIR))
# all unique endings
metadata.path.apply(lambda x: x.split('.')[-1]).unique()
# number of images
len(metadata)
annotation_files = os.listdir(BOXES_DIR)
annotation_files = [f for f in annotation_files if f.endswith('ellipseList.txt')]
annotation_files = [os.path.join(BOXES_DIR, f) for f in annotation_files]
def ellipse_to_box(major_axis_radius, minor_axis_radius, angle, center_x, center_y):
    # crude approximation: assumes the major axis is close to vertical
    # (angle near +-pi/2), which holds for most upright FDDB face ellipses
    half_h = major_axis_radius * np.sin(-angle)
    half_w = minor_axis_radius * np.sin(-angle)
    xmin, xmax = center_x - half_w, center_x + half_w
    ymin, ymax = center_y - half_h, center_y + half_h
    return xmin, ymin, xmax, ymax
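# An exact axis-aligned bounding box of a rotated ellipse is also possible;
# the sketch below is illustrative and unused here. Note it never produces
# degenerate boxes, so the empty-box filtering and the 5171 - 1 assertion
# further down would no longer hold if it replaced the approximation above.
def ellipse_to_box_exact(a, b, angle, cx, cy):
    half_w = np.sqrt((a * np.cos(angle)) ** 2 + (b * np.sin(angle)) ** 2)
    half_h = np.sqrt((a * np.sin(angle)) ** 2 + (b * np.cos(angle)) ** 2)
    return cx - half_w, cy - half_h, cx + half_w, cy + half_h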
def get_boxes(path):
with open(path, 'r') as f:
content = f.readlines()
content = [s.strip() for s in content]
boxes = {}
num_lines = len(content)
i = 0
name = None
while i < num_lines:
s = content[i]
if 'big/img' in s:
if name is not None:
assert len(boxes[name]) == num_boxes
name = s + '.jpg'
boxes[name] = []
i += 1
num_boxes = int(content[i])
i += 1
else:
numbers = [float(f) for f in s.split(' ')[:5]]
major_axis_radius, minor_axis_radius, angle, center_x, center_y = numbers
xmin, ymin, xmax, ymax = ellipse_to_box(
major_axis_radius, minor_axis_radius,
angle, center_x, center_y
)
if xmin == xmax or ymin == ymax:
num_boxes -= 1
else:
boxes[name].append((
min(xmin, xmax), min(ymin, ymax),
max(xmin, xmax), max(ymin, ymax)
))
i += 1
return boxes
boxes = {}
for p in annotation_files:
boxes.update(get_boxes(p))
# check number of images with annotations
# and number of boxes
# (these values are taken from the official website)
assert len(boxes) == 2845
assert sum(len(b) for b in boxes.values()) == 5171 - 1 # one box is empty
metadata = metadata.loc[metadata.path.apply(lambda x: x in boxes)]
metadata = metadata.reset_index(drop=True)
###Output
_____no_output_____
###Markdown
Show bounding boxes
###Code
def draw_boxes_on_image(path, boxes):
image = Image.open(path)
draw = ImageDraw.Draw(image, 'RGBA')
width, height = image.size
for b in boxes:
xmin, ymin, xmax, ymax = b
fill = (255, 255, 255, 45)
outline = 'red'
draw.rectangle(
[(xmin, ymin), (xmax, ymax)],
fill=fill, outline=outline
)
return image
i = random.randint(0, len(metadata) - 1) # choose a random image
some_boxes = boxes[metadata.path[i]]
draw_boxes_on_image(metadata.full_path[i], some_boxes)
###Output
_____no_output_____
###Markdown
Convert
###Code
def get_annotation(path, name, width, height):
annotation = {
"filename": name,
"size": {"depth": 3, "width": width, "height": height}
}
objects = []
for b in boxes[path]:
xmin, ymin, xmax, ymax = b
objects.append({"bndbox": {"ymin": ymin, "ymax": ymax, "xmax": xmax, "xmin": xmin}, "name": "face"})
annotation["object"] = objects
return annotation
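# Illustrative example of the emitted JSON (the values here are made up):
# {"filename": "2002-07-19_img_130.jpg",
#  "size": {"depth": 3, "width": 450, "height": 323},
#  "object": [{"bndbox": {"ymin": 20.1, "ymax": 120.5,
#                         "xmax": 210.0, "xmin": 140.2}, "name": "face"}]}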
if not os.path.exists(RESULT_DIR):
os.makedirs(RESULT_DIR)
shutil.rmtree(RESULT_DIR, ignore_errors=True)
os.mkdir(RESULT_DIR)
os.mkdir(os.path.join(RESULT_DIR, 'images'))
os.mkdir(os.path.join(RESULT_DIR, 'annotations'))
for T in tqdm(metadata.itertuples()):
# get width and height of an image
image = cv2.imread(T.full_path)
h, w, c = image.shape
assert c == 3
# name of the image
name = '-'.join(T.path.split('/')[:3]) + '_' + T.path.split('/')[-1]
assert name.endswith('.jpg')
# copy the image
shutil.copy(T.full_path, os.path.join(RESULT_DIR, 'images', name))
# save annotation for it
d = get_annotation(T.path, name, w, h)
json_name = name[:-4] + '.json'
json.dump(d, open(os.path.join(RESULT_DIR, 'annotations', json_name), 'w'))
###Output
0it [00:00, ?it/s]
###Markdown
The purpose of this script is to explore the images/annotations of the FDDB dataset. It also converts the face ellipses into face bounding boxes and writes the annotations in JSON format.
###Code
IMAGES_DIR = '/home/gpu2/hdd/dan/FDDB/originalPics/'
BOXES_DIR = '/home/gpu2/hdd/dan/FDDB/FDDB-folds/'
RESULT_DIR = '/home/gpu2/hdd/dan/FDDB/val/'
###Output
_____no_output_____
###Markdown
Read data
###Code
# collect paths to all images
all_paths = []
for path, subdirs, files in tqdm(os.walk(IMAGES_DIR)):
for name in files:
all_paths.append(os.path.join(path, name))
metadata = pd.DataFrame(all_paths, columns=['full_path'])
# strip root folder
metadata['path'] = metadata.full_path.apply(lambda x: os.path.relpath(x, IMAGES_DIR))
# all unique endings
metadata.path.apply(lambda x: x.split('.')[-1]).unique()
# number of images
len(metadata)
annotation_files = os.listdir(BOXES_DIR)
annotation_files = [f for f in annotation_files if f.endswith('ellipseList.txt')]
annotation_files = [os.path.join(BOXES_DIR, f) for f in annotation_files]
def ellipse_to_box(major_axis_radius, minor_axis_radius, angle, center_x, center_y):
half_h = major_axis_radius * np.sin(-angle)
half_w = minor_axis_radius * np.sin(-angle)
xmin, xmax = center_x - half_w, center_x + half_w
ymin, ymax = center_y - half_h, center_y + half_h
return xmin, ymin, xmax, ymax
def get_boxes(path):
with open(path, 'r') as f:
content = f.readlines()
content = [s.strip() for s in content]
boxes = {}
num_lines = len(content)
i = 0
name = None
while i < num_lines:
s = content[i]
if 'big/img' in s:
if name is not None:
assert len(boxes[name]) == num_boxes
name = s + '.jpg'
boxes[name] = []
i += 1
num_boxes = int(content[i])
i += 1
else:
numbers = [float(f) for f in s.split(' ')[:5]]
major_axis_radius, minor_axis_radius, angle, center_x, center_y = numbers
xmin, ymin, xmax, ymax = ellipse_to_box(
major_axis_radius, minor_axis_radius,
angle, center_x, center_y
)
if xmin == xmax or ymin == ymax:
num_boxes -= 1
else:
boxes[name].append((
min(xmin, xmax), min(ymin, ymax),
max(xmin, xmax), max(ymin, ymax)
))
i += 1
return boxes
boxes = {}
for p in annotation_files:
boxes.update(get_boxes(p))
# check number of images with annotations
# and number of boxes
# (these values are taken from the official website)
assert len(boxes) == 2845
assert sum(len(b) for b in boxes.values()) == 5171 - 1 # one box is empty
metadata = metadata.loc[metadata.path.apply(lambda x: x in boxes)]
metadata = metadata.reset_index(drop=True)
###Output
_____no_output_____
###Markdown
Show bounding boxes
###Code
def draw_boxes_on_image(path, boxes):
image = Image.open(path)
draw = ImageDraw.Draw(image, 'RGBA')
width, height = image.size
for b in boxes:
xmin, ymin, xmax, ymax = b
fill = (255, 255, 255, 45)
outline = 'red'
draw.rectangle(
[(xmin, ymin), (xmax, ymax)],
fill=fill, outline=outline
)
return image
i = random.randint(0, len(metadata) - 1) # choose a random image
some_boxes = boxes[metadata.path[i]]
draw_boxes_on_image(metadata.full_path[i], some_boxes)
###Output
_____no_output_____
###Markdown
Convert
###Code
def get_annotation(path, name, width, height):
annotation = {
"filename": name,
"size": {"depth": 3, "width": width, "height": height}
}
objects = []
for b in boxes[path]:
xmin, ymin, xmax, ymax = b
objects.append({"bndbox": {"ymin": ymin, "ymax": ymax, "xmax": xmax, "xmin": xmin}, "name": "face"})
annotation["object"] = objects
return annotation
shutil.rmtree(RESULT_DIR, ignore_errors=True)
os.mkdir(RESULT_DIR)
os.mkdir(os.path.join(RESULT_DIR, 'images'))
os.mkdir(os.path.join(RESULT_DIR, 'annotations'))
for T in tqdm(metadata.itertuples()):
# get width and height of an image
image = cv2.imread(T.full_path)
h, w, c = image.shape
assert c == 3
# name of the image
name = '-'.join(T.path.split('/')[:3]) + '_' + T.path.split('/')[-1]
assert name.endswith('.jpg')
# copy the image
shutil.copy(T.full_path, os.path.join(RESULT_DIR, 'images', name))
# save annotation for it
d = get_annotation(T.path, name, w, h)
json_name = name[:-4] + '.json'
json.dump(d, open(os.path.join(RESULT_DIR, 'annotations', json_name), 'w'))
###Output
_____no_output_____ |
2.Datasets_for_NLP/1_strings_in_python.ipynb | ###Markdown
**Strings declaration**
###Code
# Strings can be specified using single quotes
monty = 'Monty Python'
print(monty)
# ... or double quotes
circus = "Monty Python's Flying Circus"
print(circus)
# If a string contains a single quote, we must backslash-escape the quote
circus = 'Monty Python\'s Flying Circus'
print(circus)
# Sometimes strings go over several lines.
# Python provides us with various ways of entering them:
# a) Using backslash
couplet = "Shall I compare thee to a Summer's day?"\
"Thou are more lovely and more temperate:"
print(couplet)
# b) Using parentheses:
couplet = ("Rough winds do shake the darling buds of May,"
"And Summer's lease hath all too short a date:")
print(couplet)
# c) Using a triple-quoted string (to keep the newlines):
couplet = """Shall I compare thee to a Summer's day?
Thou are more lovely and more temperate:"""
print(couplet)
couplet = '''Rough winds do shake the darling buds of May,
And Summer's lease hath all too short a date:'''
print(couplet)
###Output
Shall I compare thee to a Summer's day?Thou are more lovely and more temperate:
Rough winds do shake the darling buds of May,And Summer's lease hath all too short a date:
Shall I compare thee to a Summer's day?
Thou are more lovely and more temperate:
Rough winds do shake the darling buds of May,
And Summer's lease hath all too short a date:
###Markdown
**Basic operations**
###Code
# Concatenation
display('very' + 'very' + 'very')
display('very' * 4)
# Accessing individual characters
display(monty)
display(monty[0])
display(monty[3])
display(monty[5])
# -1 is the index of the last character
display(monty[-1])
# -N is the index of the last N character
display(monty[-2])
# Iterate characters in strings
sent = 'colorless green ideas sleep furiously'
for char in sent:
print(char, end=' ')
###Output
c o l o r l e s s g r e e n i d e a s s l e e p f u r i o u s l y
###Markdown
**Substrings**
###Code
# We use [ ] for slides (e.g. for substrings or sublists)
# It starts at the first index but finishes one before the end index
display(monty)
display(monty[6:10])
display(monty[-4:-1])
# If we omit the first value, the substring begins at the start of the string or list
display(monty[:5])
# If we omit the second value, the substring continues to the end of the string or list
display(monty[6:])
###Output
_____no_output_____
###Markdown
**Basic search**
###Code
# We can check if a string is contained in other using the in operator:
phrase = 'And now for something completely different'
if 'thing' in phrase:
print('found "thing"')
# We can also find the position of a string within other using find():
monty.find('Python')
###Output
found "thing"
###Markdown
**Other useful methods**

|Method | Functionality|
|------|------|
|`s.find(t)` | index of first instance of string t inside `s` (`-1` if not found)|
|`s.rfind(t)` | index of last instance of string t inside `s` (`-1` if not found)|
|`s.index(t)` | like `s.find(t)` except it raises `ValueError` if not found|
|`s.rindex(t)` | like `s.rfind(t)` except it raises `ValueError` if not found|
|`s.join(text)` | combine the words of the text into a string using `s` as the glue|
|`s.split(t)` | split `s` into a list wherever a `t` is found (whitespace by default)|
|`s.splitlines()` | split `s` into a list of strings, one per line|
|`s.lower()` | a lowercased version of the string `s`|
|`s.upper()` | an uppercased version of the string `s`|
|`s.title()` | a titlecased version of the string `s`|
|`s.strip()` | a copy of `s` without leading or trailing whitespace|
|`s.replace(t, u)` | replace instances of `t` with `u` inside `s`|
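A quick demonstration of a few of these methods (an illustrative sketch):

###Code
s = '  Monty Python  '
stripped = s.strip()               # 'Monty Python'
words = stripped.split()           # ['Monty', 'Python']
glued = '-'.join(words)            # 'Monty-Python'
found = stripped.find('Python')    # 6
replaced = stripped.replace('Python', 'Circus')  # 'Monty Circus'
###Output
_____no_output_____
###Markdown
**Unicode**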
###Code
print(ord('ñ'))
# 241 in decimal is the same than 0x00F1 in hexadecimal
n_tilde = '\u00F1'
print(n_tilde)
python_emoji = '\U0001F40D'
print('Learning', python_emoji)
###Output
241
ñ
Learning 🐍
###Markdown
**Regular expressions**
###Code
import re
txt = "The rain in Spain"
x = re.findall("ai", txt)
print(x)
# r prefix to a string indicates that the string is a raw string
# (i.e. backslashes \ should be treated literally and not as escape characters)
x = re.search(r"\bS\w+", txt) # Match object
# The Match object has properties and methods used to retrieve information about the search, and the result:
# .span() returns a tuple containing the start-, and end positions of the match
# .string returns the string passed into the function
# .group() returns the part of the string where there was a match
print(x.span())
print(x.string)
print(x.group())
###Output
['ai', 'ai']
(12, 17)
The rain in Spain
Spain
|
LS_DS_141_Statistics_Probability_and_Inference_Ale_Ruperti.ipynb | ###Markdown
Lambda School Data Science Module 141 Statistics, Probability, and Inference Prepare - examine what's available in SciPyAs we delve into statistics, we'll be using more libraries - in particular the [stats package from SciPy](https://docs.scipy.org/doc/scipy/reference/tutorial/stats.html).
###Code
from scipy import stats
dir(stats)
# As usual, lots of stuff here! There's our friend, the normal distribution
norm = stats.norm()
print(norm.mean())
print(norm.std())
print(norm.var())
# And a new friend - t
t1 = stats.t(5) # 5 is df "shape" parameter
print(t1.mean())
print(t1.std())
print(t1.var())
t1.std()**2
###Output
_____no_output_____
###Markdown
![T distribution PDF with different shape parameters](https://upload.wikimedia.org/wikipedia/commons/4/41/Student_t_pdf.svg)*(Picture from [Wikipedia](https://en.wikipedia.org/wiki/Student's_t-distribution/media/File:Student_t_pdf.svg))*The t-distribution is "normal-ish" - the larger the parameter (which reflects its degrees of freedom - more input data/features will increase it), the closer to true normal.
###Code
t2 = stats.t(30) # Will be closer to normal
print(t2.mean())
print(t2.std())
print(t2.var())
###Output
0.0
1.0350983390135313
1.0714285714285714
###Markdown
Why is it different from normal? To better reflect the tendencies of small data and situations with unknown population standard deviation. In other words, the normal distribution is still the nice pure ideal in the limit (thanks to the central limit theorem), but the t-distribution is much more useful in many real-world situations.History sidenote - this is "Student":![William Sealy Gosset](https://upload.wikimedia.org/wikipedia/commons/4/42/William_Sealy_Gosset.jpg)*(Picture from [Wikipedia](https://en.wikipedia.org/wiki/File:William_Sealy_Gosset.jpg))*His real name is William Sealy Gosset, and he published under the pen name "Student" because he was not an academic. He was a brewer, working at Guinness and using trial and error to determine the best ways to yield barley. He's also proof that, even 100 years ago, you don't need official credentials to do real data science! Live Lecture - let's perform and interpret a t-testWe'll generate our own data, so we can know and alter the "ground truth" that the t-test should find. We will learn about p-values and how to interpret "statistical significance" based on the output of a hypothesis test.
###Code
# TODO - during class, but please help!
survey_data = [0, 1, 0, 1, 0, 0, 0, 1, 1, 0, 0, 1, 1, 1, 1, 1, 1, 1,
0, 0, 1, 1, 1, 1, 1, 1, 1, 1, 1, 0, 1, 1, 1, 1, 1, 1,
1, 0, 1, 0, 1, 1, 0, 1, 0, 1, 1, 0, 0, 0]
import numpy as np
import pandas as pd
df = pd.DataFrame(survey_data)
df.describe()
df.plot.hist()
# Now with confidence!
import scipy
scipy.stats.ttest_1samp(survey_data, 0.5)
# the t-statistic is the ratio of the departure of the estimated value of a
# parameter from its hypothesized value to its standard error
# We want to calculate: tstat = 2.364321853156195
sample_stderr = 0.478518 / np.sqrt(len(survey_data))
sample_mean = 0.660000
null_hypothesis_mean = 0.5
t_stat = (sample_mean - null_hypothesis_mean) / sample_stderr
print(t_stat)
len(survey_data)
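# Illustrative check (a sketch): recover the two-sided p-value from the
# t statistic via the t distribution with n - 1 degrees of freedom;
# it should match the pvalue reported by ttest_1samp above
p_value = 2 * scipy.stats.t.sf(t_stat, df=len(survey_data) - 1)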
# Science! Reproducibility...
import random
def make_soda_data(n=50):
return pd.DataFrame([random.randint(0, 1) for _ in range(n)])
make_soda_data().describe()
t_statistics = []
n_experiments = 10000
for _ in range(n_experiments):
df = make_soda_data()
ttest = scipy.stats.ttest_1samp(df, 0.5)
t_statistics.append(ttest.statistic)
pd.DataFrame(t_statistics).describe()
###Output
_____no_output_____
###Markdown
Assignment - apply the t-test to real dataYour assignment is to determine which issues have "statistically significant" differences between political parties in this [1980s congressional voting data](https://archive.ics.uci.edu/ml/datasets/Congressional+Voting+Records). The data consists of 435 instances (one for each congressperson), a class (democrat or republican), and 16 binary attributes (yes or no for voting for or against certain issues). Be aware - there are missing values!Your goals:1. Load and clean the data (or determine the best method to drop observations when running tests)2. Using hypothesis testing, find an issue that democrats support more than republicans with p < 0.013. Using hypothesis testing, find an issue that republicans support more than democrats with p < 0.014. Using hypothesis testing, find an issue where the difference between republicans and democrats has p > 0.1 (i.e. there may not be much of a difference)Note that this data will involve *2 sample* t-tests, because you're comparing averages across two groups (republicans and democrats) rather than a single group against a null hypothesis.Stretch goals:1. Refactor your code into functions so it's easy to rerun with arbitrary variables2. Apply hypothesis testing to your personal project data (for the purposes of this notebook you can type a summary of the hypothesis you formed and tested)
###Code
### 1) LOAD AND CLEAN THE DATA ###
data = pd.read_csv('https://archive.ics.uci.edu/ml/machine-learning-databases/voting-records/house-votes-84.data',
names = ['Party', 'Vote 1', 'Vote 2', 'Vote 3','Vote 4','Vote 5','Vote 6','Vote 7','Vote 8','Vote 9','Vote 10','Vote 11','Vote 12','Vote 13','Vote 14','Vote 15', 'Vote 16'])
data.head()
# Loaded data and renamed columns along attribute info provided with data set.
# We have the voting record of all 435 congressmen on 16 issues, delineated by the party affiliation of each congressman.
# Missing values are most likely abstentions: The congressman/woman abstained from voting on the issue.
# In order to count them, we need to count the '?' since pandas doesn't treat them as NaNs:
attributes = data.apply(pd.value_counts)
attributes.head(1)
# Well there's a TON of abstentions. No surprise there.
# In order to calculate t tests later on I'm going to convert votes from strings into integers
# -1 for N, 0, for ?, and 1 for Y.
data.replace(to_replace = 'n', value = -1, inplace=True)
data.replace(to_replace = 'y', value = 1, inplace=True)
data.replace(to_replace = '?', value = 0, inplace=True)
data.head()
### 2) Using hypothesis testing, find an issue that democrats support more than republicans with p < 0.01 ###
# Let's start by splitting up data into 2 dataframes. One for democrats and another for republicans.
democrats = data.loc[data['Party']=='democrat']
republicans = data.loc[data['Party']=='republican']
republicans.head()
democrats.describe()
republicans.describe()
mean_votes = pd.DataFrame(
{'Democrats': democrats.mean(),
'Republicans': republicans.mean()
})
mean_votes
###Output
_____no_output_____
###Markdown
**Just eye-balling the means, we can see that Vote 3 is strongly favored by Democrats over Republicans.**
###Code
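# A sketch for this empty cell (it assumes the -1/0/1 vote encoding above):
# a two-sample t-test comparing the parties on Vote 3
t_stat, p_value = stats.ttest_ind(democrats['Vote 3'], republicans['Vote 3'])
# a p-value below 0.01 would let us reject the null hypothesis of equal support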
###Output
_____no_output_____ |
03 - GridSearchCV - LGBM.ipynb | ###Markdown
03 - GridSearchCV - LGBM Imports
###Code
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
sns.set(style="white")
###Output
_____no_output_____
###Markdown
Constants
###Code
n_components = 1000
models_folder = "models/"
train_data_fn = models_folder+'train_data.pkl'
target_fn = models_folder+'target.pkl'
test_data_fn = models_folder+'test_data.pkl'
weight_multiplier_fn = models_folder+"weight_multiplier.pkl"
###Output
_____no_output_____
###Markdown
Functions
###Code
import os.path
from sklearn.externals import joblib
def Load(filename):
if os.path.isfile(filename):
return joblib.load(filename)
def Save(obj, filename):
joblib.dump(obj, filename)
###Output
_____no_output_____
###Markdown
Loading data
###Code
import scipy
data = scipy.sparse.load_npz("train_sparse_matrix_after_scale.npz")
target = Load(target_fn)
weight_multiplier = Load(weight_multiplier_fn)
###Output
_____no_output_____
###Markdown
Splitting dataset
###Code
from sklearn.model_selection import train_test_split
X_train, X_validation, Y_train, Y_validation = train_test_split(data, target.ravel(), train_size=0.8, random_state=42)
###Output
/home/aavdeev/anaconda3/lib/python3.6/site-packages/sklearn/model_selection/_split.py:2026: FutureWarning: From version 0.21, test_size will always complement train_size unless both are specified.
FutureWarning)
###Markdown
LightGBM Classifier
###Code
import lightgbm as lgbm
import re
tuned_parameters = {
    'num_leaves': [50,1000,10000],
'max_depth':[10,20,30,40],
'min_child_samples':[30,50,100],
'max_bin':[50,100,200],
'subsample':[0.1,0.4,0.7],
'subsample_freq':[2,30,100],
'colsample_bytree':[0.2,0.3,0.7],
'min_child_weight':[2,3,6],
'subsample_for_bin':[10,100,200],
'min_split_gain':[1.1,2.0,10.0],
'reg_alpha':[2,3,5,7,8],
'reg_lambda':[0,0.2,0.8],
'metric':['auc'],
'learning_rate':[0.05,0.1,0.005],
'objective':['binary'],
'scale_pos_weight':[1,weight_multiplier,1/weight_multiplier],
}
%%time
from sklearn.model_selection import GridSearchCV,RandomizedSearchCV
clf = RandomizedSearchCV(lgbm.LGBMClassifier(nthread=8, verbose_eval=32),
tuned_parameters,
cv=4,
n_iter=100,
scoring='roc_auc',
random_state=42,
verbose=2)
%%time
clf.fit(X_train, Y_train)
def report(results, n_top=3):
for i in range(1, n_top + 1):
candidates = np.flatnonzero(results['rank_test_score'] == i)
for candidate in candidates:
print("Model with rank: {0}".format(i))
print("Mean validation score: {0:.3f} (std: {1:.3f})".format(
results['mean_test_score'][candidate],
results['std_test_score'][candidate]))
print("Parameters: {0}".format(results['params'][candidate]))
print("")
print("RandomizedSearchCV")
report(clf.cv_results_)
params = clf.best_params_
# params = {'subsample_freq': 2, 'subsample_for_bin': 100, 'subsample': 0.7, 'scale_pos_weight': 1, 'reg_lambda': 0.2, 'reg_alpha': 7, 'objective': 'binary', 'num_leaves': 50, 'min_split_gain': 2.0, 'min_child_weight': 3, 'min_child_samples': 100, 'metric': 'auc', 'max_depth': 20, 'max_bin': 100, 'learning_rate': 0.1, 'colsample_bytree': 0.7}
evals_results = {}
num_boost_round=3000
early_stopping_rounds=200
feval=None
# construct the LightGBM Dataset objects consumed by lgbm.train below
d_train = lgbm.Dataset(X_train, label=Y_train)
d_valid = lgbm.Dataset(X_validation, label=Y_validation)
model = lgbm.train(params,
d_train,
valid_sets=[d_train, d_valid],
valid_names=['train','valid'],
evals_result=evals_results,
num_boost_round=num_boost_round,
early_stopping_rounds=early_stopping_rounds,
verbose_eval=10,
feval=feval)
n_estimators = model.best_iteration
print("\nModel Report")
print("n_estimators : ", n_estimators)
print("AUC"+":", evals_results['valid']['auc'][n_estimators-1])
from sklearn.metrics import roc_auc_score
predicted = model.predict(X_validation)
print("ROC AUC score:",roc_auc_score(Y_validation, predicted))
Save(model,"lgbm_model.pkl")
###Output
_____no_output_____
###Markdown
Test Data
###Code
test_data = scipy.sparse.load_npz("test_sparse_matrix_after_scale.npz")
Y_test = model.predict(test_data, num_iteration=model.best_iteration)
print(Y_test.max())
print(Y_test.mean())
###Output
_____no_output_____
###Markdown
Saving test predictions
###Code
predictions = pd.DataFrame(Y_test)
predictions.to_csv("solution_lgbm.csv",header=None, index=None)
###Output
_____no_output_____ |
nbs/02-ppo.ipynb | ###Markdown
PPO for transformer models> A Pytorch implementation of Proximal Policy Optimization for transformer models. This follows the language model approach proposed in the paper ["Fine-Tuning Language Models from Human Preferences"](https://arxiv.org/pdf/1909.08593.pdf) and is similar to the [original implementation](https://github.com/openai/lm-human-preferences). The two main differences are 1) the method is implemented in Pytorch and 2) works with the `transformer` library by Hugging Face.
###Code
# default_exp ppo
# export
import numpy as np
import torch.nn.functional as F
from torch.optim import Adam
import torch
import collections
import time
import random
from trl_custom.core import (logprobs_from_logits,
whiten,
clip_by_value,
entropy_from_logits,
flatten_dict,
average_torch_dicts,
stats_to_np,
stack_dicts,
add_suffix)
###Output
_____no_output_____
###Markdown
KL-controllersTo ensure that the learned policy does not deviate too much from the original language model, the KL divergence between the policy and a reference policy (the language model before PPO training) is used as an additional reward signal. Large KL-divergences are punished and staying close to the reference is rewarded.Two controllers are presented in the paper: an adaptive log-space proportional controller and a fixed controller.
###Code
# exports
class AdaptiveKLController:
"""
Adaptive KL controller described in the paper:
https://arxiv.org/pdf/1909.08593.pdf
"""
def __init__(self, init_kl_coef, target, horizon):
self.value = init_kl_coef
self.target = target
self.horizon = horizon
def update(self, current, n_steps):
target = self.target
proportional_error = np.clip(current / target - 1, -0.2, 0.2)
mult = 1 + proportional_error * n_steps / self.horizon
self.value *= mult
# exports
class FixedKLController:
"""Fixed KL controller."""
def __init__(self, kl_coef):
self.value = kl_coef
def update(self, current, n_steps):
pass
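# Illustrative usage (a sketch): the adaptive controller raises the KL
# coefficient when the measured KL overshoots the target and lowers it
# when the policy stays too close to the reference.
ctl = AdaptiveKLController(init_kl_coef=0.2, target=6.0, horizon=10000)
ctl.update(current=12.0, n_steps=256)  # KL above target -> ctl.value grows
ctl.update(current=1.0, n_steps=256)   # KL below target -> ctl.value shrinks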
# exports
class PPOTrainer:
"""
The PPO_trainer uses Proximal Policy Optimization to optimise language models.
"""
default_params = {
"lr": 1.41e-5,
"adap_kl_ctrl": True,
"init_kl_coef":0.2,
"target": 6,
"horizon":10000,
"gamma":1,
"lam":0.95,
"cliprange": .2,
"cliprange_value":.2,
"vf_coef":.1,
"batch_size": 256,
"forward_batch_size": 16,
"ppo_epochs": 4,
}
def __init__(self, policy_model, ref_model, value_model, **ppo_params):
"""
Initialize PPOTrainer.
Args:
            policy_model (torch.model): Hugging Face transformer policy model with value head
            ref_model (torch.model): Hugging Face transformer reference model used for KL penalty
            value_model (torch.model): Hugging Face transformer model used for the value estimates
ppo_params (dict or None): PPO parameters for training. Can include following keys:
'lr' (float): Adam learning rate, default: 1.41e-5
'batch_size' (int): Number of samples per optimisation step, default: 256
'forward_batch_size' (int): Number of samples forward passed through model at a time, default: 16
'ppo_epochs' (int): Number of optimisation epochs per batch of samples, default: 4
                'gamma' (float): Gamma parameter for advantage calculation, default: 1.
                'lam' (float): Lambda parameter for advantage calculation, default: 0.95
'cliprange_value' (float): Range for clipping values in loss calculation, default: 0.2
'cliprange' (float): Range for clipping in PPO policy gradient loss, default: 0.2
'vf_coef' (float): Scaling factor for value loss, default: 0.1
'adap_kl_ctrl' (bool): Use adaptive KL control, otherwise linear, default: True
'init_kl_coef' (float): Initial KL penalty coefficient (used for adaptive and linear control), default: 0.2
'target' (float): Target KL value for adaptive KL control, default: 6.0
'horizon' (float): Horizon for adaptive KL control, default: 10000
"""
self.ppo_params = self.default_params
self.ppo_params.update(ppo_params)
self.ref_model = ref_model
self.policy_model = policy_model
self.value_model = value_model
self.policy_optimizer = Adam(policy_model.parameters(), lr=self.ppo_params['lr'])
self.value_optimizer = Adam(value_model.parameters(), lr=self.ppo_params['lr'])
if self.ppo_params['adap_kl_ctrl']:
self.kl_ctl = AdaptiveKLController(self.ppo_params['init_kl_coef'],
self.ppo_params['target'],
self.ppo_params['horizon'])
else:
self.kl_ctl = FixedKLController(self.ppo_params['init_kl_coef'])
def step(self, query, response, scores):
"""
Run a PPO optimisation step.
args:
query (torch.tensor): tensor containing the encoded queries, shape [batch_size, query_length]
response (torch.tensor): tensor containing the encoded responses, shape [batch_size, response_length]
scores (torch.tensor): tensor containing the scores, shape [batch_size]
returns:
train_stats (dict): a summary of the training statistics
"""
bs = self.ppo_params['batch_size']
timing = dict()
t0 = time.time()
gen_len = response.shape[1]
model_input = torch.cat((query, response), axis=1)
t = time.time()
logprobs, ref_logprobs, values = self.batched_forward_pass(model_input, gen_len)
timing['time/ppo/forward_pass'] = time.time()-t
t = time.time()
rewards, non_score_reward, kl_coef = self.compute_rewards(scores, logprobs, ref_logprobs)
timing['time/ppo/compute_rewards'] = time.time()-t
t = time.time()
all_stats = []
idxs = list(range(bs))
for _ in range(self.ppo_params['ppo_epochs']):
random.shuffle(idxs)
for i in range(bs):
idx = idxs[i]
train_stats = self.train_minibatch(logprobs[idx:idx+1], values[idx:idx+1],
rewards[idx:idx+1],
response[idx:idx+1], model_input[idx:idx+1])
all_stats.append(train_stats)
timing['time/ppo/optimize_step'] = time.time()-t
t = time.time()
train_stats = stack_dicts(all_stats)
# reshape advantages/ratios such that they are not averaged.
train_stats['policy/advantages'] = torch.flatten(train_stats['policy/advantages']).unsqueeze(0)
train_stats['policy/ratio'] = torch.flatten(train_stats['policy/ratio']).unsqueeze(0)
stats = self.record_step_stats(scores=scores, logprobs=logprobs, ref_logprobs=ref_logprobs,
non_score_reward=non_score_reward, train_stats=train_stats,
kl_coef=kl_coef)
stats = stats_to_np(stats)
timing['time/ppo/calc_stats'] = time.time()-t
self.kl_ctl.update(stats['objective/kl'], self.ppo_params['batch_size'])
timing['time/ppo/total'] = time.time()-t0
stats.update(timing)
return stats
def batched_forward_pass(self, model_input, gen_len):
"""Calculate model outputs in multiple batches."""
bs = self.ppo_params['batch_size']
fbs = self.ppo_params['forward_batch_size']
logprobs = []
ref_logprobs = []
values = []
for i in range(int(self.ppo_params['batch_size']/fbs)):
m_input = model_input[i*fbs:(i+1)*fbs]
logits, _, _ = self.policy_model(m_input)
_, _, v = self.value_model(m_input)
ref_logits, _, _ = self.ref_model(m_input)
values.append(v[:, -gen_len-1:-1].detach())
logprobs.append(logprobs_from_logits(logits[:,:-1,:], m_input[:,1:])[:, -gen_len:].detach())
ref_logprobs.append(logprobs_from_logits(ref_logits[:,:-1,:], m_input[:,1:])[:, -gen_len:].detach())
return torch.cat(logprobs), torch.cat(ref_logprobs), torch.cat(values)
def train_minibatch(self, logprobs, values, rewards, response, model_input):
"""Train one PPO minibatch"""
loss_p, train_stats = self.loss_policy(logprobs, values, rewards, response, model_input)
loss_v = self.loss_value(values, rewards, response, model_input)
self.policy_optimizer.zero_grad()
self.value_optimizer.zero_grad()
loss_p.backward()
loss_v.backward()
self.policy_optimizer.step()
self.value_optimizer.step()
return train_stats
def compute_rewards(self, scores, logprobs, ref_logprobs):
"""Compute per token rewards from scores and KL-penalty."""
kl = logprobs - ref_logprobs
non_score_reward = -self.kl_ctl.value * kl
rewards = non_score_reward.clone().detach()
rewards[:, -1] += scores
return rewards, non_score_reward, self.kl_ctl.value
def loss_value(self, values, rewards, response, model_input):
"""Calculate value loss"""
lastgaelam = 0
advantages_reversed = []
gen_len = response.shape[1]
for t in reversed(range(gen_len)):
nextvalues = values[:, t + 1] if t < gen_len - 1 else 0.0
delta = rewards[:, t] + self.ppo_params['gamma'] * nextvalues - values[:, t]
lastgaelam = delta + self.ppo_params['gamma'] * self.ppo_params['lam'] * lastgaelam
advantages_reversed.append(lastgaelam)
advantages = torch.stack(advantages_reversed[::-1]).transpose(0, 1)
returns = advantages + values
advantages = whiten(advantages)
advantages = advantages.detach()
logits, _, _ = self.policy_model(model_input)
_, _, vpred = self.value_model(model_input)
logprob = logprobs_from_logits(logits[:,:-1,:], model_input[:, 1:])
#only the generation part of the values/logprobs is needed
logprob, vpred = logprob[:, -gen_len:], vpred[:,-gen_len-1:-1]
vpredclipped = clip_by_value(vpred,
values - self.ppo_params["cliprange_value"],
values + self.ppo_params["cliprange_value"])
vf_losses1 = (vpred - returns)**2
vf_losses2 = (vpredclipped - returns)**2
vf_loss = .5 * torch.mean(torch.max(vf_losses1, vf_losses2))
return self.ppo_params['vf_coef'] * vf_loss
def loss_policy(self, old_logprobs, values, rewards, response, model_input):
"""Calculate policy loss."""
lastgaelam = 0
advantages_reversed = []
gen_len = response.shape[1]
for t in reversed(range(gen_len)):
nextvalues = values[:, t + 1] if t < gen_len - 1 else 0.0
delta = rewards[:, t] + self.ppo_params['gamma'] * nextvalues - values[:, t]
lastgaelam = delta + self.ppo_params['gamma'] * self.ppo_params['lam'] * lastgaelam
advantages_reversed.append(lastgaelam)
advantages = torch.stack(advantages_reversed[::-1]).transpose(0, 1)
returns = advantages + values
advantages = whiten(advantages)
advantages = advantages.detach()
logits, _, _ = self.policy_model(model_input)
_, _, vpred = self.value_model(model_input)
logprob = logprobs_from_logits(logits[:,:-1,:], model_input[:, 1:])
#only the generation part of the values/logprobs is needed
logprob, vpred = logprob[:, -gen_len:], vpred[:,-gen_len-1:-1]
vpredclipped = clip_by_value(vpred,
values - self.ppo_params["cliprange_value"],
values + self.ppo_params["cliprange_value"])
vf_losses1 = (vpred - returns)**2
vf_losses2 = (vpredclipped - returns)**2
vf_loss = .5 * torch.mean(torch.max(vf_losses1, vf_losses2))
vf_clipfrac = torch.mean(torch.gt(vf_losses2, vf_losses1).double())
ratio = torch.exp(logprob - old_logprobs)
pg_losses = -advantages * ratio
pg_losses2 = -advantages * torch.clamp(ratio,
1.0 - self.ppo_params['cliprange'],
1.0 + self.ppo_params['cliprange'])
pg_loss = torch.mean(torch.max(pg_losses, pg_losses2))
pg_clipfrac = torch.mean(torch.gt(pg_losses2, pg_losses).double())
entropy = torch.mean(entropy_from_logits(logits))
approxkl = .5 * torch.mean((logprob - old_logprobs)**2)
policykl = torch.mean(logprob - old_logprobs)
return_mean, return_var = torch.mean(returns), torch.var(returns)
value_mean, value_var = torch.mean(values), torch.var(values)
stats = dict(
loss=dict(policy=pg_loss, value=vf_loss),
policy=dict(entropy=entropy, approxkl=approxkl,policykl=policykl, clipfrac=pg_clipfrac,
advantages=advantages, advantages_mean=torch.mean(advantages), ratio=ratio),
returns=dict(mean=return_mean, var=return_var),
val=dict(vpred=torch.mean(vpred), error=torch.mean((vpred - returns) ** 2),
clipfrac=vf_clipfrac, mean=value_mean, var=value_var),
)
return pg_loss, flatten_dict(stats)
def record_step_stats(self, kl_coef, **data):
"""Record training step statistics."""
kl = data['logprobs'] - data['ref_logprobs']
mean_kl = torch.mean(torch.sum(kl, axis=-1))
mean_entropy = torch.mean(torch.sum(-data['logprobs'], axis=1))
mean_non_score_reward =torch.mean(torch.sum(data['non_score_reward'], axis=1))
stats = {
'objective/kl': mean_kl,
'objective/kl_dist': kl,
'objective/logprobs': data['logprobs'],
'objective/ref_logprobs': data['ref_logprobs'],
'objective/kl_coef': kl_coef,
'objective/entropy': mean_entropy,
'ppo/mean_non_score_reward': mean_non_score_reward,
}
for k, v in data['train_stats'].items():
stats[f'ppo/{k}'] = torch.mean(v, axis=0)
stats['ppo/val/var_explained'] = 1 - stats['ppo/val/error'] / stats['ppo/returns/var']
return stats
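# Minimal usage sketch (the value-head model class named here is assumed
# and is not defined in this notebook):
#
#   policy = GPT2HeadWithValueModel.from_pretrained('gpt2')
#   value = GPT2HeadWithValueModel.from_pretrained('gpt2')
#   ref = GPT2HeadWithValueModel.from_pretrained('gpt2')
#   ppo_trainer = PPOTrainer(policy, ref, value, batch_size=16, forward_batch_size=4)
#   query = torch.randint(0, 50257, (16, 8))      # encoded prompts
#   response = torch.randint(0, 50257, (16, 16))  # generated continuations
#   scores = torch.rand(16)                       # rewards, e.g. from a reward model
#   stats = ppo_trainer.step(query, response, scores)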
###Output
_____no_output_____
###Markdown
PPO for transformer models> A Pytorch implementation of Proximal Policy Optimization for transformer models. This follows the language model approach proposed in the paper ["Fine-Tuning Language Models from Human Preferences"](https://arxiv.org/pdf/1909.08593.pdf) and is similar to the [original implementation](https://github.com/openai/lm-human-preferences). The two main differences are 1) the method is implemented in Pytorch and 2) works with the `transformer` library by Hugging Face.
###Code
# default_exp ppo
# export
import numpy as np
import torch.nn.functional as F
from torch.optim import Adam
import torch
import collections
import time
import random
from trl.core import (logprobs_from_logits,
whiten,
clip_by_value,
entropy_from_logits,
flatten_dict,
average_torch_dicts,
stats_to_np,
stack_dicts,
add_suffix)
###Output
_____no_output_____
###Markdown
KL-controllersTo ensure that the learned policy does not deviate too much from the original language model, the KL divergence between the policy and a reference policy (the language model before PPO training) is used as an additional reward signal. Large KL-divergences are punished and staying close to the reference is rewarded.Two controllers are presented in the paper: an adaptive log-space proportional controller and a fixed controller.
###Code
# exports
class AdaptiveKLController:
"""
Adaptive KL controller described in the paper:
https://arxiv.org/pdf/1909.08593.pdf
"""
def __init__(self, init_kl_coef, target, horizon):
self.value = init_kl_coef
self.target = target
self.horizon = horizon
def update(self, current, n_steps):
target = self.target
proportional_error = np.clip(current / target - 1, -0.2, 0.2)
mult = 1 + proportional_error * n_steps / self.horizon
self.value *= mult
# exports
class FixedKLController:
"""Fixed KL controller."""
def __init__(self, kl_coef):
self.value = kl_coef
def update(self, current, n_steps):
pass
# exports
class PPOTrainer:
"""
The PPO_trainer uses Proximal Policy Optimization to optimise language models.
"""
default_params = {
"lr": 1.41e-5,
"adap_kl_ctrl": True,
"init_kl_coef":0.2,
"target": 6,
"horizon":10000,
"gamma":1,
"lam":0.95,
"cliprange": .2,
"cliprange_value":.2,
"vf_coef":.1,
"batch_size": 256,
"forward_batch_size": 16,
"ppo_epochs": 4,
}
def __init__(self, model, ref_model, **ppo_params):
"""
Initialize PPOTrainer.
Args:
model (torch.model): Hugging Face transformer GPT2 model with value head
            ref_model (torch.model): Hugging Face transformer GPT2 reference model used for KL penalty
ppo_params (dict or None): PPO parameters for training. Can include following keys:
'lr' (float): Adam learning rate, default: 1.41e-5
'batch_size' (int): Number of samples per optimisation step, default: 256
'forward_batch_size' (int): Number of samples forward passed through model at a time, default: 16
'ppo_epochs' (int): Number of optimisation epochs per batch of samples, default: 4
                'gamma' (float): Gamma parameter for advantage calculation, default: 1.
                'lam' (float): Lambda parameter for advantage calculation, default: 0.95
'cliprange_value' (float): Range for clipping values in loss calculation, default: 0.2
'cliprange' (float): Range for clipping in PPO policy gradient loss, default: 0.2
'vf_coef' (float): Scaling factor for value loss, default: 0.1
'adap_kl_ctrl' (bool): Use adaptive KL control, otherwise linear, default: True
'init_kl_coef' (float): Initial KL penalty coefficient (used for adaptive and linear control), default: 0.2
'target' (float): Target KL value for adaptive KL control, default: 6.0
'horizon' (float): Horizon for adaptive KL control, default: 10000
"""
self.ppo_params = self.default_params
self.ppo_params.update(ppo_params)
self.ref_model = ref_model
self.model = model
self.optimizer = Adam(model.parameters(), lr=self.ppo_params['lr'])
if self.ppo_params['adap_kl_ctrl']:
self.kl_ctl = AdaptiveKLController(self.ppo_params['init_kl_coef'],
self.ppo_params['target'],
self.ppo_params['horizon'])
else:
self.kl_ctl = FixedKLController(self.ppo_params['init_kl_coef'])
def step(self, query, response, scores):
"""
Run a PPO optimisation step.
args:
query (torch.tensor): tensor containing the encoded queries, shape [batch_size, query_length]
response (torch.tensor): tensor containing the encoded responses, shape [batch_size, response_length]
scores (torch.tensor): tensor containing the scores, shape [batch_size]
returns:
train_stats (dict): a summary of the training statistics
"""
bs = self.ppo_params['batch_size']
timing = dict()
t0 = time.time()
gen_len = response.shape[1]
model_input = torch.cat((query, response), axis=1)
t = time.time()
logprobs, ref_logprobs, values = self.batched_forward_pass(model_input, gen_len)
timing['time/ppo/forward_pass'] = time.time()-t
t = time.time()
rewards, non_score_reward, kl_coef = self.compute_rewards(scores, logprobs, ref_logprobs)
timing['time/ppo/compute_rewards'] = time.time()-t
t = time.time()
all_stats = []
idxs = list(range(bs))
for _ in range(self.ppo_params['ppo_epochs']):
random.shuffle(idxs)
for i in range(bs):
idx = idxs[i]
train_stats = self.train_minibatch(logprobs[idx:idx+1], values[idx:idx+1],
rewards[idx:idx+1], query[idx:idx+1],
response[idx:idx+1], model_input[idx:idx+1])
all_stats.append(train_stats)
timing['time/ppo/optimize_step'] = time.time()-t
t = time.time()
train_stats = stack_dicts(all_stats)
# reshape advantages/ratios such that they are not averaged.
train_stats['policy/advantages'] = torch.flatten(train_stats['policy/advantages']).unsqueeze(0)
train_stats['policy/ratio'] = torch.flatten(train_stats['policy/ratio']).unsqueeze(0)
stats = self.record_step_stats(scores=scores, logprobs=logprobs, ref_logprobs=ref_logprobs,
non_score_reward=non_score_reward, train_stats=train_stats,
kl_coef=kl_coef)
stats = stats_to_np(stats)
timing['time/ppo/calc_stats'] = time.time()-t
self.kl_ctl.update(stats['objective/kl'], self.ppo_params['batch_size'])
timing['time/ppo/total'] = time.time()-t0
stats.update(timing)
return stats
def batched_forward_pass(self, model_input, gen_len):
"""Calculate model outputs in multiple batches."""
bs = self.ppo_params['batch_size']
fbs = self.ppo_params['forward_batch_size']
logprobs = []
ref_logprobs = []
values = []
for i in range(int(self.ppo_params['batch_size']/fbs)):
m_input = model_input[i*fbs:(i+1)*fbs]
logits, _, v = self.model(m_input)
ref_logits, _, _ = self.ref_model(m_input)
values.append(v[:, -gen_len-1:-1].detach())
logprobs.append(logprobs_from_logits(logits[:,:-1,:], m_input[:,1:])[:, -gen_len:].detach())
ref_logprobs.append(logprobs_from_logits(ref_logits[:,:-1,:], m_input[:,1:])[:, -gen_len:].detach())
return torch.cat(logprobs), torch.cat(ref_logprobs), torch.cat(values)
def train_minibatch(self, logprobs, values, rewards, query, response, model_input):
"""Train one PPO minibatch"""
loss_p, loss_v, train_stats = self.loss(logprobs, values, rewards, query, response, model_input)
loss = loss_p + loss_v
self.optimizer.zero_grad()
loss.backward()
self.optimizer.step()
return train_stats
def compute_rewards(self, scores, logprobs, ref_logprobs):
"""Compute per token rewards from scores and KL-penalty."""
kl = logprobs - ref_logprobs
non_score_reward = -self.kl_ctl.value * kl
rewards = non_score_reward.clone().detach()
rewards[:, -1] += scores
return rewards, non_score_reward, self.kl_ctl.value
def loss(self, old_logprobs, values, rewards, query, response, model_input):
"""Calculate policy and value losses."""
lastgaelam = 0
advantages_reversed = []
gen_len = response.shape[1]
for t in reversed(range(gen_len)):
nextvalues = values[:, t + 1] if t < gen_len - 1 else 0.0
delta = rewards[:, t] + self.ppo_params['gamma'] * nextvalues - values[:, t]
lastgaelam = delta + self.ppo_params['gamma'] * self.ppo_params['lam'] * lastgaelam
advantages_reversed.append(lastgaelam)
advantages = torch.stack(advantages_reversed[::-1]).transpose(0, 1)
returns = advantages + values
advantages = whiten(advantages)
advantages = advantages.detach()
logits, _, vpred = self.model(model_input)
logprob = logprobs_from_logits(logits[:,:-1,:], model_input[:, 1:])
#only the generation part of the values/logprobs is needed
logprob, vpred = logprob[:, -gen_len:], vpred[:,-gen_len-1:-1]
vpredclipped = clip_by_value(vpred,
values - self.ppo_params["cliprange_value"],
values + self.ppo_params["cliprange_value"])
vf_losses1 = (vpred - returns)**2
vf_losses2 = (vpredclipped - returns)**2
vf_loss = .5 * torch.mean(torch.max(vf_losses1, vf_losses2))
vf_clipfrac = torch.mean(torch.gt(vf_losses2, vf_losses1).double())
ratio = torch.exp(logprob - old_logprobs)
pg_losses = -advantages * ratio
pg_losses2 = -advantages * torch.clamp(ratio,
1.0 - self.ppo_params['cliprange'],
1.0 + self.ppo_params['cliprange'])
pg_loss = torch.mean(torch.max(pg_losses, pg_losses2))
pg_clipfrac = torch.mean(torch.gt(pg_losses2, pg_losses).double())
loss = pg_loss + self.ppo_params['vf_coef'] * vf_loss
entropy = torch.mean(entropy_from_logits(logits))
approxkl = .5 * torch.mean((logprob - old_logprobs)**2)
policykl = torch.mean(logprob - old_logprobs)
return_mean, return_var = torch.mean(returns), torch.var(returns)
value_mean, value_var = torch.mean(values), torch.var(values)
stats = dict(
loss=dict(policy=pg_loss, value=vf_loss, total=loss),
policy=dict(entropy=entropy, approxkl=approxkl,policykl=policykl, clipfrac=pg_clipfrac,
advantages=advantages, advantages_mean=torch.mean(advantages), ratio=ratio),
returns=dict(mean=return_mean, var=return_var),
val=dict(vpred=torch.mean(vpred), error=torch.mean((vpred - returns) ** 2),
clipfrac=vf_clipfrac, mean=value_mean, var=value_var),
)
return pg_loss, self.ppo_params['vf_coef'] * vf_loss, flatten_dict(stats)
def record_step_stats(self, kl_coef, **data):
"""Record training step statistics."""
kl = data['logprobs'] - data['ref_logprobs']
mean_kl = torch.mean(torch.sum(kl, axis=-1))
mean_entropy = torch.mean(torch.sum(-data['logprobs'], axis=1))
mean_non_score_reward =torch.mean(torch.sum(data['non_score_reward'], axis=1))
stats = {
'objective/kl': mean_kl,
'objective/kl_dist': kl,
'objective/logprobs': data['logprobs'],
'objective/ref_logprobs': data['ref_logprobs'],
'objective/kl_coef': kl_coef,
'objective/entropy': mean_entropy,
'ppo/mean_non_score_reward': mean_non_score_reward,
}
for k, v in data['train_stats'].items():
stats[f'ppo/{k}'] = torch.mean(v, axis=0)
stats['ppo/val/var_explained'] = 1 - stats['ppo/val/error'] / stats['ppo/returns/var']
return stats
###Output
_____no_output_____
###Markdown
PPO for transformer models> A Pytorch implementation of Proximal Policy Optimization for transformer models. This follows the language model approach proposed in the paper ["Fine-Tuning Language Models from Human Preferences"](https://arxiv.org/pdf/1909.08593.pdf) and is similar to the [original implementation](https://github.com/openai/lm-human-preferences). The two main differences are 1) the method is implemented in Pytorch and 2) works with the `transformer` library by Hugging Face.
###Code
# default_exp ppo
# export
import numpy as np
import torch.nn.functional as F
from torch.optim import Adam
import torch
import collections
import time
import random
from trl.core import (logprobs_from_logits,
whiten,
clip_by_value,
entropy_from_logits,
flatten_dict,
average_torch_dicts,
stats_to_np,
stack_dicts,
add_suffix)
###Output
_____no_output_____
###Markdown
KL-controllersTo ensure that the learned policy does not deviate too much from the original language model, the KL divergence between the policy and a reference policy (the language model before PPO training) is used as an additional reward signal. Large KL-divergences are punished and staying close to the reference is rewarded.Two controllers are presented in the paper: an adaptive log-space proportional controller and a fixed controller.
###Code
# exports
class AdaptiveKLController:
"""
Adaptive KL controller described in the paper:
https://arxiv.org/pdf/1909.08593.pdf
"""
def __init__(self, init_kl_coef, target, horizon):
self.value = init_kl_coef
self.target = target
self.horizon = horizon
def update(self, current, n_steps):
target = self.target
proportional_error = np.clip(current / target - 1, -0.2, 0.2)
mult = 1 + proportional_error * n_steps / self.horizon
self.value *= mult
# exports
class FixedKLController:
"""Fixed KL controller."""
def __init__(self, kl_coef):
self.value = kl_coef
def update(self, current, n_steps):
pass
# exports
class PPOTrainer:
"""
The PPO_trainer uses Proximal Policy Optimization to optimise language models.
"""
default_params = {
"lr": 1.41e-5,
"adap_kl_ctrl": True,
"init_kl_coef":0.2,
"target": 6,
"horizon":10000,
"gamma":1,
"lam":0.95,
"cliprange": .2,
"cliprange_value":.2,
"vf_coef":.1,
"batch_size": 256,
"forward_batch_size": 16,
"ppo_epochs": 4,
}
def __init__(self, model, ref_model, **ppo_params):
"""
Initialize PPOTrainer.
Args:
model (torch.model): Hugging Face transformer GPT2 model with value head
            ref_model (torch.model): Hugging Face transformer GPT2 reference model used for KL penalty
ppo_params (dict or None): PPO parameters for training. Can include following keys:
'lr' (float): Adam learning rate, default: 1.41e-5
'batch_size' (int): Number of samples per optimisation step, default: 256
'forward_batch_size' (int): Number of samples forward passed through model at a time, default: 16
'ppo_epochs' (int): Number of optimisation epochs per batch of samples, default: 4
                'gamma' (float): Gamma parameter for advantage calculation, default: 1.
                'lam' (float): Lambda parameter for advantage calculation, default: 0.95
'cliprange_value' (float): Range for clipping values in loss calculation, default: 0.2
'cliprange' (float): Range for clipping in PPO policy gradient loss, default: 0.2
'vf_coef' (float): Scaling factor for value loss, default: 0.1
'adap_kl_ctrl' (bool): Use adaptive KL control, otherwise linear, default: True
'init_kl_coef' (float): Initial KL penalty coefficient (used for adaptive and linear control), default: 0.2
'target' (float): Target KL value for adaptive KL control, default: 6.0
'horizon' (float): Horizon for adaptive KL control, default: 10000
"""
self.ppo_params = self.default_params
self.ppo_params.update(ppo_params)
self.ref_model = ref_model
self.model = model
self.optimizer = Adam(model.parameters(), lr=self.ppo_params['lr'])
self.kl_ctl = AdaptiveKLController(self.ppo_params['init_kl_coef'],
self.ppo_params['target'],
self.ppo_params['horizon'])
###Output
_____no_output_____
###Markdown
PPO for transformer models> A PyTorch implementation of Proximal Policy Optimization for transformer models. This follows the language model approach proposed in the paper ["Fine-Tuning Language Models from Human Preferences"](https://arxiv.org/pdf/1909.08593.pdf) and is similar to the [original implementation](https://github.com/openai/lm-human-preferences). The two main differences are that 1) the method is implemented in PyTorch and 2) it works with the `transformers` library by Hugging Face.
###Code
# default_exp ppo
# export
import numpy as np
import torch.nn.functional as F
from torch.optim import Adam
import torch
import collections
import time
import random
from trl.core import (logprobs_from_logits,
whiten,
clip_by_value,
entropy_from_logits,
flatten_dict,
average_torch_dicts,
stats_to_np,
stack_dicts,
add_suffix)
###Output
_____no_output_____
###Markdown
KL-controllersTo ensure that the learned policy does not deviate too much from the original language model, the KL divergence between the policy and a reference policy (the language model before PPO training) is used as an additional reward signal. Large KL divergences are punished and staying close to the reference is rewarded. Two controllers are presented in the paper: an adaptive log-space proportional controller and a fixed controller.
###Code
# exports
class AdaptiveKLController:
"""
Adaptive KL controller described in the paper:
https://arxiv.org/pdf/1909.08593.pdf
"""
def __init__(self, init_kl_coef, target, horizon):
self.value = init_kl_coef
self.target = target
self.horizon = horizon
def update(self, current, n_steps):
target = self.target
proportional_error = np.clip(current / target - 1, -0.2, 0.2)
mult = 1 + proportional_error * n_steps / self.horizon
self.value *= mult
# exports
class FixedKLController:
"""Fixed KL controller."""
def __init__(self, kl_coef):
self.value = kl_coef
def update(self, current, n_steps):
pass
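# A minimal usage sketch (an added illustration, not from the original notebook;
# the numbers are arbitrary): a measured KL above the target grows the penalty
# coefficient, a KL below the target shrinks it.
ctl = AdaptiveKLController(init_kl_coef=0.2, target=6, horizon=10000)
ctl.update(current=12.0, n_steps=256)  # KL above target -> ctl.value rises to ~0.201
ctl.update(current=3.0, n_steps=256)   # KL below target -> ctl.value falls back toward 0.2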
# exports
class PPOTrainer:
"""
The PPOTrainer uses Proximal Policy Optimization to optimise language models.
"""
default_params = {
"lr": 1.41e-5,
"adap_kl_ctrl": True,
"init_kl_coef":0.2,
"target": 6,
"horizon":10000,
"gamma":1,
"lam":0.95,
"cliprange": .2,
"cliprange_value":.2,
"vf_coef":.1,
"batch_size": 256,
"forward_batch_size": 16,
"ppo_epochs": 4,
}
def __init__(self, model, ref_model, **ppo_params):
"""
Initialize PPOTrainer.
Args:
model (torch.model): Hugging Face transformer GPT2 model with value head
ref_model (torch.model): Hugging Face transformer GPT2 reference model used for the KL penalty
ppo_params (dict or None): PPO parameters for training. Can include following keys:
'lr' (float): Adam learning rate, default: 1.41e-5
'batch_size' (int): Number of samples per optimisation step, default: 256
'forward_batch_size' (int): Number of samples forward passed through model at a time, default: 16
'ppo_epochs' (int): Number of optimisation epochs per batch of samples, default: 4
'gamma' (float): Gamma parameter for advantage calculation, default: 1.
'lam' (float): Lambda parameter for advantage calculation, default: 0.95
'cliprange_value' (float): Range for clipping values in loss calculation, default: 0.2
'cliprange' (float): Range for clipping in PPO policy gradient loss, default: 0.2
'vf_coef' (float): Scaling factor for value loss, default: 0.1
'adap_kl_ctrl' (bool): Use adaptive KL control, otherwise a fixed KL coefficient is used, default: True
'init_kl_coef' (float): Initial KL penalty coefficient (used for adaptive and linear control), default: 0.2
'target' (float): Target KL value for adaptive KL control, default: 6.0
'horizon' (float): Horizon for adaptive KL control, default: 10000
"""
# copy the defaults so that updating them does not mutate the shared class dict
self.ppo_params = dict(self.default_params)
self.ppo_params.update(ppo_params)
self.ref_model = ref_model
self.model = model
self.optimizer = Adam(model.parameters(), lr=self.ppo_params['lr'])
# honour the 'adap_kl_ctrl' flag; fall back to a fixed coefficient when it is off
if self.ppo_params['adap_kl_ctrl']:
    self.kl_ctl = AdaptiveKLController(self.ppo_params['init_kl_coef'],
                                       self.ppo_params['target'],
                                       self.ppo_params['horizon'])
else:
    self.kl_ctl = FixedKLController(self.ppo_params['init_kl_coef'])
def step(self, query, response, scores):
"""
Run a PPO optimisation step.
args:
query (torch.tensor): tensor containing the encoded queries, shape [batch_size, query_length]
response (torch.tensor): tensor containing the encoded responses, shape [batch_size, response_length]
scores (torch.tensor): tensor containing the scores, shape [batch_size]
returns:
train_stats (dict): a summary of the training statistics
"""
bs = self.ppo_params['batch_size']
timing = dict()
t0 = time.time()
gen_len = response.shape[1]
model_input = torch.cat((query, response), axis=1)
t = time.time()
logprobs, ref_logprobs, values = self.batched_forward_pass(model_input, gen_len)
timing['time/ppo/forward_pass'] = time.time()-t
t = time.time()
rewards, non_score_reward, kl_coef = self.compute_rewards(scores, logprobs, ref_logprobs)
timing['time/ppo/compute_rewards'] = time.time()-t
t = time.time()
all_stats = []
idxs = list(range(bs))
for _ in range(self.ppo_params['ppo_epochs']):
random.shuffle(idxs)
for i in range(bs):
idx = idxs[i]
train_stats = self.train_minibatch(logprobs[idx:idx+1], values[idx:idx+1],
rewards[idx:idx+1], query[idx:idx+1],
response[idx:idx+1], model_input[idx:idx+1])
all_stats.append(train_stats)
timing['time/ppo/optimize_step'] = time.time()-t
t = time.time()
train_stats = stack_dicts(all_stats)
# reshape advantages/ratios such that they are not averaged.
train_stats['policy/advantages'] = torch.flatten(train_stats['policy/advantages']).unsqueeze(0)
train_stats['policy/ratio'] = torch.flatten(train_stats['policy/ratio']).unsqueeze(0)
stats = self.record_step_stats(scores=scores, logprobs=logprobs, ref_logprobs=ref_logprobs,
non_score_reward=non_score_reward, train_stats=train_stats,
kl_coef=kl_coef)
stats = stats_to_np(stats)
timing['time/ppo/calc_stats'] = time.time()-t
self.kl_ctl.update(stats['objective/kl'], self.ppo_params['batch_size'])
timing['time/ppo/total'] = time.time()-t0
stats.update(timing)
return stats
def batched_forward_pass(self, model_input, gen_len):
"""Calculate model outputs in multiple batches."""
bs = self.ppo_params['batch_size']
fbs = self.ppo_params['forward_batch_size']
logprobs = []
ref_logprobs = []
values = []
for i in range(int(self.ppo_params['batch_size']/fbs)):
m_input = model_input[i*fbs:(i+1)*fbs]
logits, _, v = self.model(m_input)
ref_logits, _, _ = self.ref_model(m_input)
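# the value for response token t is predicted at position t-1, hence the
# one-position shift when slicing out the last gen_len entries below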
values.append(v[:, -gen_len-1:-1].detach())
logprobs.append(logprobs_from_logits(logits[:,:-1,:], m_input[:,1:])[:, -gen_len:].detach())
ref_logprobs.append(logprobs_from_logits(ref_logits[:,:-1,:], m_input[:,1:])[:, -gen_len:].detach())
return torch.cat(logprobs), torch.cat(ref_logprobs), torch.cat(values)
def train_minibatch(self, logprobs, values, rewards, query, response, model_input):
"""Train one PPO minibatch"""
loss_p, loss_v, train_stats = self.loss(logprobs, values, rewards, query, response, model_input)
loss = loss_p + loss_v
self.optimizer.zero_grad()
loss.backward()
self.optimizer.step()
return train_stats
def compute_rewards(self, scores, logprobs, ref_logprobs):
"""Compute per token rewards from scores and KL-penalty."""
kl = logprobs - ref_logprobs
non_score_reward = -self.kl_ctl.value * kl
rewards = non_score_reward.clone().detach()
rewards[:, -1] += scores
return rewards, non_score_reward, self.kl_ctl.value
def loss(self, old_logprobs, values, rewards, query, response, model_input):
"""Calculate policy and value losses."""
lastgaelam = 0
advantages_reversed = []
gen_len = response.shape[1]
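# Generalized Advantage Estimation (GAE): walk the response backwards and
# accumulate the TD errors delta, discounted by gamma * lam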
for t in reversed(range(gen_len)):
nextvalues = values[:, t + 1] if t < gen_len - 1 else 0.0
delta = rewards[:, t] + self.ppo_params['gamma'] * nextvalues - values[:, t]
lastgaelam = delta + self.ppo_params['gamma'] * self.ppo_params['lam'] * lastgaelam
advantages_reversed.append(lastgaelam)
advantages = torch.stack(advantages_reversed[::-1]).transpose(0, 1)
returns = advantages + values
advantages = whiten(advantages)
advantages = advantages.detach()
logits, _, vpred = self.model(model_input)
logprob = logprobs_from_logits(logits[:,:-1,:], model_input[:, 1:])
#only the generation part of the values/logprobs is needed
logprob, vpred = logprob[:, -gen_len:], vpred[:,-gen_len-1:-1]
vpredclipped = clip_by_value(vpred,
values - self.ppo_params["cliprange_value"],
values + self.ppo_params["cliprange_value"])
vf_losses1 = (vpred - returns)**2
vf_losses2 = (vpredclipped - returns)**2
vf_loss = .5 * torch.mean(torch.max(vf_losses1, vf_losses2))
vf_clipfrac = torch.mean(torch.gt(vf_losses2, vf_losses1).double())
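# PPO clipped surrogate objective: take the pessimistic (elementwise max)
# of the unclipped and clipped policy-gradient losses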
ratio = torch.exp(logprob - old_logprobs)
pg_losses = -advantages * ratio
pg_losses2 = -advantages * torch.clamp(ratio,
1.0 - self.ppo_params['cliprange'],
1.0 + self.ppo_params['cliprange'])
pg_loss = torch.mean(torch.max(pg_losses, pg_losses2))
pg_clipfrac = torch.mean(torch.gt(pg_losses2, pg_losses).double())
loss = pg_loss + self.ppo_params['vf_coef'] * vf_loss
entropy = torch.mean(entropy_from_logits(logits))
approxkl = .5 * torch.mean((logprob - old_logprobs)**2)
policykl = torch.mean(logprob - old_logprobs)
return_mean, return_var = torch.mean(returns), torch.var(returns)
value_mean, value_var = torch.mean(values), torch.var(values)
stats = dict(
loss=dict(policy=pg_loss, value=vf_loss, total=loss),
policy=dict(entropy=entropy, approxkl=approxkl,policykl=policykl, clipfrac=pg_clipfrac,
advantages=advantages, advantages_mean=torch.mean(advantages), ratio=ratio),
returns=dict(mean=return_mean, var=return_var),
val=dict(vpred=torch.mean(vpred), error=torch.mean((vpred - returns) ** 2),
clipfrac=vf_clipfrac, mean=value_mean, var=value_var),
)
return pg_loss, self.ppo_params['vf_coef'] * vf_loss, flatten_dict(stats)
def record_step_stats(self, kl_coef, **data):
"""Record training step statistics."""
kl = data['logprobs'] - data['ref_logprobs']
mean_kl = torch.mean(torch.sum(kl, axis=-1))
mean_entropy = torch.mean(torch.sum(-data['logprobs'], axis=1))
mean_non_score_reward = torch.mean(torch.sum(data['non_score_reward'], axis=1))
stats = {
'objective/kl': mean_kl,
'objective/kl_dist': kl,
'objective/logprobs': data['logprobs'],
'objective/ref_logprobs': data['ref_logprobs'],
'objective/kl_coef': kl_coef,
'objective/entropy': mean_entropy,
'ppo/mean_non_score_reward': mean_non_score_reward,
}
for k, v in data['train_stats'].items():
stats[f'ppo/{k}'] = torch.mean(v, axis=0)
stats['ppo/val/var_explained'] = 1 - stats['ppo/val/error'] / stats['ppo/returns/var']
return stats
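# --- illustrative usage sketch (an added example, not from the original notebook;
# the model class name and the tensor variables below are assumptions) ---
# model = GPT2HeadWithValueModel.from_pretrained('gpt2')
# ref_model = GPT2HeadWithValueModel.from_pretrained('gpt2')
# ppo_trainer = PPOTrainer(model, ref_model, batch_size=256, forward_batch_size=16)
# stats = ppo_trainer.step(query_tensors, response_tensors, reward_tensors)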
###Output
_____no_output_____ |
notebooks/Learning Units/Models/Supervised/Linear Regression.ipynb | ###Markdown
Linear RegressionIn this learning unit, we will dive into linear regression, one of the most commonly used techniques for regression tasks. It is assumed that you have read the previous learning units on regression and that you have a working understanding of Python and the NumPy library, which are covered in previous learning units as well. Practical exampleLet's say that we are a real estate agent and we would like to know for how much we could try to sell a house, based on some specifications about it. We have access to a large database containing information about houses such as the number of bedrooms, whether they have a garage, the square footage, and so on. We also have access to how much these houses were sold for. Surely we can make use of this information to make better guesses than by solely relying on our gut feeling. If you had to write down a rule of thumb to estimate the price of a house given some information about it, what would that rule be? You could of course guess the same price regardless of the house, but surely, a shack in the middle of the woods will cost less than a mansion in Beverly Hills. So there must be a better way! From experience, you might have realized that bigger houses generally sell for more. The number of bedrooms and special features like a garden or a garage usually influence the price of the house. So how could you use this logic and turn it into a rule that you can follow mechanically? You could define how much each bedroom costs, or how much having a garage will hike up the price of the house. That way, you simply have to look at your features, calculate how much each of them costs, and add them all together. Of course, there might be a minimum price that every house will sell for. In code, it could look like this:
###Code
# Define costs
minimum_cost = 80000
bedroom_cost = 25000
garage_cost = 10000
square_foot_price = 100
def price(n_bedrooms, n_garages, square_footage):
""" returns the price of a house in dollars """
return minimum_cost + bedroom_cost * n_bedrooms + garage_cost * n_garages + square_foot_price * square_footage
###Output
_____no_output_____
###Markdown
Now, we can estimate the price of houses!
###Code
price(n_bedrooms=3, n_garages = 1, square_footage = 1000)
###Output
_____no_output_____
###Markdown
Of course, in this example, we just gave you the cost of each feature, but what if you did not know them beforehand? How could you find costs that will give you good estimates for house prices? This is what linear regression tries to accomplish. It automatically finds the proper _costs_ such that if you add them all together, you will make the best guess about the price of the house. What does the model look like?One of the most important parts of machine learning is to understand what the model is, what's under the hood. A linear regression model is defined by a collection of weights or coefficients related to features, such that, given the features, we can compute an estimate the following way:
```python
def predict(features):
    return coefficients.dot(features)
```
The model can be represented by its coefficients, meaning that the size of the model is proportional to the number of features, not the size of the training data. The offsetTo be complete, it is important to add an offset to the features. This allows the model to account for some base value like the minimum price of the house as we described in the previous section. The offset will simply be a "dummy" feature that will always equal 1 (see the sketch after the notes below). How does the model learn?Of course, what we want to do is train our model such that it automatically finds the best coefficients. As explained in the learning unit about regression, a better model is traditionally the one that minimizes the mean-squared error. Therefore, we could sample random coefficients and then pick the ones that optimize this criterion. In theory, if we sample enough coefficients, we should eventually find the best ones. Obviously, in practice, this does not work. As there is an infinite number of possible coefficients, we cannot sample them all, and therefore we might miss out on the best coefficients and be stuck with suboptimal ones. So how can we be smarter? Let's take an example with one feature `x` and a target variable `y`. Obviously, the same principles apply to cases with more features.
> It is also important to note that linear regression only works with numerical features, meaning that categorical features would have to be transformed into numerical features.
> Linear regression cannot handle missing values, and therefore, either data points with missing values should be omitted, or imputation should be performed.
> See the learning unit about preprocessing for more information
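As a quick illustration (a sketch added here, not part of the unit's original code; the coefficient values are made up), the offset can be handled by prepending a constant 1 to every feature vector:
```python
import numpy as np

# hypothetical coefficients: [offset, bedroom cost, garage cost, price per square foot]
coefficients = np.array([80000.0, 25000.0, 10000.0, 100.0])

def predict(features):
    # prepend the "dummy" offset feature that always equals 1
    return coefficients.dot(np.concatenate(([1.0], features)))

print(predict(np.array([3, 1, 1000])))  # 3 bedrooms, 1 garage, 1000 sqft -> 265000.0
```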
###Code
import numpy as np
# Create the data
n_samples = 20
x = np.arange(n_samples)
y = np.arange(n_samples)
# Plotting the data
import matplotlib.pyplot as plt
plt.figure(1,figsize=(10,5))
plt.title("Simple linear regression problem: The data")
plt.xlabel("feature")
plt.ylabel("target")
plt.plot(x,y,'bx')
plt.show()
###Output
_____no_output_____
###Markdown
You can clearly see that all of these points sit on the line `y = x` with offset 0. But how can the computer see it? Let's say that the model starts with a random guess for the coefficients.
###Code
# Initialize the coefficients with random values
coefficients = np.random.rand(2)
print("coefficients =",coefficients)
def predict(coefficients):
""" Function computing the output of the model based on coefficients """
return coefficients[0] + coefficients[1] * x
# Plot the target variable as well as the output of the model
plt.figure(1,figsize=(10,5))
plt.title("Simple linear regression problem: Random guess for the coefficients")
plt.xlabel("feature")
plt.ylabel("target")
target, = plt.plot(x, y,'bx')
model, = plt.plot(x, predict(coefficients),'r')
plt.legend([target, model], ["Target values", "Current model"])
plt.show()
###Output
coefficients = [ 0.73699517 0.05088406]
###Markdown
As a human, if you had to tell the computer how to improve, you would probably say something along the lines of "your line should be steeper". This is something the computer can do by adjusting the current coefficients of the model.This raises two questions:- How can the computer know whether to increase or decrease each coefficient?- How much should the computer increase or decrease each coefficient?The computer solves the first problem much as a human would. It tries increasing and decreasing each coefficient and sees which direction helps by checking which one decreases the mean-squared error. If you brush up on your calculus knowledge, you'll realize that we just described computing the derivative of the mean-squared error with respect to each coefficient!In code, computing the mean-squared error as a function of the coefficients looks like this:
###Code
def MSE(coefficients):
return np.mean((predict(coefficients)-y)**2)
print("Current MSE =", MSE(coefficients))
###Output
Current MSE = 98.5041803484
###Markdown
Thankfully, the mean-squared error is a function that has a derivative that can be derived analytically. Its derivative is defined as:
###Code
def derivative_MSE(coefficients):
# Compute error
error = predict(coefficients) - y
# Adding the offset to the features
features = np.array([[1, feature] for feature in x])
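# the MSE gradient is proportional to error . features; the constant factor
# in front only rescales the step and is absorbed by the learning rate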
return 0.5 * error.dot(features)/features.shape[0]
print("Current derivatives =", derivative_MSE(coefficients))
###Output
Current derivatives = [ -4.13980313 -55.10718225]
###Markdown
The sign of the derivative will tell us whether we need to increase or decrease each coefficient. Since we are trying to minimize the function, a negative derivative means that we have to increase the coefficient and vice versa.This can be easily shown by actually increasing and decreasing each coefficient by a small value and then computing the mean-squared error. If the mean-squared error is smaller, it means that this direction is good.
###Code
epsilon = 0.1
coef = np.array([0.5,0.5])
# Current MSE
current = MSE(coef)
print("Current MSE =", current)
# Current derivative (negative values mean that the coefficients must be increased)
print("Current deratives = ", derivative_MSE(coef),"\n")
print("decrease of MSE by increasing coef[0] = ", current - MSE(np.array([coef[0] + epsilon, coef[1]])))
print("decrease of MSE by decreasing coef[0] = ", current - MSE(np.array([coef[0] - epsilon, coef[1]])))
print("decrease of MSE by increasing coef[1] = ", current - MSE(np.array([coef[0], coef[1] + epsilon])))
print("decrease of MSE by decreasing coef[1] = ", current - MSE(np.array([coef[0], coef[1] - epsilon])))
###Output
Current MSE = 26.375
Current derivatives = [ -2.125 -28.5 ]
decrease of MSE by increasing coef[0] = 0.84
decrease of MSE by decreasing coef[0] = -0.86
decrease of MSE by increasing coef[1] = 10.165
decrease of MSE by decreasing coef[1] = -12.635
###Markdown
Now we need to answer the second question: "how much should we modify the current coefficients?" Indeed, modifying them too much might backfire, and modifying them too little might make the training very slow.Thankfully, the derivatives not only provide us with the direction in which we should modify our coefficients, they also give us an indication of how _useful_ it is to modify them. A derivative with a high absolute value indicates that a large change in the function can be achieved by modifying the coefficient in that direction. Vice versa, a derivative close to zero indicates that little to no improvement will be achieved.We can therefore use our derivatives as an indicator of how much we should modify each coefficient. To be even more in control, we introduce a learning rate $\alpha$ which scales these modifications. A higher learning rate means that the modifications will be _harsher_, and vice versa.Updating the coefficients will look like this:
###Code
def update_coefficients(coefficients, learning_rate):
""" function updating coefficients based on their derivatives """
derivatives = derivative_MSE(coefficients)
return coefficients - learning_rate*derivatives
def run_update(learning_rate):
""" updates coefficients, prints information about the update and plot it """
print("Current coefficients =", coefficients)
print("Current MSE = ", MSE(coefficients))
print("Current derivatives =", derivative_MSE(coefficients),"\n")
new_coefficients = update_coefficients(coefficients,learning_rate)
print("New coefficients =", new_coefficients)
print("New MSE = ", MSE(new_coefficients))
print("New derivatives =", derivative_MSE(new_coefficients),"\n")
plt.figure(1,figsize=(10,5))
plt.title("Simple linear regression problem: Updated")
plt.xlabel("feature")
plt.ylabel("target")
target, = plt.plot(x, y,'bx')
old, = plt.plot(x, predict(coefficients),'r')
new, = plt.plot(x, predict(new_coefficients),'g')
plt.legend([target, old, new], ['Target variable', 'Old coefficients', 'New coefficients'])
plt.show()
###Output
_____no_output_____
###Markdown
First, let's see what happens when we pick a good learning rate.
###Code
run_update(learning_rate=0.02)
###Output
Current coefficients = [ 0.73699517 0.05088406]
Current MSE = 98.5041803484
Current derivatives = [ -4.13980313 -55.10718225]
New coefficients = [ 0.81979123 1.1530277 ]
New MSE = 5.94768089539
New derivatives = [ 1.13677721 13.34346913]
###Markdown
As we can see, the model with the updated coefficients is closer to the target variable, and therefore has a smaller mean-squared error. We can also see that the derivatives themselves have decreased, indicating that we are getting _closer_ to finding the optimal coefficients.Now let's see what happens when the learning rate is too high.
###Code
run_update(learning_rate=0.1)
###Output
Current coefficients = [ 0.73699517 0.05088406]
Current MSE = 98.5041803484
Current derivatives = [ -4.13980313 -55.10718225]
New coefficients = [ 1.15097548 5.56160228]
New MSE = 2670.89490215
New derivatives = [ 22.24309859 287.14607461]
###Markdown
As you can see, the line did move in the direction of the target variable, but went too far, which ended up being worse than before we did anything at all. You can also see that the derivatives are now even larger than before, meaning that if we performed another update, we would be even worse off than before.Now, we'll see what happens when the learning rate happens to be too small.
###Code
run_update(learning_rate=0.00005)
###Output
Current coefficients = [ 0.73699517 0.05088406]
Current MSE = 98.5041803484
Current derivatives = [ -4.13980313 -55.10718225]
New coefficients = [ 0.73720216 0.05363942]
New MSE = 97.894340939
New derivatives = [ -4.12661168 -54.93605562]
###Markdown
You can see that, whilst not as bad as setting it too high, setting the learning rate too low results in very slow updates, which in the worst case could lead to the training stopping early.Now that we've shown how to update the coefficients once, we can keep going! Indeed, we can update the coefficients until we reach a stopping criterion. This can be a certain number of update steps, a certain amount of time, or, most commonly, the point at which the MSE decreases by less than a small threshold _epsilon_. This method is most commonly known as gradient descent.Doing the latter will look like this:
###Code
def find_best_coefficients(original_coefficients, learning_rate, epsilon):
""" runs gradient descent to find the best coefficients """
coef = original_coefficients.copy()
old_MSE = float('inf')
new_MSE = MSE(coef)
# Loops until convergence
while(old_MSE - new_MSE > epsilon):
old_MSE = MSE(coef)
# Update the coefficients
coef -= learning_rate * derivative_MSE(coef)
new_MSE = MSE(coef)
# Plot the results
plt.figure(1,figsize=(10,5))
plt.title("Simple linear regression problem: solved")
plt.xlabel("feature")
plt.ylabel("target")
target, = plt.plot(x, y,'bx')
old, = plt.plot(x, predict(original_coefficients),'r')
new, = plt.plot(x, predict(coef),'g')
plt.legend([target, old, new], ['Target variable', 'Original coefficients', 'Final coefficients'])
plt.show()
return {"MSE":MSE(coef), "coefficients":coef}
find_best_coefficients(original_coefficients = coefficients, learning_rate = 0.002, epsilon = 0.000001)
###Output
_____no_output_____
###Markdown
As you can see, we managed to find weights that _almost_ fit our data perfectly. This may not always happen. Indeed, if the data contains noise, a linear model might not be able to represent the data perfectly. However, the technique that we described will still find the best line explaining the data.Here is an example below.
###Code
noise_level = 0.5
# Add zero-mean Gaussian noise to the data
y = y + np.random.normal(0, noise_level, len(y))
original_coef = np.random.rand(2)
find_best_coefficients(original_coefficients = original_coef, learning_rate = 0.001, epsilon = 0.0001)
###Output
_____no_output_____ |
examples/gallery/demos/bokeh/mandelbrot_section.ipynb | ###Markdown
Most examples work across multiple plotting backends; this example is also available for:* [Matplotlib - mandelbrot section](../matplotlib/mandelbrot_section.ipynb)A HoloViews demo that used to be showcased on the [holoviews.org](http://holoviews.org) homepage.
###Code
import numpy as np
import holoviews as hv
from holoviews import opts
hv.extension('bokeh')
###Output
_____no_output_____
###Markdown
Load the data
###Code
import io
try: from urllib2 import urlopen
except: from urllib.request import urlopen
raw = urlopen('http://assets.holoviews.org/data/mandelbrot.npy').read()
array = np.load(io.BytesIO(raw)).astype(np.float32)[::4,::4]
###Output
_____no_output_____
###Markdown
Plot
###Code
dots = np.linspace(-0.45, 0.45, 19)
fractal = hv.Image(array)
# First example on the old holoviews.org homepage was:
# ((fractal * hv.HLine(y=0)).hist() + fractal.sample(y=0))
layouts = {y: (fractal * hv.Points(fractal.sample([(i,y) for i in dots])) +
fractal.sample(y=y) +
hv.operation.threshold(fractal, level=np.percentile(fractal.sample(y=y)['z'], 90)) +
hv.operation.contours(fractal, levels=[np.percentile(fractal.sample(y=y)['z'], 60)]))
for y in np.linspace(-0.3, 0.3, 11)} # Half the frames of the bokeh version
composition = hv.HoloMap(layouts, kdims='Y').collate().cols(2)
composition.options(opts.Contours(show_legend=False), opts.Points(scaling_factor=50))
###Output
_____no_output_____
###Markdown
Most examples work across multiple plotting backends; this example is also available for:* [Matplotlib - mandelbrot section](../matplotlib/mandelbrot_section.ipynb)A HoloViews demo that used to be showcased on the [holoviews.org](http://holoviews.org) homepage.
###Code
import numpy as np
import holoviews as hv
from holoviews import dim, opts
hv.extension('bokeh')
###Output
_____no_output_____
###Markdown
Load the data
###Code
import io
try: from urllib2 import urlopen
except: from urllib.request import urlopen
raw = urlopen('http://assets.holoviews.org/data/mandelbrot.npy').read()
array = np.load(io.BytesIO(raw)).astype(np.float16)[::2,::2]
###Output
_____no_output_____
###Markdown
Plot
###Code
dots = np.linspace(-0.45, 0.45, 19)
fractal = hv.Image(array)
# First example on the old holoviews.org homepage was:
# ((fractal * hv.HLine(y=0)).hist() + fractal.sample(y=0))
layouts = {y: (fractal * hv.Points(fractal.sample([(i,y) for i in dots])) +
fractal.sample(y=y) +
hv.operation.threshold(fractal, level=np.percentile(fractal.sample(y=y)['z'], 90)) +
hv.operation.contours(fractal, levels=[np.percentile(fractal.sample(y=y)['z'], 60)]))
for y in np.linspace(-0.3, 0.3, 21)}
layout = hv.HoloMap(layouts, kdims='Y').collate()
layout.opts(
opts.Contours(color='w', show_legend=False),
opts.Points(size=dim('z')*10)).cols(2)
###Output
_____no_output_____
###Markdown
Most examples work across multiple plotting backends; this example is also available for:* [Matplotlib - mandelbrot section](../matplotlib/mandelbrot_section.ipynb)A HoloViews demo that used to be showcased on the [holoviews.org](http://holoviews.org) homepage.
###Code
import numpy as np
import holoviews as hv
hv.extension('bokeh')
###Output
_____no_output_____
###Markdown
Load the data
###Code
import io
try: from urllib2 import urlopen
except: from urllib.request import urlopen
raw = urlopen('http://assets.holoviews.org/data/mandelbrot.npy').read()
array = np.load(io.BytesIO(raw)).astype(np.float32)[::4,::4]
###Output
_____no_output_____
###Markdown
Plot
###Code
%%opts Points [scaling_factor=50] Contours [show_legend=False] (color='w')
dots = np.linspace(-0.45, 0.45, 19)
fractal = hv.Image(array)
# First example on the old holoviews.org homepage was:
# ((fractal * hv.HLine(y=0)).hist() + fractal.sample(y=0))
layouts = {y: (fractal * hv.Points(fractal.sample([(i,y) for i in dots])) +
fractal.sample(y=y) +
hv.operation.threshold(fractal, level=np.percentile(fractal.sample(y=y)['z'], 90)) +
hv.operation.contours(fractal, levels=[np.percentile(fractal.sample(y=y)['z'], 60)]))
for y in np.linspace(-0.3, 0.3, 11)} # Half the frames of the bokeh version
hv.HoloMap(layouts, kdims=['Y']).collate().cols(2)
###Output
_____no_output_____
###Markdown
Most examples work across multiple plotting backends; this example is also available for:* [Matplotlib - mandelbrot section](../matplotlib/mandelbrot_section.ipynb)A HoloViews demo that used to be showcased on the [holoviews.org](http://holoviews.org) homepage.
###Code
import numpy as np
import holoviews as hv
hv.extension('bokeh')
###Output
_____no_output_____
###Markdown
Load the data
###Code
import io
try: from urllib2 import urlopen
except: from urllib.request import urlopen
raw = urlopen('http://assets.holoviews.org/data/mandelbrot.npy').read()
array = np.load(io.BytesIO(raw)).astype(np.float32)[::4,::4]
###Output
_____no_output_____
###Markdown
Plot
###Code
%%opts Points [scaling_factor=50] Contours [show_legend=False] (color='w')
dots = np.linspace(-0.45, 0.45, 19)
fractal = hv.Image(array)
# First example on the old holoviews.org homepage was:
# ((fractal * hv.HLine(y=0)).hist() + fractal.sample(y=0))
layouts = {y: (fractal * hv.Points(fractal.sample([(i,y) for i in dots])) +
fractal.sample(y=y) +
hv.operation.threshold(fractal, level=np.percentile(fractal.sample(y=y)['z'], 90)) +
hv.operation.contours(fractal, levels=[np.percentile(fractal.sample(y=y)['z'], 60)]))
for y in np.linspace(-0.3, 0.3, 11)} # Half the frames of the bokeh version
hv.HoloMap(layouts, kdims='Y').collate().cols(2)
###Output
_____no_output_____ |
code/chap05mine.ipynb | ###Markdown
Modeling and Simulation in PythonChapter 5Copyright 2017 Allen DowneyLicense: [Creative Commons Attribution 4.0 International](https://creativecommons.org/licenses/by/4.0)
###Code
# Configure Jupyter so figures appear in the notebook
%matplotlib inline
# Configure Jupyter to display the assigned value after an assignment
%config InteractiveShell.ast_node_interactivity='last_expr_or_assign'
# import functions from the modsim.py module
from modsim import *
###Output
_____no_output_____
###Markdown
Reading dataPandas is a library that provides tools for reading and processing data. `read_html` reads a web page from a file or the Internet and creates one `DataFrame` for each table on the page.
###Code
from pandas import read_html
###Output
_____no_output_____
###Markdown
The data directory contains a downloaded copy of https://en.wikipedia.org/wiki/World_population_estimates. The arguments of `read_html` specify the file to read and how to interpret the tables in the file. The result, `tables`, is a sequence of `DataFrame` objects; `len(tables)` reports the length of the sequence.
###Code
filename = 'data/World_population_estimates.html'
tables = read_html(filename, header=0, index_col=0, decimal='M')
len(tables)
###Output
_____no_output_____
###Markdown
We can select the `DataFrame` we want using the bracket operator. The tables are numbered from 0, so `tables[2]` is actually the third table on the page.`head` selects the header and the first five rows.
###Code
table2 = tables[2]
table2.head()
###Output
_____no_output_____
###Markdown
`tail` selects the last five rows.
###Code
table2.tail()
###Output
_____no_output_____
###Markdown
Long column names are awkward to work with, but we can replace them with abbreviated names.
###Code
table2.columns = ['census', 'prb', 'un', 'maddison',
'hyde', 'tanton', 'biraben', 'mj',
'thomlinson', 'durand', 'clark']
###Output
_____no_output_____
###Markdown
Here's what the DataFrame looks like now.
###Code
table2.head()
###Output
_____no_output_____
###Markdown
The first column, which is labeled `Year`, is special. It is the **index** for this `DataFrame`, which means it contains the labels for the rows.Some of the values use scientific notation; for example, `2.544000e+09` is shorthand for $2.544 \cdot 10^9$ or 2.544 billion.`NaN` is a special value that indicates missing data. SeriesWe can use dot notation to select a column from a `DataFrame`. The result is a `Series`, which is like a `DataFrame` with a single column.
###Code
census = table2.census
census.head()
census.tail()
###Output
_____no_output_____
###Markdown
Like a `DataFrame`, a `Series` contains an index, which labels the rows.`1e9` is scientific notation for $1 \cdot 10^9$ or 1 billion. From here on, we will work in units of billions.
###Code
un = table2.un / 1e9
un.head()
census = table2.census / 1e9
census.head()
###Output
_____no_output_____
###Markdown
Here's what these estimates look like.
###Code
plot(census, ':', label='US Census')
plot(un, '--', label='UN DESA')
decorate(xlabel='Year',
ylabel='World population (billion)')
savefig('figs/chap03-fig01.pdf')
###Output
Saving figure to file figs/chap03-fig01.pdf
###Markdown
The following expression computes the elementwise differences between the two series, then divides through by the UN value to produce [relative errors](https://en.wikipedia.org/wiki/Approximation_error), then finds the largest element.So the largest relative error between the estimates is about 1.3%.
###Code
max(abs(census - un) / un) * 100
###Output
_____no_output_____
###Markdown
**Exercise:** Break down that expression into smaller steps and display the intermediate results, to make sure you understand how it works.1. Compute the elementwise differences, `census - un`2. Compute the absolute differences, `abs(census - un)`3. Compute the relative differences, `abs(census - un) / un`4. Compute the percent differences, `abs(census - un) / un * 100`
###Code
census - un
abs(census - un)
abs(census - un) / un
abs(census - un) / un * 100
###Output
_____no_output_____
###Markdown
`max` and `abs` are built-in functions provided by Python, but NumPy also provides versions that are a little more general. When you import `modsim`, you get the NumPy versions of these functions. Constant growth We can select a value from a `Series` using bracket notation. Here's the first element:
###Code
census[1950]
###Output
_____no_output_____
###Markdown
And the last value.
###Code
census[2016]
###Output
_____no_output_____
###Markdown
But rather than "hard code" those dates, we can get the first and last labels from the `Series`:
###Code
t_0 = get_first_label(census)
t_end = get_last_label(census)
elapsed_time = t_end - t_0
###Output
_____no_output_____
###Markdown
And we can get the first and last values:
###Code
p_0 = get_first_value(census)
p_end = get_last_value(census)
###Output
_____no_output_____
###Markdown
Then we can compute the average annual growth in billions of people per year.
###Code
total_growth = p_end - p_0
annual_growth = total_growth / elapsed_time
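# this is the slope of the straight line through the first and last
# data points, in billions of people per year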
###Output
_____no_output_____
###Markdown
TimeSeries Now let's create a `TimeSeries` to contain values generated by a linear growth model.
###Code
results = TimeSeries()
###Output
_____no_output_____
###Markdown
Initially the `TimeSeries` is empty, but we can initialize it so the starting value, in 1950, is the 1950 population estimated by the US Census.
###Code
results[t_0] = census[t_0]
results
###Output
_____no_output_____
###Markdown
After that, the population in the model grows by a constant amount each year.
###Code
for t in linrange(t_0, t_end):
results[t+1] = results[t] + annual_growth
###Output
_____no_output_____
###Markdown
Here's what the results looks like, compared to the actual data.
###Code
plot(census, ':', label='US Census')
plot(un, '--', label='UN DESA')
plot(results, color='gray', label='model')
decorate(xlabel='Year',
ylabel='World population (billion)',
title='Constant growth')
savefig('figs/chap03-fig02.pdf')
###Output
Saving figure to file figs/chap03-fig02.pdf
###Markdown
The model fits the data pretty well after 1990, but not so well before. Exercises**Optional Exercise:** Try fitting the model using data from 1970 to the present, and see if that does a better job.Hint: 1. Copy the code from above and make a few changes. Test your code after each small change.2. Make sure your `TimeSeries` starts in 1950, even though the estimated annual growth is based on later data.3. You might want to add a constant to the starting value to match the data better.
###Code
shift = .4
t_0 = get_first_label(census)
t_r = 1970
t_end = get_last_label(census)
p_0 = census[t_0]
p_r = census[t_r]
p_end = get_last_value(census)
elapsed_time_range = t_end - t_r
total_growth_range = p_end - p_r
annual_growth_range = total_growth_range / elapsed_time_range
results = TimeSeries()
results[t_0] = census[t_0]-shift
results
for t in linrange(t_0, t_end):
results[t+1] = results[t] + annual_growth_range  # use the growth rate estimated from 1970 onward
plot(census, ':', label='US Census')
plot(un, '--', label='UN DESA')
plot(results, color='gray', label='model')
decorate(xlabel='Year',
ylabel='World population (billion)',
title='Constant growth')
savefig('figs/chap03-fig03.pdf')
###Output
Saving figure to file figs/chap03-fig03.pdf
###Markdown
Modeling and Simulation in PythonChapter 5: DesignCopyright 2017 Allen DowneyLicense: [Creative Commons Attribution 4.0 International](https://creativecommons.org/licenses/by/4.0)
###Code
# If you want the figures to appear in the notebook,
# and you want to interact with them, use
# %matplotlib notebook
# If you want the figures to appear in the notebook,
# and you don't want to interact with them, use
# %matplotlib inline
# If you want the figures to appear in separate windows, use
# %matplotlib qt5
# To switch from one to another, you have to select Kernel->Restart
%matplotlib inline
from modsim import *
###Output
_____no_output_____
###Markdown
SIR implementationWe'll use a `State` object to represent the number or fraction of people in each compartment.
###Code
init = State(S=89, I=1, R=0)
init
###Output
_____no_output_____
###Markdown
To convert from number of people to fractions, we divide through by the total.
###Code
init /= sum(init)
init
###Output
_____no_output_____
###Markdown
`make_system` creates a `System` object with the given parameters.
###Code
def make_system(beta, gamma):
"""Make a system object for the SIR model.
beta: contact rate (per day)
gamma: recovery rate (per day)
returns: System object
"""
init = State(S=89, I=1, R=0)
init /= sum(init)
t0 = 0
t_end = 7 * 14
return System(init=init, t0=t0, t_end=t_end,
beta=beta, gamma=gamma)
###Output
_____no_output_____
###Markdown
Here's an example with hypothetical values for `beta` and `gamma`.
###Code
tc = 3 # time between contacts in days
tr = 4 # recovery time in days
beta = 1 / tc # contact rate in per day
gamma = 1 / tr # recovery rate in per day
system = make_system(beta, gamma)
###Output
_____no_output_____
###Markdown
The update function takes the state during the current time step and returns the state during the next time step.
###Code
def update1(state, system):
"""Update the SIR model.
state: State with variables S, I, R
system: System with beta and gamma
returns: State object
"""
s, i, r = state
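# one day of the discrete SIR update: new infections scale with both the
# susceptible and infected fractions; recoveries scale with the infected only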
infected = system.beta * i * s
recovered = system.gamma * i
s -= infected
i += infected - recovered
r += recovered
return State(S=s, I=i, R=r)
###Output
_____no_output_____
###Markdown
To run a single time step, we call it like this:
###Code
state = update1(init, system)
state
###Output
_____no_output_____
###Markdown
Now we can run a simulation by calling the update function for each time step.
###Code
def run_simulation(system, update_func):
"""Runs a simulation of the system.
system: System object
update_func: function that updates state
returns: State object for final state
"""
state = system.init
for t in linrange(system.t0, system.t_end):
state = update_func(state, system)
return state
###Output
_____no_output_____
###Markdown
The result is the state of the system at `t_end`
###Code
run_simulation(system, update1)
###Output
_____no_output_____
###Markdown
**Exercise** Suppose the time between contacts is 4 days and the recovery time is 5 days. After 14 weeks, how many students, total, have been infected?Hint: what is the change in `S` between the beginning and the end of the simulation?
###Code
tc = 4 # time between contacts in days
tr = 5 # recovery time in days
beta = 1 / tc # contact rate in per day
gamma = 1 / tr # recovery rate in per day
system = make_system(beta, gamma)
s_start = system.init.S
s_start
end = run_simulation(system, update1)
s_end = end.S
s_end
change_in_s = s_start - s_end
change_in_s
###Output
_____no_output_____
###Markdown
Using Series objects If we want to store the state of the system at each time step, we can use one `TimeSeries` object for each state variable.
###Code
def run_simulation(system, update_func):
"""Runs a simulation of the system.
Add three Series objects to the System: S, I, R
system: System object
update_func: function that updates state
"""
S = TimeSeries()
I = TimeSeries()
R = TimeSeries()
state = system.init
t0 = system.t0
S[t0], I[t0], R[t0] = state
for t in linrange(system.t0, system.t_end):
state = update_func(state, system)
S[t+1], I[t+1], R[t+1] = state
system.S = S
system.I = I
system.R = R
###Output
_____no_output_____
###Markdown
Here's how we call it.
###Code
tc = 3 # time between contacts in days
tr = 4 # recovery time in days
beta = 1 / tc # contact rate in per day
gamma = 1 / tr # recovery rate in per day
system = make_system(beta, gamma)
run_simulation(system, update1)
###Output
_____no_output_____
###Markdown
And then we can plot the results.
###Code
def plot_results(S, I, R):
"""Plot the results of a SIR model.
S: TimeSeries
I: TimeSeries
R: TimeSeries
"""
plot(S, '--', color='blue', label='Susceptible')
plot(I, '-', color='red', label='Infected')
plot(R, ':', color='green', label='Recovered')
decorate(xlabel='Time (days)',
ylabel='Fraction of population')
###Output
_____no_output_____
###Markdown
Here's what they look like.
###Code
plot_results(system.S, system.I, system.R)
savefig('chap05-fig01.pdf')
###Output
Saving figure to file chap05-fig01.pdf
###Markdown
Using a DataFrame Instead of making three `TimeSeries` objects, we can use one `DataFrame`.We have to use `loc` to indicate which row we want to assign the results to. But then Pandas does the right thing, matching up the state variables with the columns of the `DataFrame`.
###Code
def run_simulation(system, update_func):
"""Runs a simulation of the system.
Add a DataFrame to the System: results
system: System object
update_func: function that updates state
"""
frame = DataFrame(columns=system.init.index)
frame.loc[system.t0] = system.init
for t in linrange(system.t0, system.t_end):
frame.loc[t+1] = update_func(frame.loc[t], system)
system.results = frame
###Output
_____no_output_____
###Markdown
Here's how we run it, and what the result looks like.
###Code
tc = 3 # time between contacts in days
tr = 4 # recovery time in days
beta = 1 / tc # contact rate in per day
gamma = 1 / tr # recovery rate in per day
system = make_system(beta, gamma)
run_simulation(system, update1)
system.results.head()
###Output
_____no_output_____
###Markdown
We can extract the results and plot them.
###Code
frame = system.results
plot_results(frame.S, frame.I, frame.R)
###Output
_____no_output_____
###Markdown
**Exercise** Suppose the time between contacts is 4 days and the recovery time is 5 days. Simulate this scenario for 14 days and plot the results.
###Code
tc = 4
tr = 5
beta = 1/tc
gamma = 1/tr
system = make_system(beta, gamma)
run_simulation(system, update1)
frame = system.results
plot_results(frame.S, frame.I, frame.R)
###Output
_____no_output_____
###Markdown
Metrics Given the results, we can compute metrics that quantify whatever we are interested in, like the total number of sick students, for example.
###Code
def calc_total_infected(system):
"""Fraction of population infected during the simulation.
system: System object with results.
returns: fraction of population
"""
frame = system.results
return frame.S[system.t0] - frame.S[system.t_end]
###Output
_____no_output_____
###Markdown
Here's an example.
###Code
system.beta = 0.333
system.gamma = 0.25
run_simulation(system, update1)
print(system.beta, system.gamma, calc_total_infected(system))
###Output
0.333 0.25 0.467162931836
###Markdown
**Exercise:** Write functions that take a `System` object as a parameter, extract the `results` object from it, and compute the other metrics mentioned in the book:1. The fraction of students who are sick at the peak of the outbreak.2. The day the outbreak peaks.3. The fraction of students who are sick at the end of the semester.Hint: If you have a `TimeSeries` called `I`, you can compute the largest value of the series like this: I.max()And the index of the largest value like this: I.idxmax()You can read about these functions in the `Series` [documentation](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.html).
###Code
def peak_infected(system):
frame = system.results
return frame.I.max()
def outbreak_day(system):
frame = system.results
return frame.I.idxmax()
def sick_at_end(system):
frame = system.results
return frame.I[system.t_end]
print(peak_infected(system), outbreak_day(system), sick_at_end(system))
###Output
0.0435362026876 30 0.000674194315603
###Markdown
What if? We can use this model to evaluate "what if" scenarios. For example, this function models the effect of immunization by moving some fraction of the population from S to R before the simulation starts.
###Code
def add_immunization(system, fraction):
"""Immunize a fraction of the population.
Moves the given fraction from S to R.
system: System object
fraction: number from 0 to 1
"""
system.init.S -= fraction
system.init.R += fraction
###Output
_____no_output_____
###Markdown
Let's start again with the system we used in the previous sections.
###Code
tc = 3 # time between contacts in days
tr = 4 # recovery time in days
beta = 1 / tc # contact rate in per day
gamma = 1 / tr # recovery rate in per day
system = make_system(beta, gamma)
system.beta, system.gamma
###Output
_____no_output_____
###Markdown
And run the model without immunization.
###Code
run_simulation(system, update1)
calc_total_infected(system)
###Output
_____no_output_____
###Markdown
Now with 10% immunization.
###Code
system2 = make_system(beta, gamma)
add_immunization(system2, 0.1)
run_simulation(system2, update1)
calc_total_infected(system2)
###Output
_____no_output_____
###Markdown
10% immunization leads to a drop in infections of 16 percentage points.Here's what the time series looks like for S, with and without immunization.
###Code
plot(system.results.S, '-', label='No immunization')
plot(system2.results.S, 'g--', label='10% immunization')
decorate(xlabel='Time (days)',
ylabel='Fraction susceptible')
savefig('chap05-fig02.pdf')
###Output
Saving figure to file chap05-fig02.pdf
###Markdown
Now we can sweep through a range of values for the fraction of the population who are immunized.
###Code
immunize_array = linspace(0, 1, 11)
for fraction in immunize_array:
system = make_system(beta, gamma)
add_immunization(system, fraction)
run_simulation(system, update1)
print(fraction, calc_total_infected(system))
###Output
0.0 0.468320811029
0.1 0.30650802854
0.2 0.161365457006
0.3 0.0728155898425
0.4 0.035520216753
0.5 0.0196887157825
0.6 0.0116220579983
0.7 0.00683873780062
0.8 0.00369649625371
0.9 0.00148153267227
1.0 -0.000161212109412
###Markdown
This function does the same thing and stores the results in a `Sweep` object.
###Code
def sweep_immunity(immunize_array):
"""Sweeps a range of values for immunity.
immunize_array: array of fraction immunized
returns: Sweep object
"""
sweep = SweepSeries()
for fraction in immunize_array:
system = make_system(beta, gamma)
add_immunization(system, fraction)
run_simulation(system, update1)
sweep[fraction] = calc_total_infected(system)
return sweep
###Output
_____no_output_____
###Markdown
Here's how we run it.
###Code
immunize_array = linspace(0, 1, 21)
infected_sweep = sweep_immunity(immunize_array)
###Output
_____no_output_____
###Markdown
And here's what the results look like.
###Code
plot(infected_sweep)
decorate(xlabel='Fraction immunized',
ylabel='Total fraction infected',
title='Fraction infected vs. immunization rate',
legend=False)
savefig('chap05-fig03.pdf')
###Output
Saving figure to file chap05-fig03.pdf
###Markdown
If 40% of the population is immunized, less than 4% of the population gets sick. Logistic function To model the effect of a hand-washing campaign, I'll use a [generalized logistic function](https://en.wikipedia.org/wiki/Generalised_logistic_function), which is a convenient function for modeling curves that have a generally sigmoid shape. The parameters of the GLF correspond to various features of the curve in a way that makes it easy to find a function that has the shape you want, based on data or background information about the scenario.
###Code
def logistic(x, A=0, B=1, C=1, M=0, K=1, Q=1, nu=1):
"""Computes the generalize logistic function.
A: controls the lower bound
B: controls the steepness of the transition
C: not all that useful, AFAIK
M: controls the location of the transition
K: controls the upper bound
Q: shift the transition left or right
nu: affects the symmetry of the transition
returns: float or array
"""
exponent = -B * (x - M)
denom = C + Q * exp(exponent)
return A + (K-A) / denom ** (1/nu)
###Output
_____no_output_____
###Markdown
The following array represents the range of possible spending.
###Code
spending = linspace(0, 1200, 21)
spending
###Output
_____no_output_____
###Markdown
`compute_factor` computes the reduction in `beta` for a given level of campaign spending.`M` is chosen so the transition happens around \$500.`K` is the maximum reduction in `beta`, 20%.`B` is chosen by trial and error to yield a curve that seems feasible.
###Code
def compute_factor(spending):
"""Reduction factor as a function of spending.
spending: dollars from 0 to 1200
returns: fractional reduction in beta
"""
return logistic(spending, M=500, K=0.2, B=0.01)
###Output
_____no_output_____
###Markdown
Here's what it looks like.
###Code
percent_reduction = compute_factor(spending) * 100
plot(spending, percent_reduction)
decorate(xlabel='Hand-washing campaign spending (USD)',
ylabel='Percent reduction in infection rate',
title='Effect of hand washing on infection rate',
legend=False)
savefig('chap05-fig04.pdf')
###Output
Saving figure to file chap05-fig04.pdf
###Markdown
**Exercise:** Modify the parameters `M`, `K`, and `B`, and see what effect they have on the shape of the curve. Read about the [generalized logistic function on Wikipedia](https://en.wikipedia.org/wiki/Generalised_logistic_function). Modify the other parameters and see what effect they have.
###Code
def compute_factor1(spending):
return logistic(spending, M=800, K=0.6, B=0.07)
percent_reduction1 = compute_factor1(spending)*100
plot(spending, percent_reduction1)
decorate(xlabel='Hand-washing campaign spending (USD)',
ylabel='Percent reduction in infection rate',
title='Effect of hand washing on infection rate',
legend=False)
def compute_factor2(spending):
return logistic(spending, A=0, K=45)
percent_reduction2 = compute_factor2(spending)
plot(spending, percent_reduction2)
decorate(xlabel='Hand-washing campaign spending (USD)',
ylabel='Percent reduction in infection rate',
title='Effect of hand washing on infection rate',
legend=False)
###Output
_____no_output_____
###Markdown
Hand washing Now we can model the effect of a hand-washing campaign by modifying `beta`
###Code
def add_hand_washing(system, spending):
"""Modifies system to model the effect of hand washing.
system: System object
spending: campaign spending in USD
"""
factor = compute_factor(spending)
system.beta *= (1 - factor)
###Output
_____no_output_____
###Markdown
Let's start with the same values of `beta` and `gamma` we've been using.
###Code
tc = 3 # time between contacts in days
tr = 4 # recovery time in days
beta = 1 / tc # contact rate in per day
gamma = 1 / tr # recovery rate in per day
beta, gamma
###Output
_____no_output_____
###Markdown
Now we can sweep different levels of campaign spending.
###Code
spending_array = linspace(0, 1200, 13)
for spending in spending_array:
system = make_system(beta, gamma)
add_hand_washing(system, spending)
run_simulation(system, update1)
print(spending, system.beta, calc_total_infected(system))
###Output
0.0 0.332887143272 0.466770231236
100.0 0.332134252669 0.464141650401
200.0 0.330171608455 0.457217006313
300.0 0.325386471865 0.439887202912
400.0 0.315403905242 0.401630646271
500.0 0.3 0.33703425949
600.0 0.284596094758 0.267317030568
700.0 0.274613528135 0.22184699046
800.0 0.269828391545 0.200791598416
900.0 0.267865747331 0.192392183393
1000.0 0.267112856728 0.189213207818
1100.0 0.26683150821 0.18803175228
1200.0 0.266727403413 0.187595503995
###Markdown
Here's a function that sweeps a range of spending and stores the results in a `Sweep` object.
###Code
def sweep_hand_washing(spending_array):
"""Run simulations with a range of spending.
spending_array: array of dollars from 0 to 1200
returns: Sweep object
"""
sweep = SweepSeries()
for spending in spending_array:
system = make_system(beta, gamma)
add_hand_washing(system, spending)
run_simulation(system, update1)
sweep[spending] = calc_total_infected(system)
return sweep
###Output
_____no_output_____
###Markdown
Here's how we run it.
###Code
spending_array = linspace(0, 1200, 20)
infected_sweep = sweep_hand_washing(spending_array)
###Output
_____no_output_____
###Markdown
And here's what it looks like.
###Code
plot(infected_sweep)
decorate(xlabel='Hand-washing campaign spending (USD)',
ylabel='Total fraction infected',
title='Effect of hand washing on total infections',
legend=False)
savefig('chap05-fig05.pdf')
###Output
Saving figure to file chap05-fig05.pdf
###Markdown
Now let's put it all together to make some public health spending decisions. Optimization Suppose we have \$1200 to spend on any combination of vaccines and a hand-washing campaign.
###Code
num_students = 90
budget = 1200
price_per_dose = 100
max_doses = int(budget / price_per_dose)
dose_array = linrange(max_doses)
max_doses
###Output
_____no_output_____
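###Markdown
With a \$1200 budget at \$100 per dose, `max_doses` is 12, so `dose_array` runs from 0 through 12 (an added note, matching the sweep printed below).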
###Markdown
We can sweep through a range of doses from 0 to `max_doses`, model the effects of immunization and the hand-washing campaign, and run simulations. For each scenario, we compute the fraction of students who get sick.
###Code
for doses in dose_array:
fraction = doses / num_students
spending = budget - doses * price_per_dose
system = make_system(beta, gamma)
add_immunization(system, fraction)
add_hand_washing(system, spending)
run_simulation(system, update1)
print(doses, system.init.S, system.beta, calc_total_infected(system))
###Output
0.0 0.988888888889 0.266727403413 0.187595503995
1.0 0.977777777778 0.26683150821 0.174580718826
2.0 0.966666666667 0.267112856728 0.162909838349
3.0 0.955555555556 0.267865747331 0.153508349478
4.0 0.944444444444 0.269828391545 0.148565092315
5.0 0.933333333333 0.274613528135 0.152945950611
6.0 0.922222222222 0.284596094758 0.174964415024
7.0 0.911111111111 0.3 0.217343161684
8.0 0.9 0.315403905242 0.259071044488
9.0 0.888888888889 0.325386471865 0.278402884103
10.0 0.877777777778 0.330171608455 0.277914534623
11.0 0.866666666667 0.332134252669 0.267357496693
12.0 0.855555555556 0.332887143272 0.252796945636
###Markdown
The following function wraps that loop and stores the results in a `Sweep` object.
###Code
def sweep_doses(dose_array):
"""Runs simulations with different doses and campaign spending.
dose_array: range of values for number of vaccinations
return: Sweep object with total number of infections
"""
sweep = SweepSeries()
for doses in dose_array:
fraction = doses / num_students
spending = budget - doses * price_per_dose
system = make_system(beta, gamma)
add_immunization(system, fraction)
add_hand_washing(system, spending)
run_simulation(system, update1)
sweep[doses] = calc_total_infected(system)
return sweep
###Output
_____no_output_____
###Markdown
Now we can compute the number of infected students for each possible allocation of the budget.
###Code
infected_sweep = sweep_doses(dose_array)
###Output
_____no_output_____
###Markdown
And plot the results.
###Code
plot(infected_sweep)
decorate(xlabel='Doses of vaccine',
ylabel='Total fraction infected',
title='Total infections vs. doses',
legend=False)
savefig('chap05-fig06.pdf')
###Output
Saving figure to file chap05-fig06.pdf
###Markdown
**Exercise:** Suppose the price of the vaccine drops to \$50 per dose. How does that affect the optimal allocation of the spending?
###Code
num_students = 90
budget = 1200
price_per_dose = 50
max_doses = int(budget / price_per_dose)
dose_array = linrange(max_doses)
max_doses
def sweep_doses1(dose_array):
sweep = SweepSeries()
for doses in dose_array:
fraction = doses / num_students
spending = budget - doses * price_per_dose
system = make_system(beta, gamma)
add_immunization(system, fraction)
add_hand_washing(system, spending)
run_simulation(system, update1)
sweep[doses] = calc_total_infected(system)
return sweep
infected_sweep1 = sweep_doses1(dose_array)
plot(infected_sweep1)
decorate(xlabel='Doses of vaccine',
ylabel='Total fraction infected',
title='Total infections vs. doses',
legend=False)
###Output
_____no_output_____
###Markdown
**Exercise:** Suppose we have the option to quarantine infected students. For example, a student who feels ill might be moved to an infirmary, or a private dorm room, until they are no longer infectious. How might you incorporate the effect of quarantine in the SIR model?
###Code
tc = 3 # time between contacts in days
tr = 4 # recovery time in days
beta = 1 / tc # contact rate in per day
gamma = 1 / tr # recovery rate in per day
def add_quarantine(system, fraction):
    """Models quarantine by shortening the effective infectious period.
    system: System object
    fraction: scales the time a student stays infectious (smaller is shorter)
    """
    tr = 5 * fraction
    system.gamma = 1 / tr
system3 = make_system(beta, gamma)
add_immunization(system3, 0.1)
add_quarantine(system3, 0.4)
add_hand_washing(system3, spending)  # `spending` keeps its last value from the sweep above
run_simulation(system3, update1)
print(calc_total_infected(system3))
print(calc_total_infected(system2))  # compare with the earlier 10%-immunization-only run
# When quarantine shortens the effective infectious period (larger gamma),
# the total fraction infected decreases.
###Output
_____no_output_____
###Markdown
Modeling and Simulation in Python Chapter 5 Copyright 2017 Allen Downey License: [Creative Commons Attribution 4.0 International](https://creativecommons.org/licenses/by/4.0)
###Code
# Configure Jupyter so figures appear in the notebook
%matplotlib inline
# Configure Jupyter to display the assigned value after an assignment
%config InteractiveShell.ast_node_interactivity='last_expr_or_assign'
# import functions from the modsim.py module
from modsim import *
###Output
_____no_output_____
###Markdown
Reading data Pandas is a library that provides tools for reading and processing data. `read_html` reads a web page from a file or the Internet and creates one `DataFrame` for each table on the page.
###Code
from pandas import read_html
###Output
_____no_output_____
###Markdown
The data directory contains a downloaded copy of https://en.wikipedia.org/wiki/World_population_estimates. The arguments of `read_html` specify the file to read and how to interpret the tables in the file. The result, `tables`, is a sequence of `DataFrame` objects; `len(tables)` reports the length of the sequence.
###Code
filename = 'data/World_population_estimates.html'
tables = read_html(filename, header=0, index_col=0, decimal='M')
len(tables)
###Output
_____no_output_____
###Markdown
We can select the `DataFrame` we want using the bracket operator. The tables are numbered from 0, so `tables[2]` is actually the third table on the page. `head` selects the header and the first five rows.
###Code
table2 = tables[2]
table2.head()
###Output
_____no_output_____
###Markdown
`tail` selects the last five rows.
###Code
table2.tail()
###Output
_____no_output_____
###Markdown
Long column names are awkward to work with, but we can replace them with abbreviated names.
###Code
table2.columns = ['census', 'prb', 'un', 'maddison',
'hyde', 'tanton', 'biraben', 'mj',
'thomlinson', 'durand', 'clark']
###Output
_____no_output_____
###Markdown
Here's what the DataFrame looks like now.
###Code
table2.head()
###Output
_____no_output_____
###Markdown
The first column, which is labeled `Year`, is special. It is the **index** for this `DataFrame`, which means it contains the labels for the rows. Some of the values use scientific notation; for example, `2.544000e+09` is shorthand for $2.544 \cdot 10^9$ or 2.544 billion. `NaN` is a special value that indicates missing data. Series We can use dot notation to select a column from a `DataFrame`. The result is a `Series`, which is like a `DataFrame` with a single column.
###Code
census = table2.census
census.head()
census.tail()
###Output
_____no_output_____
###Markdown
Like a `DataFrame`, a `Series` contains an index, which labels the rows. `1e9` is scientific notation for $1 \cdot 10^9$ or 1 billion. From here on, we will work in units of billions.
###Code
un = table2.un / 1e9
un.head()
census = table2.census / 1e9
census.head()
###Output
_____no_output_____
###Markdown
Here's what these estimates look like.
###Code
plot(census, ':', label='US Census')
plot(un, '--', label='UN DESA')
decorate(xlabel='Year',
ylabel='World population (billion)')
savefig('figs/chap03-fig01.pdf')
###Output
Saving figure to file figs/chap03-fig01.pdf
###Markdown
The following expression computes the elementwise differences between the two series, then divides through by the UN value to produce [relative errors](https://en.wikipedia.org/wiki/Approximation_error), then finds the largest element. So the largest relative error between the estimates is about 1.3%.
###Code
max(abs(census - un) / un) * 100
###Output
_____no_output_____
###Markdown
**Exercise:** Break down that expression into smaller steps and display the intermediate results, to make sure you understand how it works. 1. Compute the elementwise differences, `census - un`. 2. Compute the absolute differences, `abs(census - un)`. 3. Compute the relative differences, `abs(census - un) / un`. 4. Compute the percent differences, `abs(census - un) / un * 100`.
###Code
census - un
abs(census - un)
abs(census - un) / un
abs(census - un) / un * 100
###Output
_____no_output_____
###Markdown
`max` and `abs` are built-in functions provided by Python, but NumPy also provides versions that are a little more general. When you import `modsim`, you get the NumPy versions of these functions. Constant growth We can select a value from a `Series` using bracket notation. Here's the first element:
###Code
census[1970]
###Output
_____no_output_____
###Markdown
And the last value.
###Code
census[2016]
###Output
_____no_output_____
###Markdown
But rather than "hard code" those dates, we can get the first and last labels from the `Series`:
###Code
t_0 = get_first_label(census)
t_end = get_last_label(census)
elapsed_time = t_end - t_0
###Output
_____no_output_____
###Markdown
And we can get the first and last values:
###Code
p_0 = get_first_value(census)
p_end = get_last_value(census)
###Output
_____no_output_____
###Markdown
Then we can compute the average annual growth in billions of people per year.
###Code
total_growth = p_end - p_0
annual_growth = total_growth / elapsed_time
###Output
_____no_output_____
###Markdown
TimeSeries Now let's create a `TimeSeries` to contain values generated by a linear growth model.
###Code
results = TimeSeries()
###Output
_____no_output_____
###Markdown
Initially the `TimeSeries` is empty, but we can initialize it so the starting value, in 1950, is the 1950 population estimated by the US Census.
###Code
results[t_0] = census[t_0]
results
###Output
_____no_output_____
###Markdown
After that, the population in the model grows by a constant amount each year.
###Code
for t in linrange(t_0, t_end):
results[t+1] = results[t] + annual_growth
###Output
_____no_output_____
###Markdown
Here's what the results look like, compared to the actual data.
###Code
plot(census, ':', label='US Census')
plot(un, '--', label='UN DESA')
plot(results, color='gray', label='model')
decorate(xlabel='Year',
ylabel='World population (billion)',
title='Constant growth')
savefig('figs/chap03-fig02.pdf')
###Output
Saving figure to file figs/chap03-fig02.pdf
###Markdown
The model fits the data pretty well after 1990, but not so well before. Exercises **Optional Exercise:** Try fitting the model using data from 1970 to the present, and see if that does a better job. Hint: 1. Copy the code from above and make a few changes. Test your code after each small change. 2. Make sure your `TimeSeries` starts in 1950, even though the estimated annual growth is based on later data. 3. You might want to add a constant to the starting value to match the data better.
###Code
census[1950]
census[2016]
t_0 = get_first_label(census)
t_end = get_last_label(census)
elapsed_time = t_end - 1970
p_0 = census[1970]
p_end = get_last_value(census)
total_growth = p_end - p_0
annual_growth = total_growth / elapsed_time
results = TimeSeries()
results[t_0] = census[t_0] - 0.4
results
for t in linrange(t_0, t_end):
results[t+1] = results[t] + annual_growth
plot(census, ':', label='US Census')
plot(un, '--', label='UN DESA')
plot(results, color='gray', label='model')
decorate(xlabel='Year',
ylabel='World population (billion)',
title='Constant growth')
savefig('figs/chap03-fig02.pdf')
print(census[1970])
print(get_first_value(census))
###Output
2.557628654
###Markdown
We can select the `DataFrame` we want using the bracket operator. The tables are numbered from 0, so `tables[2]` is actually the third table on the page. `head` selects the header and the first five rows.
###Code
table2 = tables[2]
table2[table2.columns[0]]
###Output
_____no_output_____
###Markdown
`tail` selects the last five rows.
###Code
table2.tail()
###Output
_____no_output_____
###Markdown
Long column names are awkward to work with, but we can replace them with abbreviated names.
###Code
table2.columns = ['census', 'prb', 'un', 'maddison',
'hyde', 'tanton', 'biraben', 'mj',
'thomlinson', 'durand', 'clark']
###Output
_____no_output_____
###Markdown
Here's what the DataFrame looks like now.
###Code
table2.head()
###Output
_____no_output_____
###Markdown
The first column, which is labeled `Year`, is special. It is the **index** for this `DataFrame`, which means it contains the labels for the rows. Some of the values use scientific notation; for example, `2.544000e+09` is shorthand for $2.544 \cdot 10^9$ or 2.544 billion. `NaN` is a special value that indicates missing data. Series We can use dot notation to select a column from a `DataFrame`. The result is a `Series`, which is like a `DataFrame` with a single column.
###Code
census = table2.census
census.head()
census.tail()
###Output
_____no_output_____
###Markdown
Like a `DataFrame`, a `Series` contains an index, which labels the rows. `1e9` is scientific notation for $1 \cdot 10^9$ or 1 billion. From here on, we will work in units of billions.
###Code
un = table2.un / 1e9
un.head()
census = table2.census / 1e9
census.head()
###Output
_____no_output_____
###Markdown
Here's what these estimates look like.
###Code
plot(census, ':', label='US Census')
plot(un, '--', label='UN DESA')
decorate(xlabel='Year',
ylabel='World population (billion)')
# savefig('figs/chap03-fig01.pdf')
###Output
_____no_output_____
###Markdown
The following expression computes the elementwise differences between the two series, then divides through by the UN value to produce [relative errors](https://en.wikipedia.org/wiki/Approximation_error), then finds the largest element. So the largest relative error between the estimates is about 1.3%.
###Code
max(abs(census - un) / un) * 100
###Output
_____no_output_____
###Markdown
**Exercise:** Break down that expression into smaller steps and display the intermediate results, to make sure you understand how it works. 1. Compute the elementwise differences, `census - un`. 2. Compute the absolute differences, `abs(census - un)`. 3. Compute the relative differences, `abs(census - un) / un`. 4. Compute the percent differences, `abs(census - un) / un * 100`.
###Code
inter_diff = census - un
inter_diff_abs = abs(inter_diff)
inter_diff_rel = inter_diff_abs / un
perc_diff = 100 * inter_diff_rel
###Output
_____no_output_____
###Markdown
`max` and `abs` are built-in functions provided by Python, but NumPy also provides versions that are a little more general. When you import `modsim`, you get the NumPy versions of these functions. Constant growth We can select a value from a `Series` using bracket notation. Here's the first element:
###Code
census[1950]
###Output
_____no_output_____
###Markdown
And the last value.
###Code
census[2016]
###Output
_____no_output_____
###Markdown
But rather than "hard code" those dates, we can get the first and last labels from the `Series`:
###Code
t_0 = get_first_label(census)
t_end = get_last_label(census)
elapsed_time = t_end - t_0
###Output
_____no_output_____
###Markdown
And we can get the first and last values:
###Code
p_0 = get_first_value(census)
p_end = get_last_value(census)
###Output
_____no_output_____
###Markdown
Then we can compute the average annual growth in billions of people per year.
###Code
total_growth = p_end - p_0
annual_growth = total_growth / elapsed_time
###Output
_____no_output_____
###Markdown
TimeSeries Now let's create a `TimeSeries` to contain values generated by a linear growth model.
###Code
results = TimeSeries()
###Output
_____no_output_____
###Markdown
Initially the `TimeSeries` is empty, but we can initialize it so the starting value, in 1950, is the 1950 population estimated by the US Census.
###Code
results[t_0] = census[t_0]
results
###Output
_____no_output_____
###Markdown
After that, the population in the model grows by a constant amount each year.
###Code
for t in linrange(t_0, t_end):
results[t+1] = results[t] + annual_growth
###Output
_____no_output_____
###Markdown
Here's what the results look like, compared to the actual data.
###Code
plot(census, ':', label='US Census')
plot(un, '--', label='UN DESA')
plot(results, color='gray', label='model')
decorate(xlabel='Year',
ylabel='World population (billion)',
title='Constant growth')
savefig('figs/chap03-fig02.pdf')
###Output
Saving figure to file figs/chap03-fig02.pdf
###Markdown
The model fits the data pretty well after 1990, but not so well before. Exercises **Optional Exercise:** Try fitting the model using data from 1970 to the present, and see if that does a better job. Hint: 1. Copy the code from above and make a few changes. Test your code after each small change. 2. Make sure your `TimeSeries` starts in 1950, even though the estimated annual growth is based on later data. 3. You might want to add a constant to the starting value to match the data better.
###Code
def model_census_data(data, start_year, end_year):
    """Fits a constant-growth model to `data` between start_year and end_year,
    extrapolates it back to the first year in the index, and plots the model
    against the census and UN estimates.
    """
    avg_growth = (data[end_year] - data[start_year])/(end_year - start_year)
results = TimeSeries()
results[data.index[0]] = data[start_year] - avg_growth * (start_year - data.index[0])
for i in linspace(data.index[1], end_year, (end_year - data.index[0])):
results[i] = results[i-1] + avg_growth
plot(census, ':', label='US Census')
plot(un, '--', label='UN DESA')
plot(results, color='gray', label='model')
decorate(xlabel='Year',
ylabel='World population (billion)',
title='Constant growth')
model_census_data(census, 1970, census.index[-1])
###Output
_____no_output_____
###Markdown
Personal Work The purpose of the following is to redo the models above using only base libraries (pandas, numpy, and matplotlib) rather than modsim.
###Code
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
def lin_population_model(filename, table_choice_index, fit_data_index, fit_year, fit_data_label, \
comp_data_index, comp_data_label):
"""
Takes in HTML table data of world population numbers and fits a linear population model to selected data.
filename: path to the HTML file containing the table data
table_choice_index: Select which table stored in the HTML file to use
fit_data_index: Index of the data column within the table to fit the model to
fit_year: The year to center the linear model around
fit_data_label: Concise descriptor of the data column to which the model is fitted, for plotting
    comp_data_index: Index of the data column to compare model results and original data against
    comp_data_label: Concise descriptor of the comparison data column, for plotting
Plots results of linear model
Returns nothing
"""
# Load the tables
    tables = pd.read_html(filename, header=0, index_col=0, decimal='M')
# Parse tables to find desired data
data_table = tables[table_choice_index]
data_column_to_fit = data_table.columns[fit_data_index]
data_column_to_compare = data_table.columns[comp_data_index]
data_to_fit = data_table[data_column_to_fit]
data_to_compare = data_table[data_column_to_compare]
# Create intermediate index variables
end_year = data_to_fit.index[-1]
start_year = data_to_fit.index[0]
interval = end_year - start_year
# Determine average annual population growth
avg_growth = (data_to_fit[end_year] - data_to_fit[fit_year])/(end_year - fit_year)
# Create blank Series object to hold results
results = pd.Series(np.zeros((interval+1)), index=data_to_fit.index)
# Create initial value for Series object, centered around year of fit
results[start_year] = data_to_fit[fit_year] - avg_growth * (fit_year - start_year)
# Increment population values per year, using avg_growth
    for i in np.linspace(start_year + 1, end_year, interval):
results[i] = results[i-1] + avg_growth
# Plot results using matplotlib
    plt.figure(figsize=[14, 11])  # make the plot larger to more easily see results
# Use the Series object's plot function to generate a plot via matplotlib
data_to_fit.plot()
data_to_compare.plot()
results.plot()
# Add labels, legend, and title
plt.xlabel('Year')
plt.ylabel('World population (billion)')
plt.title('Comparison of Linear World Population Growth Model to measured data')
plt.legend([fit_data_label, comp_data_label, 'Linear Population Model'])
lin_population_model(filename='data/World_population_estimates.html', table_choice_index=2, fit_data_index=0, \
fit_year=1970, fit_data_label='US Census Data', comp_data_index=2, comp_data_label='UN DESA')
###Output
_____no_output_____
###Markdown
Modeling and Simulation in Python Chapter 5 Copyright 2017 Allen Downey License: [Creative Commons Attribution 4.0 International](https://creativecommons.org/licenses/by/4.0)
###Code
# Configure Jupyter so figures appear in the notebook
%matplotlib inline
# Configure Jupyter to display the assigned value after an assignment
%config InteractiveShell.ast_node_interactivity='last_expr_or_assign'
# import functions from the modsim.py module
from modsim import *
###Output
_____no_output_____
###Markdown
Reading data Pandas is a library that provides tools for reading and processing data. `read_html` reads a web page from a file or the Internet and creates one `DataFrame` for each table on the page.
###Code
from pandas import read_html
###Output
_____no_output_____
###Markdown
The data directory contains a downloaded copy of https://en.wikipedia.org/wiki/World_population_estimates. The arguments of `read_html` specify the file to read and how to interpret the tables in the file. The result, `tables`, is a sequence of `DataFrame` objects; `len(tables)` reports the length of the sequence.
###Code
filename = 'data/World_population_estimates.html'
tables = read_html(filename, header=0, index_col=0, decimal='M')
len(tables)
###Output
_____no_output_____
###Markdown
We can select the `DataFrame` we want using the bracket operator. The tables are numbered from 0, so `tables[2]` is actually the third table on the page. `head` selects the header and the first five rows.
###Code
table2 = tables[2]
table2.head()
###Output
_____no_output_____
###Markdown
`tail` selects the last five rows.
###Code
table2.tail()
###Output
_____no_output_____
###Markdown
Long column names are awkward to work with, but we can replace them with abbreviated names.
###Code
table2.columns = ['census', 'prb', 'un', 'maddison',
'hyde', 'tanton', 'biraben', 'mj',
'thomlinson', 'durand', 'clark']
###Output
_____no_output_____
###Markdown
Here's what the DataFrame looks like now.
###Code
table2.head()
###Output
_____no_output_____
###Markdown
The first column, which is labeled `Year`, is special. It is the **index** for this `DataFrame`, which means it contains the labels for the rows. Some of the values use scientific notation; for example, `2.544000e+09` is shorthand for $2.544 \cdot 10^9$ or 2.544 billion. `NaN` is a special value that indicates missing data. Series We can use dot notation to select a column from a `DataFrame`. The result is a `Series`, which is like a `DataFrame` with a single column.
###Code
census = table2.census
census.head()
census.tail()
###Output
_____no_output_____
###Markdown
Like a `DataFrame`, a `Series` contains an index, which labels the rows. `1e9` is scientific notation for $1 \cdot 10^9$ or 1 billion. From here on, we will work in units of billions.
###Code
un = table2.un / 1e9
un.head()
census = table2.census / 1e9
census.head()
###Output
_____no_output_____
###Markdown
Here's what these estimates look like.
###Code
plot(census, ':', label='US Census')
plot(un, '--', label='UN DESA')
decorate(xlabel='Year',
ylabel='World population (billion)')
savefig('figs/chap03-fig01.pdf')
###Output
Saving figure to file figs/chap03-fig01.pdf
###Markdown
The following expression computes the elementwise differences between the two series, then divides through by the UN value to produce [relative errors](https://en.wikipedia.org/wiki/Approximation_error), then finds the largest element. So the largest relative error between the estimates is about 1.3%.
###Code
max(abs(census - un) / un) * 100
###Output
_____no_output_____
###Markdown
**Exercise:** Break down that expression into smaller steps and display the intermediate results, to make sure you understand how it works. 1. Compute the elementwise differences, `census - un`. 2. Compute the absolute differences, `abs(census - un)`. 3. Compute the relative differences, `abs(census - un) / un`. 4. Compute the percent differences, `abs(census - un) / un * 100`.
###Code
census - un
abs(census - un)
abs(census-un)/un
(abs(census -un)/un)*100
# Solution goes here
# Solution goes here
###Output
_____no_output_____
###Markdown
`max` and `abs` are built-in functions provided by Python, but NumPy also provides versions that are a little more general. When you import `modsim`, you get the NumPy versions of these functions. Constant growth We can select a value from a `Series` using bracket notation. Here's the first element:
###Code
census[1950]
###Output
_____no_output_____
###Markdown
And the last value.
###Code
census[2016]
###Output
_____no_output_____
###Markdown
But rather than "hard code" those dates, we can get the first and last labels from the `Series`:
###Code
t_0 = get_first_label(census)
t_end = get_last_label(census)
elapsed_time = t_end - t_0
###Output
_____no_output_____
###Markdown
And we can get the first and last values:
###Code
p_0 = get_first_value(census)
p_end = get_last_value(census)
###Output
_____no_output_____
###Markdown
Then we can compute the average annual growth in billions of people per year.
###Code
total_growth = p_end - p_0
annual_growth = total_growth / elapsed_time
###Output
_____no_output_____
###Markdown
TimeSeries Now let's create a `TimeSeries` to contain values generated by a linear growth model.
###Code
results = TimeSeries()
###Output
_____no_output_____
###Markdown
Initially the `TimeSeries` is empty, but we can initialize it so the starting value, in 1950, is the 1950 population estimated by the US Census.
###Code
results[t_0] = census[t_0]
results
###Output
_____no_output_____
###Markdown
After that, the population in the model grows by a constant amount each year.
###Code
for t in linrange(t_0, t_end):
results[t+1] = results[t] + annual_growth
###Output
_____no_output_____
###Markdown
Here's what the results look like, compared to the actual data.
###Code
plot(census, ':', label='US Census')
plot(un, '--', label='UN DESA')
plot(results, color='gray', label='model')
decorate(xlabel='Year',
ylabel='World population (billion)',
title='Constant growth')
savefig('figs/chap03-fig02.pdf')
###Output
Saving figure to file figs/chap03-fig02.pdf
###Markdown
The model fits the data pretty well after 1990, but not so well before. Exercises **Optional Exercise:** Try fitting the model using data from 1970 to the present, and see if that does a better job. Hint: 1. Copy the code from above and make a few changes. Test your code after each small change. 2. Make sure your `TimeSeries` starts in 1950, even though the estimated annual growth is based on later data. 3. You might want to add a constant to the starting value to match the data better.
###Code
filename = 'data/World_population_estimates.html'
tables = read_html(filename, header=0, index_col=0, decimal='M')
len(tables)
table2.columns = ['census', 'prb', 'un', 'maddison',
'hyde', 'tanton', 'biraben', 'mj',
'thomlinson', 'durand', 'clark']
t_0 = 1970
t_end = get_last_label(census)
elapsed_time = t_end - t_0
p_0 = census[1970]
p_end = get_last_value(census)
total_growth = p_end - p_0
annual_growth = total_growth / elapsed_time
results = TimeSeries()
results[t_0] = census[t_0]
results
for t in linrange(t_0, t_end):
results[t+1] = results[t] + annual_growth
plot(census.loc[1970:2016], ':', label='US Census')
plot(un.loc[1970:2016], '--', label='UN DESA')
plot(results, color='gray', label='model')
decorate(xlabel='Year',
ylabel='World population (billion)',
title='Constant growth')
savefig('figs/chap03-fig02.pdf')
census[1970]
type(census)
###Output
_____no_output_____
###Markdown
Modeling and Simulation in Python Chapter 5: Design Copyright 2017 Allen Downey License: [Creative Commons Attribution 4.0 International](https://creativecommons.org/licenses/by/4.0)
###Code
# If you want the figures to appear in the notebook,
# and you want to interact with them, use
# %matplotlib notebook
# If you want the figures to appear in the notebook,
# and you don't want to interact with them, use
# %matplotlib inline
# If you want the figures to appear in separate windows, use
# %matplotlib qt5
# To switch from one to another, you have to select Kernel->Restart
%matplotlib inline
from modsim import *
###Output
_____no_output_____
###Markdown
SIR implementation We'll use a `State` object to represent the number or fraction of people in each compartment.
###Code
init = State(S=89, I=1, R=0)
init
###Output
_____no_output_____
###Markdown
To convert from number of people to fractions, we divide through by the total.
###Code
init /= sum(init)
init
###Output
_____no_output_____
###Markdown
`make_system` creates a `System` object with the given parameters.
###Code
def make_system(beta, gamma):
"""Make a system object for the SIR model.
    beta: contact rate in per day
    gamma: recovery rate in per day
returns: System object
"""
init = State(S=89, I=1, R=0)
init /= sum(init)
t0 = 0
t_end = 7 * 14
return System(init=init, t0=t0, t_end=t_end,
beta=beta, gamma=gamma)
###Output
_____no_output_____
###Markdown
Here's an example with hypothetical values for `beta` and `gamma`.
###Code
tc = 3 # time between contacts in days
tr = 4 # recovery time in days
beta = 1 / tc # contact rate in per day
gamma = 1 / tr # recovery rate in per day
system = make_system(beta, gamma)
###Output
_____no_output_____
###Markdown
The update function takes the state during the current time step and returns the state during the next time step.
###Code
def update1(state, system):
"""Update the SIR model.
state: State with variables S, I, R
system: System with beta and gamma
returns: State object
"""
s, i, r = state
infected = system.beta * i * s
recovered = system.gamma * i
s -= infected
i += infected - recovered
r += recovered
return State(S=s, I=i, R=r)
###Output
_____no_output_____
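###Markdown
For reference (an added restatement), `update1` implements the discrete SIR difference equations $\Delta s = -\beta s i$, $\Delta i = \beta s i - \gamma i$, and $\Delta r = \gamma i$, where $s$, $i$, and $r$ are the current fractions in each compartment.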
###Markdown
To run a single time step, we call it like this:
###Code
state = update1(init, system)
state
###Output
_____no_output_____
###Markdown
Now we can run a simulation by calling the update function for each time step.
###Code
def run_simulation(system, update_func):
"""Runs a simulation of the system.
system: System object
update_func: function that updates state
returns: State object for final state
"""
state = system.init
for t in linrange(system.t0, system.t_end):
state = update_func(state, system)
return state
###Output
_____no_output_____
###Markdown
The result is the state of the system at `t_end`.
###Code
run_simulation(system, update1)
###Output
_____no_output_____
###Markdown
**Exercise** Suppose the time between contacts is 4 days and the recovery time is 5 days. After 14 weeks, how many students, total, have been infected? Hint: what is the change in `S` between the beginning and the end of the simulation?
###Code
# Solution goes here
tc1 = 4
tr1 = 5
beta1 = 1/ tc1
gamma1 = 1/tr1
system1 = make_system(beta1, gamma1)
final = run_simulation(system1, update1)
final
(init - final)
###Output
_____no_output_____
###Markdown
Using Series objects If we want to store the state of the system at each time step, we can use one `TimeSeries` object for each state variable.
###Code
def run_simulation(system, update_func):
"""Runs a simulation of the system.
Add three Series objects to the System: S, I, R
system: System object
update_func: function that updates state
"""
S = TimeSeries()
I = TimeSeries()
R = TimeSeries()
state = system.init
t0 = system.t0
S[t0], I[t0], R[t0] = state
for t in linrange(system.t0, system.t_end):
state = update_func(state, system)
S[t+1], I[t+1], R[t+1] = state
system.S = S
system.I = I
system.R = R
###Output
_____no_output_____
###Markdown
Here's how we call it.
###Code
tc = 3 # time between contacts in days
tr = 4 # recovery time in days
beta = 1 / tc # contact rate in per day
gamma = 1 / tr # recovery rate in per day
system = make_system(beta, gamma)
run_simulation(system, update1)
###Output
_____no_output_____
###Markdown
And then we can plot the results.
###Code
def plot_results(S, I, R):
"""Plot the results of a SIR model.
S: TimeSeries
I: TimeSeries
R: TimeSeries
"""
plot(S, '--', color='blue', label='Susceptible')
plot(I, '-', color='red', label='Infected')
plot(R, ':', color='green', label='Recovered')
decorate(xlabel='Time (days)',
ylabel='Fraction of population')
###Output
_____no_output_____
###Markdown
Here's what they look like.
###Code
plot_results(system.S, system.I, system.R)
savefig('chap05-fig01.pdf')
###Output
Saving figure to file chap05-fig01.pdf
###Markdown
Using a DataFrame Instead of making three `TimeSeries` objects, we can use one `DataFrame`. We have to use `loc` to indicate which row we want to assign the results to. But then Pandas does the right thing, matching up the state variables with the columns of the `DataFrame`.
###Code
def run_simulation(system, update_func):
"""Runs a simulation of the system.
Add a DataFrame to the System: results
system: System object
update_func: function that updates state
"""
frame = DataFrame(columns=system.init.index)
frame.loc[system.t0] = system.init
for t in linrange(system.t0, system.t_end):
frame.loc[t+1] = update_func(frame.loc[t], system)
system.results = frame
###Output
_____no_output_____
###Markdown
Here's how we run it, and what the result looks like.
###Code
tc = 3 # time between contacts in days
tr = 4 # recovery time in days
beta = 1 / tc # contact rate in per day
gamma = 1 / tr # recovery rate in per day
system = make_system(beta, gamma)
run_simulation(system, update1)
system.results.head()
###Output
_____no_output_____
###Markdown
We can extract the results and plot them.
###Code
frame = system.results
plot_results(frame.S, frame.I, frame.R)
###Output
_____no_output_____
###Markdown
**Exercise** Suppose the time between contacts is 4 days and the recovery time is 5 days. Simulate this scenario for 14 days and plot the results.
###Code
# Solution goes here
tc = 4
tr = 5
beta = 1 / tc
gamma = 1/ tr
system = make_system(beta, gamma)
system.t_end = 14
run_simulation(system, update1)
system.results.head()
frame = system.results
plot_results(frame.S, frame.I, frame.R)
###Output
_____no_output_____
###Markdown
Metrics Given the results, we can compute metrics that quantify whatever we are interested in, like the total number of sick students, for example.
###Code
def calc_total_infected(system):
"""Fraction of population infected during the simulation.
system: System object with results.
returns: fraction of population
"""
frame = system.results
return frame.S[system.t0] - frame.S[system.t_end]
###Output
_____no_output_____
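###Markdown
This works because the only way to leave `S` in the SIR model is to become infected, so the drop in `S` over the whole run equals the total fraction infected (an added note).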
###Markdown
Here's an example.
###Code
system.beta = 0.333
system.gamma = 0.25
run_simulation(system, update1)
print(system.beta, system.gamma, calc_total_infected(system))
###Output
0.333 0.25 0.0809201122723
###Markdown
**Exercise:** Write functions that take a `System` object as a parameter, extract the `results` object from it, and compute the other metrics mentioned in the book: 1. The fraction of students who are sick at the peak of the outbreak. 2. The day the outbreak peaks. 3. The fraction of students who are sick at the end of the semester. Hint: If you have a `TimeSeries` called `I`, you can compute the largest value of the series like this: `I.max()`. And the index of the largest value like this: `I.idxmax()`. You can read about these functions in the `Series` [documentation](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.html).
###Code
# Solution goes here
def sick_at_peak(system):
frame = system.results
return frame.I.max()
# Solution goes here
system.beta = 0.333
system.gamma = 0.25
run_simulation(system, update1)
print(sick_at_peak(system))
# Solution goes here
def outbreak_peak(system):
frame = system.results
return frame.I.idxmax()
# Solution goes here
system.beta = 0.333
system.gamma = 0.25
run_simulation(system, update1)
print(outbreak_peak(system))
# Solution goes here
def sick_at_end(system):
frame = system.results
return frame.I[system.t_end]
# Solution goes here
system.beta = 0.333
system.gamma = 0.25
run_simulation(system, update1)
print(sick_at_end(system))
###Output
0.0281470380348
###Markdown
What if? We can use this model to evaluate "what if" scenarios. For example, this function models the effect of immunization by moving some fraction of the population from S to R before the simulation starts.
###Code
def add_immunization(system, fraction):
"""Immunize a fraction of the population.
Moves the given fraction from S to R.
system: System object
fraction: number from 0 to 1
"""
system.init.S -= fraction
system.init.R += fraction
###Output
_____no_output_____
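###Markdown
For example (an added check): with `fraction=0.1`, `init.S` drops from $89/90 \approx 0.989$ to about $0.889$, and `init.R` rises from 0 to 0.1.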
###Markdown
Let's start again with the system we used in the previous sections.
###Code
tc = 3 # time between contacts in days
tr = 4 # recovery time in days
beta = 1 / tc # contact rate in per day
gamma = 1 / tr # recovery rate in per day
system = make_system(beta, gamma)
system.beta, system.gamma
###Output
_____no_output_____
###Markdown
And run the model without immunization.
###Code
run_simulation(system, update1)
calc_total_infected(system)
###Output
_____no_output_____
###Markdown
Now with 10% immunization.
###Code
system2 = make_system(beta, gamma)
add_immunization(system2, 0.1)
run_simulation(system2, update1)
calc_total_infected(system2)
###Output
_____no_output_____
###Markdown
10% immunization leads to a drop in infections of 16 percentage points. Here's what the time series looks like for S, with and without immunization.
###Code
plot(system.results.S, '-', label='No immunization')
plot(system2.results.S, 'g--', label='10% immunization')
decorate(xlabel='Time (days)',
ylabel='Fraction susceptible')
savefig('chap05-fig02.pdf')
###Output
Saving figure to file chap05-fig02.pdf
###Markdown
Now we can sweep through a range of values for the fraction of the population who are immunized.
###Code
immunize_array = linspace(0, 1, 11)
for fraction in immunize_array:
system = make_system(beta, gamma)
add_immunization(system, fraction)
run_simulation(system, update1)
print(fraction, calc_total_infected(system))
###Output
0.0 0.468320811029
0.1 0.30650802854
0.2 0.161365457006
0.3 0.0728155898425
0.4 0.035520216753
0.5 0.0196887157825
0.6 0.0116220579983
0.7 0.00683873780062
0.8 0.00369649625371
0.9 0.00148153267227
1.0 -0.000161212109412
###Markdown
This function does the same thing and stores the results in a `Sweep` object.
###Code
def sweep_immunity(immunize_array):
"""Sweeps a range of values for immunity.
immunize_array: array of fraction immunized
returns: Sweep object
"""
sweep = SweepSeries()
for fraction in immunize_array:
system = make_system(beta, gamma)
add_immunization(system, fraction)
run_simulation(system, update1)
sweep[fraction] = calc_total_infected(system)
return sweep
###Output
_____no_output_____
###Markdown
Here's how we run it.
###Code
immunize_array = linspace(0, 1, 21)
infected_sweep = sweep_immunity(immunize_array)
###Output
_____no_output_____
###Markdown
And here's what the results look like.
###Code
plot(infected_sweep)
decorate(xlabel='Fraction immunized',
ylabel='Total fraction infected',
title='Fraction infected vs. immunization rate',
legend=False)
savefig('chap05-fig03.pdf')
###Output
Saving figure to file chap05-fig03.pdf
###Markdown
If 40% of the population is immunized, less than 4% of the population gets sick. Logistic function To model the effect of a hand-washing campaign, I'll use a [generalized logistic function](https://en.wikipedia.org/wiki/Generalised_logistic_function), which is a convenient function for modeling curves that have a generally sigmoid shape. The parameters of the GLF correspond to various features of the curve in a way that makes it easy to find a function that has the shape you want, based on data or background information about the scenario.
###Code
def logistic(x, A=0, B=1, C=1, M=0, K=1, Q=1, nu=1):
"""Computes the generalize logistic function.
A: controls the lower bound
B: controls the steepness of the transition
C: not all that useful, AFAIK
M: controls the location of the transition
K: controls the upper bound
Q: shift the transition left or right
nu: affects the symmetry of the transition
returns: float or array
"""
exponent = -B * (x - M)
denom = C + Q * exp(exponent)
return A + (K-A) / denom ** (1/nu)
###Output
_____no_output_____
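###Markdown
For intuition (an added check): with the defaults $C = Q = \nu = 1$, setting $x = M$ makes the exponent zero, so the denominator is 2 and the function returns the value halfway between `A` and `K`.
###Code
# Added check: at x == M the result is halfway between A (0) and K (0.2)
logistic(500, M=500, K=0.2)
###Output
_____no_output_____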
###Markdown
The following array represents the range of possible spending.
###Code
spending = linspace(0, 1200, 21)
spending
###Output
_____no_output_____
###Markdown
`compute_factor` computes the reduction in `beta` for a given level of campaign spending. Here `M` is chosen so the transition happens around \$300, `K=1` allows `beta` to be reduced by up to 100%, and `B` is chosen by trial and error to yield a steep but feasible curve.
###Code
def compute_factor(spending):
"""Reduction factor as a function of spending.
spending: dollars from 0 to 1200
returns: fractional reduction in beta
"""
return logistic(spending, M=300, K=1, B=.05)
###Output
_____no_output_____
###Markdown
Here's what it looks like.
###Code
percent_reduction = compute_factor(spending) * 100
plot(spending, percent_reduction)
decorate(xlabel='Hand-washing campaign spending (USD)',
ylabel='Percent reduction in infection rate',
title='Effect of hand washing on infection rate',
legend=False)
savefig('chap05-fig04.pdf')
###Output
Saving figure to file chap05-fig04.pdf
###Markdown
**Exercise:** Modify the parameters `M`, `K`, and `B`, and see what effect they have on the shape of the curve. Read about the [generalized logistic function on Wikipedia](https://en.wikipedia.org/wiki/Generalised_logistic_function). Modify the other parameters and see what effect they have. Hand washing Now we can model the effect of a hand-washing campaign by modifying `beta`.
###Code
def add_hand_washing(system, spending):
"""Modifies system to model the effect of hand washing.
system: System object
spending: campaign spending in USD
"""
factor = compute_factor(spending)
system.beta *= (1 - factor)
###Output
_____no_output_____
###Markdown
Let's start with the same values of `beta` and `gamma` we've been using.
###Code
tc = 3 # time between contacts in days
tr = 4 # recovery time in days
beta = 1 / tc # contact rate in per day
gamma = 1 / tr # recovery rate in per day
beta, gamma
###Output
_____no_output_____
###Markdown
Now we can sweep different levels of campaign spending.
###Code
spending_array = linspace(0, 1200, 13)
for spending in spending_array:
system = make_system(beta, gamma)
add_hand_washing(system, spending)
run_simulation(system, update1)
print(spending, system.beta, calc_total_infected(system))
###Output
0.0 0.333333231366 0.468320457287
100.0 0.33331820071 0.468268310467
200.0 0.331102383025 0.460514096506
300.0 0.166666666667 0.0208697132287
400.0 0.00223095030809 9.89203320988e-05
500.0 1.51326229008e-05 6.65127738886e-07
600.0 1.01967408998e-07 4.48153214538e-09
700.0 6.87051230723e-10 3.01961788907e-11
800.0 4.62933395321e-12 2.03392858111e-13
900.0 3.11602595578e-14 1.22124532709e-15
1000.0 2.22044604925e-16 0.0
1100.0 0.0 0.0
1200.0 0.0 0.0
###Markdown
Here's a function that sweeps a range of spending and stores the results in a `Sweep` object.
###Code
def sweep_hand_washing(spending_array):
"""Run simulations with a range of spending.
spending_array: array of dollars from 0 to 1200
returns: Sweep object
"""
sweep = SweepSeries()
for spending in spending_array:
system = make_system(beta, gamma)
add_hand_washing(system, spending)
run_simulation(system, update1)
sweep[spending] = calc_total_infected(system)
return sweep
###Output
_____no_output_____
###Markdown
Here's how we run it.
###Code
spending_array = linspace(0, 1200, 20)
infected_sweep = sweep_hand_washing(spending_array)
###Output
_____no_output_____
###Markdown
And here's what it looks like.
###Code
plot(infected_sweep)
decorate(xlabel='Hand-washing campaign spending (USD)',
ylabel='Total fraction infected',
title='Effect of hand washing on total infections',
legend=False)
savefig('chap05-fig05.pdf')
###Output
Saving figure to file chap05-fig05.pdf
###Markdown
Now let's put it all together to make some public health spending decisions. Optimization Suppose we have \$1200 to spend on any combination of vaccines and a hand-washing campaign.
###Code
num_students = 90
budget = 1200
price_per_dose = 50
max_doses = int(budget / price_per_dose)
dose_array = linrange(max_doses)
max_doses
###Output
_____no_output_____
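###Markdown
At \$50 per dose the same \$1200 budget buys up to 24 doses, so the sweep below covers doses 0 through 24 (an added note).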
###Markdown
We can sweep through a range of doses from 0 to `max_doses`, model the effects of immunization and the hand-washing campaign, and run simulations. For each scenario, we compute the fraction of students who get sick.
###Code
for doses in dose_array:
fraction = doses / num_students
spending = budget - doses * price_per_dose
system = make_system(beta, gamma)
add_immunization(system, fraction)
add_hand_washing(system, spending)
run_simulation(system, update1)
print(doses, system.init.S, system.beta, calc_total_infected(system))
###Output
0.0 0.988888888889 0.0 0.0
1.0 0.977777777778 0.0 0.0
2.0 0.966666666667 0.0 0.0
3.0 0.955555555556 0.0 0.0
4.0 0.944444444444 2.22044604925e-16 0.0
5.0 0.933333333333 2.59052039079e-15 0.0
6.0 0.922222222222 3.11602595578e-14 1.22124532709e-15
7.0 0.911111111111 3.79992333895e-13 1.52100554374e-14
8.0 0.9 4.62933395321e-12 1.85074178205e-13
9.0 0.888888888889 5.63965911008e-11 2.22810658812e-12
10.0 0.877777777778 6.87051230723e-10 2.68034483497e-11
11.0 0.866666666667 8.36999699179e-09 3.223995515e-10
12.0 0.855555555556 1.01967408998e-07 3.87728060769e-09
13.0 0.844444444444 1.24221309472e-06 4.66215250849e-08
14.0 0.833333333333 1.51326229008e-05 5.60495625912e-07
15.0 0.822222222222 0.000184259545641 6.73749456803e-06
16.0 0.811111111111 0.00223095030809 8.10072554519e-05
17.0 0.8 0.0252860600071 0.000977646325715
18.0 0.788888888889 0.166666666667 0.0121400118185
19.0 0.777777777778 0.308047273326 0.095103145345
20.0 0.766666666667 0.331102383025 0.130925730992
21.0 0.755555555556 0.333149073788 0.124029099319
22.0 0.744444444444 0.33331820071 0.113756010406
23.0 0.733333333333 0.33333209112 0.104005873278
24.0 0.722222222222 0.333333231366 0.0950601556148
###Markdown
The following function wraps that loop and stores the results in a `Sweep` object.
###Code
def sweep_doses(dose_array):
"""Runs simulations with different doses and campaign spending.
dose_array: range of values for number of vaccinations
return: Sweep object with total number of infections
"""
sweep = SweepSeries()
for doses in dose_array:
fraction = doses / num_students
spending = budget - doses * price_per_dose
system = make_system(beta, gamma)
add_immunization(system, fraction)
add_hand_washing(system, spending)
run_simulation(system, update1)
sweep[doses] = calc_total_infected(system)
return sweep
###Output
_____no_output_____
###Markdown
Now we can compute the number of infected students for each possible allocation of the budget.
###Code
infected_sweep = sweep_doses(dose_array)
###Output
_____no_output_____
###Markdown
And plot the results.
###Code
plot(infected_sweep)
decorate(xlabel='Doses of vaccine',
ylabel='Total fraction infected',
title='Total infections vs. doses',
legend=False)
savefig('chap05-fig06.pdf')
###Output
Saving figure to file chap05-fig06.pdf
###Markdown
**Exercise:** Suppose the price of the vaccine drops to \$50 per dose. How does that affect the optimal allocation of the spending? **Exercise:** Suppose we have the option to quarantine infected students. For example, a student who feels ill might be moved to an infirmary, or a private dorm room, until they are no longer infectious. How might you incorporate the effect of quarantine in the SIR model?
###Code
# Solution goes here
tr = 4          # recovery time in days
tc = tr         # quarantine: no further contacts until recovered
beta = 1 / tc
gamma = 1 / tr
system = make_system(beta, gamma)
run_simulation(system, update1)
print(system.results)
frame = system.results
plot_results(frame.S, frame.I, frame.R)
###Output
_____no_output_____
###Markdown
Modeling and Simulation in Python Chapter 5 Copyright 2017 Allen Downey License: [Creative Commons Attribution 4.0 International](https://creativecommons.org/licenses/by/4.0)
###Code
# Configure Jupyter so figures appear in the notebook
%matplotlib inline
# Configure Jupyter to display the assigned value after an assignment
%config InteractiveShell.ast_node_interactivity='last_expr_or_assign'
# import functions from the modsim.py module
from modsim import *
###Output
_____no_output_____
###Markdown
Reading data Pandas is a library that provides tools for reading and processing data. `read_html` reads a web page from a file or the Internet and creates one `DataFrame` for each table on the page.
###Code
from pandas import read_html
###Output
_____no_output_____
###Markdown
The data directory contains a downloaded copy of https://en.wikipedia.org/wiki/World_population_estimates. The arguments of `read_html` specify the file to read and how to interpret the tables in the file. The result, `tables`, is a sequence of `DataFrame` objects; `len(tables)` reports the length of the sequence.
###Code
filename = 'data/World_population_estimates.html'
tables = read_html(filename, header=0, index_col=0, decimal='M')
tables[2]
###Output
_____no_output_____
###Markdown
Modeling and Simulation in Python Chapter 5 Copyright 2017 Allen Downey License: [Creative Commons Attribution 4.0 International](https://creativecommons.org/licenses/by/4.0)
###Code
# Configure Jupyter so figures appear in the notebook
%matplotlib inline
# Configure Jupyter to display the assigned value after an assignment
%config InteractiveShell.ast_node_interactivity='last_expr_or_assign'
# import functions from the modsim.py module
from modsim import *
###Output
_____no_output_____
###Markdown
Reading data Pandas is a library that provides tools for reading and processing data. `read_html` reads a web page from a file or the Internet and creates one `DataFrame` for each table on the page.
###Code
from pandas import read_html
###Output
_____no_output_____
###Markdown
The data directory contains a downloaded copy of https://en.wikipedia.org/wiki/World_population_estimates. The arguments of `read_html` specify the file to read and how to interpret the tables in the file. The result, `tables`, is a sequence of `DataFrame` objects; `len(tables)` reports the length of the sequence.
###Code
filename = 'data/World_population_estimates.html'
tables = read_html(filename, header=0, index_col=0, decimal='M')
len(tables)
###Output
_____no_output_____
###Markdown
We can select the `DataFrame` we want using the bracket operator. The tables are numbered from 0, so `tables[2]` is actually the third table on the page. `head` selects the header and the first five rows.
###Code
table2 = tables[2]
table2.head()
###Output
_____no_output_____
###Markdown
`tail` selects the last five rows.
###Code
table2.tail()
###Output
_____no_output_____
###Markdown
Long column names are awkward to work with, but we can replace them with abbreviated names.
###Code
table2.columns = ['census', 'prb', 'un', 'maddison',
'hyde', 'tanton', 'biraben', 'mj',
'thomlinson', 'durand', 'clark']
###Output
_____no_output_____
###Markdown
Here's what the DataFrame looks like now.
###Code
table2.head()
###Output
_____no_output_____
###Markdown
The first column, which is labeled `Year`, is special. It is the **index** for this `DataFrame`, which means it contains the labels for the rows. Some of the values use scientific notation; for example, `2.544000e+09` is shorthand for $2.544 \cdot 10^9$ or 2.544 billion. `NaN` is a special value that indicates missing data. Series We can use dot notation to select a column from a `DataFrame`. The result is a `Series`, which is like a `DataFrame` with a single column.
###Code
census = table2.census
census.head()
census.tail()
###Output
_____no_output_____
###Markdown
Like a `DataFrame`, a `Series` contains an index, which labels the rows. `1e9` is scientific notation for $1 \cdot 10^9$ or 1 billion. From here on, we will work in units of billions.
###Code
un = table2.un / 1e9
un.head()
census = table2.census / 1e9
census.head()
###Output
_____no_output_____
###Markdown
Here's what these estimates look like.
###Code
plot(census, ':', label='US Census')
plot(un, '--', label='UN DESA')
decorate(xlabel='Year',
ylabel='World population (billion)')
savefig('figs/chap03-fig01.pdf')
###Output
Saving figure to file figs/chap03-fig01.pdf
###Markdown
The following expression computes the elementwise differences between the two series, then divides through by the UN value to produce [relative errors](https://en.wikipedia.org/wiki/Approximation_error), then finds the largest element. So the largest relative error between the estimates is about 1.3%.
###Code
max(abs(census - un) / un) * 100
###Output
_____no_output_____
###Markdown
**Exercise:** Break down that expression into smaller steps and display the intermediate results, to make sure you understand how it works. 1. Compute the elementwise differences, `census - un`. 2. Compute the absolute differences, `abs(census - un)`. 3. Compute the relative differences, `abs(census - un) / un`. 4. Compute the percent differences, `abs(census - un) / un * 100`.
###Code
census - un
abs(census-un)
abs(census-un)/un
abs(census - un) / un * 100
###Output
_____no_output_____
###Markdown
`max` and `abs` are built-in functions provided by Python, but NumPy also provides versions that are a little more general. When you import `modsim`, you get the NumPy versions of these functions. Constant growth We can select a value from a `Series` using bracket notation. Here's the first element:
###Code
census[1950]
###Output
_____no_output_____
###Markdown
And the last value.
###Code
census[2016]
###Output
_____no_output_____
###Markdown
But rather than "hard code" those dates, we can get the first and last labels from the `Series`:
###Code
t_0 = get_first_label(census)
t_end = get_last_label(census)
elapsed_time = t_end - t_0
###Output
_____no_output_____
###Markdown
And we can get the first and last values:
###Code
p_0 = get_first_value(census)
p_end = get_last_value(census)
###Output
_____no_output_____
###Markdown
Then we can compute the average annual growth in billions of people per year.
###Code
total_growth = p_end - p_0
annual_growth = total_growth / elapsed_time
###Output
_____no_output_____
###Markdown
TimeSeries Now let's create a `TimeSeries` to contain values generated by a linear growth model.
###Code
results = TimeSeries()
###Output
_____no_output_____
###Markdown
Initially the `TimeSeries` is empty, but we can initialize it so the starting value, in 1950, is the 1950 population estimated by the US Census.
###Code
results[t_0] = census[t_0]
results
###Output
_____no_output_____
###Markdown
After that, the population in the model grows by a constant amount each year.
###Code
for t in linrange(t_0, t_end):
results[t+1] = results[t] + annual_growth
###Output
_____no_output_____
###Markdown
Here's what the results look like, compared to the actual data.
###Code
plot(census, ':', label='US Census')
plot(un, '--', label='UN DESA')
plot(results, color='gray', label='model')
decorate(xlabel='Year',
ylabel='World population (billion)',
title='Constant growth')
savefig('figs/chap03-fig02.pdf')
###Output
Saving figure to file figs/chap03-fig02.pdf
###Markdown
The model fits the data pretty well after 1990, but not so well before. Exercises **Optional Exercise:** Try fitting the model using data from 1970 to the present, and see if that does a better job. Hint: 1. Copy the code from above and make a few changes. Test your code after each small change. 2. Make sure your `TimeSeries` starts in 1950, even though the estimated annual growth is based on later data. 3. You might want to add a constant to the starting value to match the data better.
###Code
p_1 = census[1970]
elapsed_time = t_end - 1970
total_growth = p_end - p_1
annual_growth = total_growth / elapsed_time
results[t_0] = census[t_0] - abs(results[1970]-census[1970])
for t in linrange(t_0, t_end):
results[t+1] = results[t] + annual_growth
plot(census, ':', label='US Census')
plot(un, '-', label='UN DESA')
plot(results, color='gray', label='model')
decorate(xlabel='Year',
ylabel='World population (billion)',
title='Constant growth')
savefig('figs/chap03-fig02.pdf')
census[1970]
results[1970]
abs(results[1970]-census[1970])
###Output
_____no_output_____
###Markdown
Modeling and Simulation in PythonChapter 5Copyright 2017 Allen DowneyLicense: [Creative Commons Attribution 4.0 International](https://creativecommons.org/licenses/by/4.0)
###Code
# Configure Jupyter so figures appear in the notebook
%matplotlib inline
# Configure Jupyter to display the assigned value after an assignment
%config InteractiveShell.ast_node_interactivity='last_expr_or_assign'
# import functions from the modsim.py module
from modsim import *
###Output
_____no_output_____
###Markdown
Reading dataPandas is a library that provides tools for reading and processing data. `read_html` reads a web page from a file or the Internet and creates one `DataFrame` for each table on the page.
###Code
from pandas import read_html
###Output
_____no_output_____
###Markdown
The data directory contains a downloaded copy of https://en.wikipedia.org/wiki/World_population_estimatesThe arguments of `read_html` specify the file to read and how to interpret the tables in the file. The result, `tables`, is a sequence of `DataFrame` objects; `len(tables)` reports the length of the sequence.
###Code
filename = 'data/World_population_estimates.html'
tables = read_html(filename, header=0, index_col=0, decimal='M')
len(tables)
###Output
_____no_output_____
###Markdown
We can select the `DataFrame` we want using the bracket operator. The tables are numbered from 0, so `tables[2]` is actually the third table on the page.`head` selects the header and the first five rows.
###Code
table2 = tables[2]
table2.head()
###Output
_____no_output_____
###Markdown
`tail` selects the last five rows.
###Code
table2.tail()
###Output
_____no_output_____
###Markdown
Long column names are awkward to work with, but we can replace them with abbreviated names.
###Code
table2.columns = ['census', 'prb', 'un', 'maddison',
'hyde', 'tanton', 'biraben', 'mj',
'thomlinson', 'durand', 'clark']
###Output
_____no_output_____
###Markdown
Here's what the DataFrame looks like now.
###Code
table2.head()
###Output
_____no_output_____
###Markdown
The first column, which is labeled `Year`, is special. It is the **index** for this `DataFrame`, which means it contains the labels for the rows.Some of the values use scientific notation; for example, `2.544000e+09` is shorthand for $2.544 \cdot 10^9$ or 2.544 billion.`NaN` is a special value that indicates missing data. SeriesWe can use dot notation to select a column from a `DataFrame`. The result is a `Series`, which is like a `DataFrame` with a single column.
###Code
census = table2.census
census.head()
census.tail()
###Output
_____no_output_____
###Markdown
Like a `DataFrame`, a `Series` contains an index, which labels the rows.`1e9` is scientific notation for $1 \cdot 10^9$ or 1 billion. From here on, we will work in units of billions.
###Code
un = table2.un / 1e9
un.head()
census = table2.census / 1e9
census.head()
###Output
_____no_output_____
###Markdown
Here's what these estimates look like.
###Code
plot(census, ':', label='US Census')
plot(un, '--', label='UN DESA')
decorate(xlabel='Year',
ylabel='World population (billion)')
savefig('figs/chap03-fig01.pdf')
###Output
Saving figure to file figs/chap03-fig01.pdf
###Markdown
The following expression computes the elementwise differences between the two series, then divides through by the UN value to produce [relative errors](https://en.wikipedia.org/wiki/Approximation_error), then finds the largest element.So the largest relative error between the estimates is about 1.3%.
###Code
max(abs(census - un) / un) * 100
###Output
_____no_output_____
###Markdown
**Exercise:** Break down that expression into smaller steps and display the intermediate results, to make sure you understand how it works.1. Compute the elementwise differences, `census - un`2. Compute the absolute differences, `abs(census - un)`3. Compute the relative differences, `abs(census - un) / un`4. Compute the percent differences, `abs(census - un) / un * 100`
###Code
census - un
abs(census - un)
abs(census - un) / un
abs(census - un) / un * 100
###Output
_____no_output_____
###Markdown
`max` and `abs` are built-in functions provided by Python, but NumPy also provides versions that are a little more general. When you import `modsim`, you get the NumPy versions of these functions. Constant growth We can select a value from a `Series` using bracket notation. Here's the first element:
###Code
census[1950]
###Output
_____no_output_____
###Markdown
And the last value.
###Code
census[2016]
###Output
_____no_output_____
###Markdown
But rather than "hard code" those dates, we can get the first and last labels from the `Series`:
###Code
t_0 = get_first_label(census)
t_end = get_last_label(census)
elapsed_time = t_end - t_0
###Output
_____no_output_____
###Markdown
And we can get the first and last values:
###Code
p_0 = get_first_value(census)
p_end = get_last_value(census)
###Output
_____no_output_____
###Markdown
Then we can compute the average annual growth in billions of people per year.
###Code
total_growth = p_end - p_0
annual_growth = total_growth / elapsed_time
###Output
_____no_output_____
###Markdown
TimeSeries Now let's create a `TimeSeries` to contain values generated by a linear growth model.
###Code
results = TimeSeries()
###Output
_____no_output_____
###Markdown
Initially the `TimeSeries` is empty, but we can initialize it so the starting value, in 1950, is the 1950 population estimated by the US Census.
###Code
results[t_0] = census[t_0]
results
###Output
_____no_output_____
###Markdown
After that, the population in the model grows by a constant amount each year.
###Code
for t in linrange(t_0, t_end):
results[t+1] = results[t] + annual_growth
###Output
_____no_output_____
###Markdown
Here's what the results look like, compared to the actual data.
###Code
plot(census, ':', label='US Census')
plot(un, '--', label='UN DESA')
plot(results, color='gray', label='model')
decorate(xlabel='Year',
ylabel='World population (billion)',
title='Constant growth')
savefig('figs/chap03-fig02.pdf')
###Output
Saving figure to file figs/chap03-fig02.pdf
###Markdown
The model fits the data pretty well after 1990, but not so well before. Exercises**Optional Exercise:** Try fitting the model using data from 1970 to the present, and see if that does a better job.Hint: 1. Copy the code from above and make a few changes. Test your code after each small change.2. Make sure your `TimeSeries` starts in 1950, even though the estimated annual growth is based on later data.3. You might want to add a constant to the starting value to match the data better.
###Code
total_growth = p_end - census[1970]
elapsed_time = t_end - 1970
annual_growth = total_growth / elapsed_time
results = TimeSeries()
results[t_0] = census[t_0] - 0.4
for t in linrange(t_0, t_end):
results[t+1] = results[t] + annual_growth
plot(census, ':', label='US Census')
plot(un, '--', label='UN DESA')
plot(results, color='gray', label='model')
decorate(xlabel='Year',
ylabel='World population (billion)',
title='Constant growth')
savefig('figs/chap03-fig02.pdf')
###Output
Saving figure to file figs/chap03-fig02.pdf
###Markdown
Modeling and Simulation in PythonChapter 5Copyright 2017 Allen DowneyLicense: [Creative Commons Attribution 4.0 International](https://creativecommons.org/licenses/by/4.0)
###Code
# Configure Jupyter so figures appear in the notebook
%matplotlib inline
# Configure Jupyter to display the assigned value after an assignment
%config InteractiveShell.ast_node_interactivity='last_expr_or_assign'
# import functions from the modsim.py module
from modsim import *
###Output
_____no_output_____
###Markdown
Reading dataPandas is a library that provides tools for reading and processing data. `read_html` reads a web page from a file or the Internet and creates one `DataFrame` for each table on the page.
###Code
from pandas import read_html
###Output
_____no_output_____
###Markdown
The data directory contains a downloaded copy of https://en.wikipedia.org/wiki/World_population_estimatesThe arguments of `read_html` specify the file to read and how to interpret the tables in the file. The result, `tables`, is a sequence of `DataFrame` objects; `len(tables)` reports the length of the sequence.
###Code
filename = 'data/World_population_estimates.html'
tables = read_html(filename, header=0, index_col=0, decimal='M')
len(tables)
###Output
_____no_output_____
###Markdown
We can select the `DataFrame` we want using the bracket operator. The tables are numbered from 0, so `tables[2]` is actually the third table on the page.`head` selects the header and the first five rows.
###Code
table2 = tables[2]
table2.head()
###Output
_____no_output_____
###Markdown
`tail` selects the last five rows.
###Code
table2.tail()
###Output
_____no_output_____
###Markdown
Long column names are awkward to work with, but we can replace them with abbreviated names.
###Code
table2.columns = ['census', 'prb', 'un', 'maddison',
'hyde', 'tanton', 'biraben', 'mj',
'thomlinson', 'durand', 'clark']
###Output
_____no_output_____
###Markdown
Here's what the DataFrame looks like now.
###Code
table2.head()
###Output
_____no_output_____
###Markdown
The first column, which is labeled `Year`, is special. It is the **index** for this `DataFrame`, which means it contains the labels for the rows.Some of the values use scientific notation; for example, `2.544000e+09` is shorthand for $2.544 \cdot 10^9$ or 2.544 billion.`NaN` is a special value that indicates missing data. SeriesWe can use dot notation to select a column from a `DataFrame`. The result is a `Series`, which is like a `DataFrame` with a single column.
###Code
census = table2.census
census.head()
census.tail()
###Output
_____no_output_____
###Markdown
Like a `DataFrame`, a `Series` contains an index, which labels the rows.`1e9` is scientific notation for $1 \cdot 10^9$ or 1 billion. From here on, we will work in units of billions.
###Code
un = table2.un / 1e9
un.head()
census = table2.census / 1e9
census.head()
###Output
_____no_output_____
###Markdown
Here's what these estimates look like.
###Code
plot(census, ':', label='US Census')
plot(un, '--', label='UN DESA')
decorate(xlabel='Year',
ylabel='World population (billion)')
savefig('figs/chap03-fig01.pdf')
###Output
Saving figure to file figs/chap03-fig01.pdf
###Markdown
The following expression computes the elementwise differences between the two series, then divides through by the UN value to produce [relative errors](https://en.wikipedia.org/wiki/Approximation_error), then finds the largest element.So the largest relative error between the estimates is about 1.3%.
###Code
max(abs(census - un) / un) * 100
###Output
_____no_output_____
###Markdown
**Exercise:** Break down that expression into smaller steps and display the intermediate results, to make sure you understand how it works.1. Compute the elementwise differences, `census - un`2. Compute the absolute differences, `abs(census - un)`3. Compute the relative differences, `abs(census - un) / un`4. Compute the percent differences, `abs(census - un) / un * 100`
###Code
census - un
# Solution goes here
# Solution goes here
# Solution goes here
###Output
_____no_output_____
###Markdown
`max` and `abs` are built-in functions provided by Python, but NumPy also provides versions that are a little more general. When you import `modsim`, you get the NumPy versions of these functions. Constant growth We can select a value from a `Series` using bracket notation. Here's the first element:
###Code
census[1950]
###Output
_____no_output_____
###Markdown
And the last value.
###Code
census[2016]
###Output
_____no_output_____
###Markdown
But rather than "hard code" those dates, we can get the first and last labels from the `Series`:
###Code
t_0 = get_first_label(census)
t_end = get_last_label(census)
elapsed_time = t_end - t_0
###Output
_____no_output_____
###Markdown
And we can get the first and last values:
###Code
p_0 = get_first_value(census)
p_end = get_last_value(census)
###Output
_____no_output_____
###Markdown
Then we can compute the average annual growth in billions of people per year.
###Code
total_growth = p_end - p_0
annual_growth = total_growth / elapsed_time
###Output
_____no_output_____
###Markdown
TimeSeries Now let's create a `TimeSeries` to contain values generated by a linear growth model.
###Code
results = TimeSeries()
###Output
_____no_output_____
###Markdown
Initially the `TimeSeries` is empty, but we can initialize it so the starting value, in 1950, is the 1950 population estimated by the US Census.
###Code
results[t_0] = census[t_0]
results
###Output
_____no_output_____
###Markdown
After that, the population in the model grows by a constant amount each year.
###Code
for t in linrange(t_0, t_end):
results[t+1] = results[t] + annual_growth
###Output
_____no_output_____
###Markdown
Here's what the results look like, compared to the actual data.
###Code
plot(census, ':', label='US Census')
plot(un, '--', label='UN DESA')
plot(results, color='gray', label='model')
decorate(xlabel='Year',
ylabel='World population (billion)',
title='Constant growth')
savefig('figs/chap03-fig02.pdf')
###Output
Saving figure to file figs/chap03-fig02.pdf
###Markdown
The model fits the data pretty well after 1990, but not so well before. Exercises**Optional Exercise:** Try fitting the model using data from 1970 to the present, and see if that does a better job.Hint: 1. Copy the code from above and make a few changes. Test your code after each small change.2. Make sure your `TimeSeries` starts in 1950, even though the estimated annual growth is based on later data.3. You might want to add a constant to the starting value to match the data better.
###Code
t_0 = get_first_label(census)
t_1 = t_0 + 20
t_end = get_last_label(census)
elapsed_time = t_end - t_1
p_0 = census[1970]
p_end = get_last_value(census)
total_growth = p_end - p_0
annual_growth = total_growth / elapsed_time
results = TimeSeries()
results[t_0] = census[t_0]
for t in linrange(t_0, t_end):
results[t+1] = results[t] + annual_growth
plot(census, ':', label='US Census')
plot(un, '--', label='UN DESA')
plot(results, color='gray', label='model')
decorate(xlabel='Year',
ylabel='World population (billion)',
title='Constant growth')
savefig('figs/chap03-fig02.pdf')
###Output
Saving figure to file figs/chap03-fig02.pdf
###Markdown
Modeling and Simulation in PythonChapter 5: DesignCopyright 2017 Allen DowneyLicense: [Creative Commons Attribution 4.0 International](https://creativecommons.org/licenses/by/4.0)
###Code
# If you want the figures to appear in the notebook,
# and you want to interact with them, use
# %matplotlib notebook
# If you want the figures to appear in the notebook,
# and you don't want to interact with them, use
# %matplotlib inline
# If you want the figures to appear in separate windows, use
# %matplotlib qt5
# To switch from one to another, you have to select Kernel->Restart
%matplotlib inline
from modsim import *
###Output
_____no_output_____
###Markdown
SIR implementationWe'll use a `State` object to represent the number or fraction of people in each compartment.
###Code
init = State(S=89, I=1, R=0)
init
###Output
_____no_output_____
###Markdown
To convert from number of people to fractions, we divide through by the total.
###Code
init /= sum(init)
init
###Output
_____no_output_____
###Markdown
`make_system` creates a `System` object with the given parameters.
###Code
def make_system(beta, gamma):
"""Make a system object for the SIR model.
beta: contact rate in days
gamma: recovery rate in days
returns: System object
"""
init = State(S=89, I=1, R=0)
init /= sum(init)
t0 = 0
t_end = 7 * 14
return System(init=init, t0=t0, t_end=t_end,
beta=beta, gamma=gamma)
###Output
_____no_output_____
###Markdown
Here's an example with hypothetical values for `beta` and `gamma`.
###Code
tc = 3 # time between contacts in days
tr = 4 # recovery time in days
beta = 1 / tc # contact rate in per day
gamma = 1 / tr # recovery rate in per day
system = make_system(beta, gamma)
###Output
_____no_output_____
###Markdown
The update function takes the state during the current time step and returns the state during the next time step.
###Code
def update1(state, system):
"""Update the SIR model.
state: State with variables S, I, R
system: System with beta and gamma
returns: State object
"""
s, i, r = state
infected = system.beta * i * s
recovered = system.gamma * i
s -= infected
i += infected - recovered
r += recovered
return State(S=s, I=i, R=r)
###Output
_____no_output_____
###Markdown
To run a single time step, we call it like this:
###Code
state = update1(init, system)
state
###Output
_____no_output_____
###Markdown
Now we can run a simulation by calling the update function for each time step.
###Code
def run_simulation(system, update_func):
"""Runs a simulation of the system.
system: System object
update_func: function that updates state
returns: State object for final state
"""
state = system.init
for t in linrange(system.t0, system.t_end):
state = update_func(state, system)
return state
###Output
_____no_output_____
###Markdown
The result is the state of the system at `t_end`
###Code
run_simulation(system, update1)
###Output
_____no_output_____
###Markdown
**Exercise** Suppose the time between contacts is 4 days and the recovery time is 5 days. After 14 weeks, how many students, total, have been infected?Hint: what is the change in `S` between the beginning and the end of the simulation?
###Code
# Let's make a system with the right params. Thanks for the good docstrings.
system = make_system(.25, .20)
initial_S_value = system.init.S
projection = run_simulation(system, update1)
final_S_value = projection.S
difference = initial_S_value - final_S_value
difference
###Output
_____no_output_____
###Markdown
Using Series objects If we want to store the state of the system at each time step, we can use one `TimeSeries` object for each state variable.
###Code
def run_simulation(system, update_func):
"""Runs a simulation of the system.
Add three Series objects to the System: S, I, R
system: System object
update_func: function that updates state
"""
S = TimeSeries()
I = TimeSeries()
R = TimeSeries()
state = system.init
t0 = system.t0
S[t0], I[t0], R[t0] = state
for t in linrange(system.t0, system.t_end):
state = update_func(state, system)
S[t+1], I[t+1], R[t+1] = state
system.S = S
system.I = I
system.R = R
###Output
_____no_output_____
###Markdown
Here's how we call it.
###Code
tc = 3 # time between contacts in days
tr = 4 # recovery time in days
beta = 1 / tc # contact rate in per day
gamma = 1 / tr # recovery rate in per day
system = make_system(beta, gamma)
run_simulation(system, update1)
###Output
_____no_output_____
###Markdown
And then we can plot the results.
###Code
def plot_results(S, I, R):
"""Plot the results of a SIR model.
S: TimeSeries
I: TimeSeries
R: TimeSeries
"""
plot(S, '--', color='blue', label='Susceptible')
plot(I, '-', color='red', label='Infected')
plot(R, ':', color='green', label='Recovered')
decorate(xlabel='Time (days)',
ylabel='Fraction of population')
###Output
_____no_output_____
###Markdown
Here's what they look like.
###Code
plot_results(system.S, system.I, system.R)
savefig('chap05-fig01.pdf')
###Output
Saving figure to file chap05-fig01.pdf
###Markdown
Using a DataFrame Instead of making three `TimeSeries` objects, we can use one `DataFrame`.We have to use `loc` to indicate which row we want to assign the results to. But then Pandas does the right thing, matching up the state variables with the columns of the `DataFrame`.
###Code
def run_simulation(system, update_func):
"""Runs a simulation of the system.
Add a DataFrame to the System: results
system: System object
update_func: function that updates state
"""
frame = DataFrame(columns=system.init.index)
frame.loc[system.t0] = system.init
for t in linrange(system.t0, system.t_end):
frame.loc[t+1] = update_func(frame.loc[t], system)
system.results = frame
###Output
_____no_output_____
###Markdown
Here's how we run it, and what the result looks like.
###Code
tc = 3 # time between contacts in days
tr = 4 # recovery time in days
beta = 1 / tc # contact rate in per day
gamma = 1 / tr # recovery rate in per day
system = make_system(beta, gamma)
run_simulation(system, update1)
system.results.head()
###Output
_____no_output_____
###Markdown
We can extract the results and plot them.
###Code
frame = system.results
plot_results(frame.S, frame.I, frame.R)
###Output
_____no_output_____
###Markdown
**Exercise** Suppose the time between contacts is 4 days and the recovery time is 5 days. Simulate this scenario for 14 weeks and plot the results.
###Code
# Solution goes here
system = make_system(.25, .20)
run_simulation(system, update1)  # stores results on system; returns nothing
projection = system.results
plot_results(projection.S, projection.I, projection.R)
###Output
_____no_output_____
###Markdown
Metrics Given the results, we can compute metrics that quantify whatever we are interested in, like the total number of sick students, for example.
###Code
def calc_total_infected(system):
"""Fraction of population infected during the simulation.
system: System object with results.
returns: fraction of population
"""
frame = system.results
return frame.S[system.t0] - frame.S[system.t_end]
###Output
_____no_output_____
###Markdown
Here's an example.
###Code
system.beta = 0.333
system.gamma = 0.25
run_simulation(system, update1)
print(system.beta, system.gamma, calc_total_infected(system))
###Output
0.333 0.25 0.467162931836
###Markdown
**Exercise:** Write functions that take a `System` object as a parameter, extract the `results` object from it, and compute the other metrics mentioned in the book:1. The fraction of students who are sick at the peak of the outbreak.2. The day the outbreak peaks.3. The fraction of students who are sick at the end of the semester.Hint: If you have a `TimeSeries` called `I`, you can compute the largest value of the series like this: I.max()And the index of the largest value like this: I.idxmax()You can read about these functions in the `Series` [documentation](https://pandas.pydata.org/pandas-docs/stable/generated/pandas.Series.html).
###Code
# Solution goes here
def peak_fraction_sick(system):
"""
Returns the fraction of infected students when the amount was highest.
system : The particular situation under study.
"""
return system.results.I.max()
peak_fraction_sick(system)
# Solution goes here
def peak_time(system):
"""
Returns the time at which the fraction of infected students was highest.
system: The particular situation under study.
"""
return system.results.I.idxmax()
peak_time(system)
# Solution goes here
def end_fraction_sick(system):
"""
Returns the fraction of sick students at the end of the semester.
system: The particular situation under study.
"""
return system.results.I[system.t_end]
end_fraction_sick(system)
###Output
_____no_output_____
###Markdown
What if? We can use this model to evaluate "what if" scenarios. For example, this function models the effect of immunization by moving some fraction of the population from S to R before the simulation starts.
###Code
def add_immunization(system, fraction):
"""Immunize a fraction of the population.
Moves the given fraction from S to R.
system: System object
fraction: number from 0 to 1
"""
system.init.S -= fraction
system.init.R += fraction
###Output
_____no_output_____
###Markdown
Let's start again with the system we used in the previous sections.
###Code
tc = 3 # time between contacts in days
tr = 4 # recovery time in days
beta = 1 / tc # contact rate in per day
gamma = 1 / tr # recovery rate in per day
system = make_system(beta, gamma)
system.beta, system.gamma
###Output
_____no_output_____
###Markdown
And run the model without immunization.
###Code
run_simulation(system, update1)
calc_total_infected(system)
###Output
_____no_output_____
###Markdown
Now with 10% immunization.
###Code
system2 = make_system(beta, gamma)
add_immunization(system2, 0.1)
run_simulation(system2, update1)
calc_total_infected(system2)
###Output
_____no_output_____
###Markdown
10% immunization leads to a drop in infections of 16 percentage points.Here's what the time series looks like for S, with and without immunization.
###Code
plot(system.results.S, '-', label='No immunization')
plot(system2.results.S, 'g--', label='10% immunization')
decorate(xlabel='Time (days)',
ylabel='Fraction susceptible')
savefig('chap05-fig02.pdf')
###Output
Saving figure to file chap05-fig02.pdf
###Markdown
Now we can sweep through a range of values for the fraction of the population who are immunized.
###Code
immunize_array = linspace(0, 1, 11)
for fraction in immunize_array:
system = make_system(beta, gamma)
add_immunization(system, fraction)
run_simulation(system, update1)
print(fraction, calc_total_infected(system))
###Output
0.0 0.468320811029
0.1 0.30650802854
0.2 0.161365457006
0.3 0.0728155898425
0.4 0.035520216753
0.5 0.0196887157825
0.6 0.0116220579983
0.7 0.00683873780062
0.8 0.00369649625371
0.9 0.00148153267227
1.0 -0.000161212109412
###Markdown
This function does the same thing and stores the results in a `Sweep` object.
###Code
def sweep_immunity(immunize_array):
"""Sweeps a range of values for immunity.
immunize_array: array of fraction immunized
returns: Sweep object
"""
sweep = SweepSeries()
for fraction in immunize_array:
system = make_system(beta, gamma)
add_immunization(system, fraction)
run_simulation(system, update1)
sweep[fraction] = calc_total_infected(system)
return sweep
###Output
_____no_output_____
###Markdown
Here's how we run it.
###Code
immunize_array = linspace(0, 1, 21)
infected_sweep = sweep_immunity(immunize_array)
###Output
_____no_output_____
###Markdown
And here's what the results look like.
###Code
plot(infected_sweep)
decorate(xlabel='Fraction immunized',
ylabel='Total fraction infected',
title='Fraction infected vs. immunization rate',
legend=False)
savefig('chap05-fig03.pdf')
###Output
Saving figure to file chap05-fig03.pdf
###Markdown
If 40% of the population is immunized, less than 4% of the population gets sick. Logistic function To model the effect of a hand-washing campaign, I'll use a [generalized logistic function](https://en.wikipedia.org/wiki/Generalised_logistic_function), which is a convenient function for modeling curves that have a generally sigmoid shape. The parameters of the GLF correspond to various features of the curve in a way that makes it easy to find a function that has the shape you want, based on data or background information about the scenario.
###Code
def logistic(x, A=0, B=1, C=1, M=1, K=1, Q=1, nu=1):
"""Computes the generalize logistic function.
A: controls the lower bound
B: controls the steepness of the transition
C: not all that useful, AFAIK
M: controls the location of the transition
K: controls the upper bound
Q: shift the transition left or right
nu: affects the symmetry of the transition
returns: float or array
"""
exponent = -B * (x - M)
denom = C + Q * exp(exponent)
return A + (K-A) / denom ** (1/nu)
###Output
_____no_output_____
###Markdown
The following array represents the range of possible spending.
###Code
spending = linspace(0, 1200, 21)
spending
###Output
_____no_output_____
###Markdown
`compute_factor` computes the reduction in `beta` for a given level of campaign spending.`M` is chosen so the transition happens around \$500.`K` is the maximum reduction in `beta`, 20%.`B` is chosen by trial and error to yield a curve that seems feasible.
###Code
def compute_factor(spending):
"""Reduction factor as a function of spending.
spending: dollars from 0 to 1200
returns: fractional reduction in beta
"""
return logistic(spending, M=500, K=0.2, B=0.01)
###Output
_____no_output_____
###Markdown
Here's what it looks like.
###Code
percent_reduction = compute_factor(spending) * 100
plot(spending, percent_reduction)
decorate(xlabel='Hand-washing campaign spending (USD)',
ylabel='Percent reduction in infection rate',
title='Effect of hand washing on infection rate',
legend=False)
savefig('chap05-fig04.pdf')
# Played with parameters and then reset everything
###Output
Saving figure to file chap05-fig04.pdf
###Markdown
**Exercise:** Modify the parameters `M`, `K`, and `B`, and see what effect they have on the shape of the curve. Read about the [generalized logistic function on Wikipedia](https://en.wikipedia.org/wiki/Generalised_logistic_function). Modify the other parameters and see what effect they have. Hand washing Now we can model the effect of a hand-washing campaign by modifying `beta`
###Code
def add_hand_washing(system, spending):
"""Modifies system to model the effect of hand washing.
system: System object
spending: campaign spending in USD
"""
factor = compute_factor(spending)
system.beta *= (1 - factor)
###Output
_____no_output_____
###Markdown
Let's start with the same values of `beta` and `gamma` we've been using.
###Code
tc = 3 # time between contacts in days
tr = 4 # recovery time in days
beta = 1 / tc # contact rate in per day
gamma = 1 / tr # recovery rate in per day
beta, gamma
###Output
_____no_output_____
###Markdown
Now we can sweep different levels of campaign spending.
###Code
spending_array = linspace(0, 1200, 13)
for spending in spending_array:
system = make_system(beta, gamma)
add_hand_washing(system, spending)
run_simulation(system, update1)
print(spending, system.beta, calc_total_infected(system))
###Output
0.0 0.332887143272 0.466770231236
100.0 0.332134252669 0.464141650401
200.0 0.330171608455 0.457217006313
300.0 0.325386471865 0.439887202912
400.0 0.315403905242 0.401630646271
500.0 0.3 0.33703425949
600.0 0.284596094758 0.267317030568
700.0 0.274613528135 0.22184699046
800.0 0.269828391545 0.200791598416
900.0 0.267865747331 0.192392183393
1000.0 0.267112856728 0.189213207818
1100.0 0.26683150821 0.18803175228
1200.0 0.266727403413 0.187595503995
###Markdown
Here's a function that sweeps a range of spending and stores the results in a `Sweep` object.
###Code
def sweep_hand_washing(spending_array):
"""Run simulations with a range of spending.
spending_array: array of dollars from 0 to 1200
returns: Sweep object
"""
sweep = SweepSeries()
for spending in spending_array:
system = make_system(beta, gamma)
add_hand_washing(system, spending)
run_simulation(system, update1)
sweep[spending] = calc_total_infected(system)
return sweep
###Output
_____no_output_____
###Markdown
Here's how we run it.
###Code
spending_array = linspace(0, 1200, 20)
infected_sweep = sweep_hand_washing(spending_array)
###Output
_____no_output_____
###Markdown
And here's what it looks like.
###Code
plot(infected_sweep)
decorate(xlabel='Hand-washing campaign spending (USD)',
ylabel='Total fraction infected',
title='Effect of hand washing on total infections',
legend=False)
savefig('chap05-fig05.pdf')
###Output
Saving figure to file chap05-fig05.pdf
###Markdown
Now let's put it all together to make some public health spending decisions. Optimization Suppose we have \$1200 to spend on any combination of vaccines and a hand-washing campaign.
###Code
num_students = 90
budget = 1200
price_per_dose = 100
max_doses = int(budget / price_per_dose)
dose_array = linrange(max_doses)
max_doses
###Output
_____no_output_____
###Markdown
We can sweep through a range of doses from 0 to `max_doses`, model the effects of immunization and the hand-washing campaign, and run simulations.For each scenario, we compute the fraction of students who get sick.
###Code
for doses in dose_array:
fraction = doses / num_students
spending = budget - doses * price_per_dose
system = make_system(beta, gamma)
add_immunization(system, fraction)
add_hand_washing(system, spending)
run_simulation(system, update1)
print(doses, system.init.S, system.beta, calc_total_infected(system))
###Output
0.0 0.988888888889 0.266727403413 0.187595503995
1.0 0.977777777778 0.26683150821 0.174580718826
2.0 0.966666666667 0.267112856728 0.162909838349
3.0 0.955555555556 0.267865747331 0.153508349478
4.0 0.944444444444 0.269828391545 0.148565092315
5.0 0.933333333333 0.274613528135 0.152945950611
6.0 0.922222222222 0.284596094758 0.174964415024
7.0 0.911111111111 0.3 0.217343161684
8.0 0.9 0.315403905242 0.259071044488
9.0 0.888888888889 0.325386471865 0.278402884103
10.0 0.877777777778 0.330171608455 0.277914534623
11.0 0.866666666667 0.332134252669 0.267357496693
12.0 0.855555555556 0.332887143272 0.252796945636
###Markdown
The following function wraps that loop and stores the results in a `Sweep` object.
###Code
def sweep_doses(dose_array):
"""Runs simulations with different doses and campaign spending.
dose_array: range of values for number of vaccinations
return: Sweep object with total number of infections
"""
sweep = SweepSeries()
for doses in dose_array:
fraction = doses / num_students
spending = budget - doses * price_per_dose
system = make_system(beta, gamma)
add_immunization(system, fraction)
add_hand_washing(system, spending)
run_simulation(system, update1)
sweep[doses] = calc_total_infected(system)
return sweep
###Output
_____no_output_____
###Markdown
Now we can compute the number of infected students for each possible allocation of the budget.
###Code
infected_sweep = sweep_doses(dose_array)
###Output
_____no_output_____
###Markdown
And plot the results.
###Code
plot(infected_sweep)
decorate(xlabel='Doses of vaccine',
ylabel='Total fraction infected',
title='Total infections vs. doses',
legend=False)
savefig('chap05-fig06.pdf')
###Output
Saving figure to file chap05-fig06.pdf
###Markdown
**Exercise:** Suppose the price of the vaccine drops to $50 per dose. How does that affect the optimal allocation of the spending? **Exercise:** Suppose we have the option to quarantine infected students. For example, a student who feels ill might be moved to an infirmary, or a private dorm room, until they are no longer infectious.How might you incorporate the effect of quarantine in the SIR model?
###Code
# Solution goes here
num_students = 90
budget = 1200
price_per_dose = 50
max_doses = int(budget / price_per_dose)
dose_array = linrange(max_doses)
infected_sweep = sweep_doses(dose_array)
plot(infected_sweep)
decorate(xlabel='Doses of vaccine',
ylabel='Total fraction infected',
title='Total infections vs. doses',
legend=False)
# The total fraction of people infected decreases.
def add_quarantine(system, fraction_of_sick):
"""
Modifies system to model the impact of quarantining infected students.
system: System object
fraction_of_sick: The fraction of sick people actually quarantined
"""
fraction_quarantined = system.init.I * fraction_of_sick
system.beta *= (1 - fraction_quarantined)
def sweep_doses(dose_array):
"""Runs simulations with different doses and campaign spending.
dose_array: range of values for number of vaccinations
return: Sweep object with total number of infections
"""
sweep = SweepSeries()
for doses in dose_array:
fraction = doses / num_students
spending = budget - doses * price_per_dose
system = make_system(beta, gamma)
add_immunization(system, fraction)
add_hand_washing(system, spending)
add_quarantine(system, fraction)
run_simulation(system, update1)
sweep[doses] = calc_total_infected(system)
return sweep
# Solution goes here
num_students = 90
budget = 1200
price_per_dose = 50
max_doses = int(budget / price_per_dose)
dose_array = linrange(max_doses)
infected_sweep = sweep_doses(dose_array)
plot(infected_sweep)
decorate(xlabel='Doses of vaccine',
ylabel='Total fraction infected',
title='Total infections vs. doses',
legend=False)
# Turns out quarantining people helps too. That's pretty neat.
###Output
_____no_output_____
###Markdown
Modeling and Simulation in PythonChapter 5Copyright 2017 Allen DowneyLicense: [Creative Commons Attribution 4.0 International](https://creativecommons.org/licenses/by/4.0)
###Code
# Configure Jupyter so figures appear in the notebook
%matplotlib inline
# Configure Jupyter to display the assigned value after an assignment
%config InteractiveShell.ast_node_interactivity='last_expr_or_assign'
# import functions from the modsim.py module
from modsim import *
###Output
_____no_output_____
###Markdown
Reading dataPandas is a library that provides tools for reading and processing data. `read_html` reads a web page from a file or the Internet and creates one `DataFrame` for each table on the page.
###Code
from pandas import read_html
###Output
_____no_output_____
###Markdown
The data directory contains a downloaded copy of https://en.wikipedia.org/wiki/World_population_estimatesThe arguments of `read_html` specify the file to read and how to interpret the tables in the file. The result, `tables`, is a sequence of `DataFrame` objects; `len(tables)` reports the length of the sequence.
###Code
filename = 'data/World_population_estimates.html'
tables = read_html(filename, header=0, index_col=0, decimal='M')
len(tables)
###Output
_____no_output_____
###Markdown
We can select the `DataFrame` we want using the bracket operator. The tables are numbered from 0, so `tables[2]` is actually the third table on the page.`head` selects the header and the first five rows.
###Code
table2 = tables[2]
table2.head()
###Output
_____no_output_____
###Markdown
`tail` selects the last five rows.
###Code
table2.tail()
###Output
_____no_output_____
###Markdown
Long column names are awkward to work with, but we can replace them with abbreviated names.
###Code
table2.columns = ['census', 'prb', 'un', 'maddison',
'hyde', 'tanton', 'biraben', 'mj',
'thomlinson', 'durand', 'clark']
###Output
_____no_output_____
###Markdown
Here's what the DataFrame looks like now.
###Code
table2.head()
###Output
_____no_output_____
###Markdown
The first column, which is labeled `Year`, is special. It is the **index** for this `DataFrame`, which means it contains the labels for the rows.Some of the values use scientific notation; for example, `2.544000e+09` is shorthand for $2.544 \cdot 10^9$ or 2.544 billion.`NaN` is a special value that indicates missing data. SeriesWe can use dot notation to select a column from a `DataFrame`. The result is a `Series`, which is like a `DataFrame` with a single column.
###Code
census = table2.census
census.head()
census.tail()
###Output
_____no_output_____
###Markdown
Like a `DataFrame`, a `Series` contains an index, which labels the rows.`1e9` is scientific notation for $1 \cdot 10^9$ or 1 billion. From here on, we will work in units of billions.
###Code
un = table2.un / 1e9
un.head()
census = table2.census / 1e9
census.head()
###Output
_____no_output_____
###Markdown
Here's what these estimates look like.
###Code
plot(census, ':', label='US Census')
plot(un, '--', label='UN DESA')
decorate(xlabel='Year',
ylabel='World population (billion)')
savefig('figs/chap03-fig01.pdf')
###Output
Saving figure to file figs/chap03-fig01.pdf
###Markdown
The following expression computes the elementwise differences between the two series, then divides through by the UN value to produce [relative errors](https://en.wikipedia.org/wiki/Approximation_error), then finds the largest element.So the largest relative error between the estimates is about 1.3%.
###Code
max(abs(census - un) / un) * 100
###Output
_____no_output_____
###Markdown
**Exercise:** Break down that expression into smaller steps and display the intermediate results, to make sure you understand how it works.1. Compute the elementwise differences, `census - un`2. Compute the absolute differences, `abs(census - un)`3. Compute the relative differences, `abs(census - un) / un`4. Compute the percent differences, `abs(census - un) / un * 100`
###Code
census-un
abs(census-un)
abs(census-un)/un
abs(census - un) / un * 100
###Output
_____no_output_____
###Markdown
`max` and `abs` are built-in functions provided by Python, but NumPy also provides versions that are a little more general. When you import `modsim`, you get the NumPy versions of these functions.
###Code
max(abs(census-un)/census) * 100
max(abs(un-census)/un) *100
###Output
_____no_output_____
###Markdown
Constant growth We can select a value from a `Series` using bracket notation. Here's the first element:
###Code
census[1950]
###Output
_____no_output_____
###Markdown
And the last value.
###Code
census[2016]
###Output
_____no_output_____
###Markdown
But rather than "hard code" those dates, we can get the first and last labels from the `Series`:
###Code
t_0 = get_first_label(census)
t_end = get_last_label(census)
elapsed_time = t_end - t_0
###Output
_____no_output_____
###Markdown
And we can get the first and last values:
###Code
p_0 = get_first_value(census)
p_end = get_last_value(census)
###Output
_____no_output_____
###Markdown
Then we can compute the average annual growth in billions of people per year.
###Code
total_growth = p_end - p_0
annual_growth = total_growth / elapsed_time
###Output
_____no_output_____
###Markdown
TimeSeries Now let's create a `TimeSeries` to contain values generated by a linear growth model.
###Code
results = TimeSeries()
###Output
_____no_output_____
###Markdown
Initially the `TimeSeries` is empty, but we can initialize it so the starting value, in 1950, is the 1950 population estimated by the US Census.
###Code
results[t_0] = census[t_0]
results
help(linrange)
###Output
Help on function linrange in module modsim:
linrange(start=0, stop=None, step=1, **options)
Returns an array of evenly-spaced values in the interval [start, stop].
This function works best if the space between start and stop
is divisible by step; otherwise the results might be surprising.
By default, the last value in the array is `stop-step`
(at least approximately).
If you provide the keyword argument `endpoint=True`,
the last value in the array is `stop`.
start: first value
stop: last value
step: space between values
Also accepts the same keyword arguments as np.linspace. See
https://docs.scipy.org/doc/numpy/reference/generated/numpy.linspace.html
returns: array or Quantity
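###Markdown
As a quick sanity check of `linrange` (a sketch based on the docstring above, which says the endpoint is excluded by default), this call should produce the integers 1950 through 1954:
###Code
linrange(1950, 1955)  # endpoint excluded by default, per the help text
###Output
_____no_output_____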
###Markdown
After that, the population in the model grows by a constant amount each year.
###Code
for t in linrange(t_0, t_end):
results[t+1] = results[t] + annual_growth
###Output
_____no_output_____
###Markdown
Here's what the results look like, compared to the actual data.
###Code
plot(census, ':', label='US Census')
plot(un, '--', label='UN DESA')
plot(results, color='gray', label='model')
decorate(xlabel='Year',
ylabel='World population (billion)',
title='Constant growth')
savefig('figs/chap03-fig02.pdf')
###Output
Saving figure to file figs/chap03-fig02.pdf
###Markdown
The model fits the data pretty well after 1990, but not so well before. Exercises**Optional Exercise:** Try fitting the model using data from 1970 to the present, and see if that does a better job.Hint: 1. Copy the code from above and make a few changes. Test your code after each small change.2. Make sure your `TimeSeries` starts in 1950, even though the estimated annual growth is based on later data.3. You might want to add a constant to the starting value to match the data better.
###Code
annualGrowth1970toPresent = (p_end - census[1970]) / (t_end - 1970)  # average annual growth, 1970 to the last year of data
results3 = TimeSeries()
results3[t_0] = census[t_0]
for t in linrange(t_0, t_end):
results3[t+1] = results3[t] + annualGrowth1970toPresent
results2 = TimeSeries()
results2[t_0] = census[t_0]
for t in linrange(t_0, t_end):
results2[t+1] = results2[t] + (census[t+1] - census[t])
plot(census, ':', label='US Census')
plot(un, '--', label='UN DESA')
plot(results2, color='gray', label='model')
decorate(xlabel='Year',
ylabel='World population (billion)',
title='Constant growth')
savefig('figs/chap03-fig02.pdf')
results2
###Output
_____no_output_____
###Markdown
Modeling and Simulation in PythonChapter 5Copyright 2017 Allen DowneyLicense: [Creative Commons Attribution 4.0 International](https://creativecommons.org/licenses/by/4.0)
###Code
# Configure Jupyter so figures appear in the notebook
%matplotlib inline
# Configure Jupyter to display the assigned value after an assignment
%config InteractiveShell.ast_node_interactivity='last_expr_or_assign'
# import functions from the modsim.py module
from modsim import *
###Output
_____no_output_____
###Markdown
Reading dataPandas is a library that provides tools for reading and processing data. `read_html` reads a web page from a file or the Internet and creates one `DataFrame` for each table on the page.
###Code
from pandas import read_html
###Output
_____no_output_____
###Markdown
The data directory contains a downloaded copy of https://en.wikipedia.org/wiki/World_population_estimatesThe arguments of `read_html` specify the file to read and how to interpret the tables in the file. The result, `tables`, is a sequence of `DataFrame` objects; `len(tables)` reports the length of the sequence.
###Code
filename = 'data/World_population_estimates.html'
tables = read_html(filename, header=0, index_col=0, decimal='M')
len(tables)
###Output
_____no_output_____
###Markdown
We can select the `DataFrame` we want using the bracket operator. The tables are numbered from 0, so `tables[2]` is actually the third table on the page.`head` selects the header and the first five rows.
###Code
table2 = tables[2]
table2.head()
###Output
_____no_output_____
###Markdown
`tail` selects the last five rows.
###Code
table2.tail()
###Output
_____no_output_____
###Markdown
Long column names are awkward to work with, but we can replace them with abbreviated names.
###Code
table2.columns = ['census', 'prb', 'un', 'maddison',
'hyde', 'tanton', 'biraben', 'mj',
'thomlinson', 'durand', 'clark']
###Output
_____no_output_____
###Markdown
Here's what the DataFrame looks like now.
###Code
table2.head()
###Output
_____no_output_____
###Markdown
The first column, which is labeled `Year`, is special. It is the **index** for this `DataFrame`, which means it contains the labels for the rows.Some of the values use scientific notation; for example, `2.544000e+09` is shorthand for $2.544 \cdot 10^9$ or 2.544 billion.`NaN` is a special value that indicates missing data. SeriesWe can use dot notation to select a column from a `DataFrame`. The result is a `Series`, which is like a `DataFrame` with a single column.
###Code
census = table2.census
census.head()
census.tail()
###Output
_____no_output_____
###Markdown
Like a `DataFrame`, a `Series` contains an index, which labels the rows.`1e9` is scientific notation for $1 \cdot 10^9$ or 1 billion. From here on, we will work in units of billions.
###Code
un = table2.un / 1e9
un.head()
census = table2.census / 1e9
census.head()
###Output
_____no_output_____
###Markdown
Here's what these estimates look like.
###Code
plot(census, ':', label='US Census')
plot(un, '--', label='UN DESA')
decorate(xlabel='Year',
ylabel='World population (billion)')
savefig('figs/chap03-fig01.pdf')
###Output
Saving figure to file figs/chap03-fig01.pdf
###Markdown
The following expression computes the elementwise differences between the two series, then divides through by the UN value to produce [relative errors](https://en.wikipedia.org/wiki/Approximation_error), then finds the largest element.So the largest relative error between the estimates is about 1.3%.
###Code
max(abs(census - un) / un) * 100
###Output
_____no_output_____
###Markdown
**Exercise:** Break down that expression into smaller steps and display the intermediate results, to make sure you understand how it works.1. Compute the elementwise differences, `census - un`2. Compute the absolute differences, `abs(census - un)`3. Compute the relative differences, `abs(census - un) / un`4. Compute the percent differences, `abs(census - un) / un * 100`
###Code
census - un
abs(census - un)
abs(census - un)/un
abs(census - un) / un * 100
###Output
_____no_output_____
###Markdown
`max` and `abs` are built-in functions provided by Python, but NumPy also provides versions that are a little more general. When you import `modsim`, you get the NumPy versions of these functions. Constant growth We can select a value from a `Series` using bracket notation. Here's the first element:
###Code
census[1950]
###Output
_____no_output_____
###Markdown
And the last value.
###Code
census[2016]
###Output
_____no_output_____
###Markdown
But rather than "hard code" those dates, we can get the first and last labels from the `Series`:
###Code
t_0 = get_first_label(census)
t_end = get_last_label(census)
elapsed_time = t_end - t_0
###Output
_____no_output_____
###Markdown
And we can get the first and last values:
###Code
p_0 = get_first_value(census)
p_end = get_last_value(census)
###Output
_____no_output_____
###Markdown
Then we can compute the average annual growth in billions of people per year.
###Code
total_growth = p_end - p_0
annual_growth = total_growth / elapsed_time
###Output
_____no_output_____
###Markdown
TimeSeries Now let's create a `TimeSeries` to contain values generated by a linear growth model.
###Code
results = TimeSeries()
###Output
_____no_output_____
###Markdown
Initially the `TimeSeries` is empty, but we can initialize it so the starting value, in 1950, is the 1950 population estimated by the US Census.
###Code
results[t_0] = census[t_0]
results
###Output
_____no_output_____
###Markdown
After that, the population in the model grows by a constant amount each year.
###Code
for t in linrange(t_0, t_end):
results[t+1] = results[t] + annual_growth
###Output
_____no_output_____
###Markdown
Here's what the results look like, compared to the actual data.
###Code
plot(census, ':', label='US Census')
plot(un, '--', label='UN DESA')
plot(results, color='gray', label='model')
decorate(xlabel='Year',
ylabel='World population (billion)',
title='Constant growth')
savefig('figs/chap03-fig02.pdf')
###Output
Saving figure to file figs/chap03-fig02.pdf
###Markdown
The model fits the data pretty well after 1990, but not so well before. Exercises**Optional Exercise:** Try fitting the model using data from 1970 to the present, and see if that does a better job.Hint: 1. Copy the code from above and make a few changes. Test your code after each small change.2. Make sure your `TimeSeries` starts in 1950, even though the estimated annual growth is based on later data.3. You might want to add a constant to the starting value to match the data better.
###Code
t_0 = get_first_label(census)
t_end = get_last_label(census)
p_0 = get_first_value(census)
p_end = get_last_value(census)
annual_growth = (p_end - census[1970]) / (t_end - 1970)
results = TimeSeries()
results[t_0] = census[t_0]
for t in linrange(t_0, t_end):
results[t+1] = results[t] + annual_growth
plot(census, ':', label='US Census')
plot(un, '--', label='UN DESA')
plot(results, color='gray', label='model')
decorate(xlabel='Year',
ylabel='World population (billion)',
title='Constant growth')
savefig('figs/chap03-fig02.pdf')
###Output
Saving figure to file figs/chap03-fig02.pdf
|
100_Numpy_exercises_no_solution.ipynb | ###Markdown
100 numpy exercisesThis is a collection of exercises that have been collected in the numpy mailing list, on stack overflow and in the numpy documentation. The goal of this collection is to offer a quick reference for both old and new users but also to provide a set of exercises for those who teach.If you find an error or think you've a better way to solve some of them, feel free to open an issue at 1. Import the numpy package under the name `np` (★☆☆)
###Code
import numpy as np
###Output
_____no_output_____
###Markdown
2. Print the numpy version and the configuration (★☆☆)
###Code
print(np.__version__)
print(np.show_config())
###Output
1.16.4
mkl_info:
libraries = ['mkl_rt', 'pthread']
library_dirs = ['/usr/local/anaconda3/lib']
define_macros = [('SCIPY_MKL_H', None), ('HAVE_CBLAS', None)]
include_dirs = ['/usr/local/anaconda3/include']
blas_mkl_info:
libraries = ['mkl_rt', 'pthread']
library_dirs = ['/usr/local/anaconda3/lib']
define_macros = [('SCIPY_MKL_H', None), ('HAVE_CBLAS', None)]
include_dirs = ['/usr/local/anaconda3/include']
blas_opt_info:
libraries = ['mkl_rt', 'pthread']
library_dirs = ['/usr/local/anaconda3/lib']
define_macros = [('SCIPY_MKL_H', None), ('HAVE_CBLAS', None)]
include_dirs = ['/usr/local/anaconda3/include']
lapack_mkl_info:
libraries = ['mkl_rt', 'pthread']
library_dirs = ['/usr/local/anaconda3/lib']
define_macros = [('SCIPY_MKL_H', None), ('HAVE_CBLAS', None)]
include_dirs = ['/usr/local/anaconda3/include']
lapack_opt_info:
libraries = ['mkl_rt', 'pthread']
library_dirs = ['/usr/local/anaconda3/lib']
define_macros = [('SCIPY_MKL_H', None), ('HAVE_CBLAS', None)]
include_dirs = ['/usr/local/anaconda3/include']
None
###Markdown
3. Create a null vector of size 10 (★☆☆)
###Code
null_ten = np.zeros(10)
print(null_ten)
###Output
[0. 0. 0. 0. 0. 0. 0. 0. 0. 0.]
###Markdown
4. How to find the memory size of any array (★☆☆)
###Code
mem_size_null_ten = (null_ten.size * null_ten.itemsize)
print(f"The memory size of array null_ten is {mem_size_null_ten} bytes")
###Output
The memory size of array null_ten is 80 bytes
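###Markdown
As an aside (not part of the original solution), NumPy arrays also expose an `nbytes` attribute that reports the same total directly:
###Code
print(null_ten.nbytes)  # total bytes consumed by the array's elements
###Output
_____no_output_____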
###Markdown
5. How to get the documentation of the numpy add function from the command line? (★☆☆)
###Code
np.info(np.add)
###Output
add(x1, x2, /, out=None, *, where=True, casting='same_kind', order='K', dtype=None, subok=True[, signature, extobj])
Add arguments element-wise.
Parameters
----------
x1, x2 : array_like
The arrays to be added. If ``x1.shape != x2.shape``, they must be
broadcastable to a common shape (which may be the shape of one or
the other).
out : ndarray, None, or tuple of ndarray and None, optional
A location into which the result is stored. If provided, it must have
a shape that the inputs broadcast to. If not provided or `None`,
a freshly-allocated array is returned. A tuple (possible only as a
keyword argument) must have length equal to the number of outputs.
where : array_like, optional
Values of True indicate to calculate the ufunc at that position, values
of False indicate to leave the value in the output alone.
**kwargs
For other keyword-only arguments, see the
:ref:`ufunc docs <ufuncs.kwargs>`.
Returns
-------
add : ndarray or scalar
The sum of `x1` and `x2`, element-wise.
This is a scalar if both `x1` and `x2` are scalars.
Notes
-----
Equivalent to `x1` + `x2` in terms of array broadcasting.
Examples
--------
>>> np.add(1.0, 4.0)
5.0
>>> x1 = np.arange(9.0).reshape((3, 3))
>>> x2 = np.arange(3.0)
>>> np.add(x1, x2)
array([[ 0., 2., 4.],
[ 3., 5., 7.],
[ 6., 8., 10.]])
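###Markdown
To get the same documentation from an actual command line (a sketch; it assumes a `python` interpreter with NumPy installed is on your PATH), run the call as a one-liner from a shell:
###Code
# From a shell, outside the notebook:
# python -c "import numpy; numpy.info(numpy.add)"
###Output
_____no_output_____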
###Markdown
6. Create a null vector of size 10 but the fifth value which is 1 (★☆☆)
###Code
null_ten[4] = 1
print(null_ten)
###Output
[0. 0. 0. 0. 1. 0. 0. 0. 0. 0.]
###Markdown
7. Create a vector with values ranging from 10 to 49 (★☆☆)
###Code
v = np.arange(10,50)
print(v)
print(type(v))
###Output
[10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33
34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49]
<class 'numpy.ndarray'>
###Markdown
8. Reverse a vector (first element becomes last) (★☆☆)
###Code
v[::-1] #reverse the array using indexing. :: selects entire array, -1 step
###Output
_____no_output_____
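###Markdown
An equivalent alternative (a sketch) is `np.flip`, which reverses an array along a given axis and reads more explicitly than the slice notation:
###Code
np.flip(v)  # for a 1-D array, same result as v[::-1]
###Output
_____no_output_____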
###Markdown
9. Create a 3x3 matrix with values ranging from 0 to 8 (★☆☆)
###Code
t_by_t_matrix = np.array(range(9)).reshape((3,3))
print(t_by_t_matrix)
###Output
[[0 1 2]
[3 4 5]
[6 7 8]]
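###Markdown
A slightly more idiomatic variant (a sketch) builds the same matrix with `np.arange` instead of wrapping a Python `range` in `np.array`:
###Code
np.arange(9).reshape(3, 3)  # 0..8 laid out as a 3x3 matrix
###Output
_____no_output_____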
###Markdown
10. Find indices of non-zero elements from \[1,2,0,0,4,0\] (★☆☆)
###Code
my_array = np.array([1, 2, 0, 0, 0, 4, 0])
my_array_not_null = my_array != 0  # Boolean mask: True where the element is non-zero
print("Boolean mask of non-zero elements:")
print(my_array_not_null)
###Output
Boolean mask of non-zero elements:
[ True True False False False True False]
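###Markdown
To get the actual indices rather than a Boolean mask (a sketch using the array defined above), `np.nonzero` returns a tuple of index arrays, one per dimension:
###Code
print(np.nonzero(my_array))  # positions of the non-zero elements
###Output
_____no_output_____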
###Markdown
11. Create a 3x3 identity matrix (★☆☆)
###Code
np.eye(3) #OR
np.identity(3)
###Output
_____no_output_____
###Markdown
12. Create a 3x3x3 array with random values (★☆☆)
###Code
np.random.rand(3, 3, 3)
###Output
_____no_output_____
###Markdown
13. Create a 10x10 array with random values and find the minimum and maximum values (★☆☆)
###Code
ten_by_ten = np.random.rand(10,10)
min_ten_by_ten = ten_by_ten.min()
print(min_ten_by_ten)
max_ten_by_ten = ten_by_ten.max()
print(max_ten_by_ten)
###Output
0.008017197920764385
0.9884419266661685
###Markdown
14. Create a random vector of size 30 and find the mean value (★☆☆)
###Code
v30 = np.random.rand(30)
v30_mean = v30.mean()
print(v30_mean)
###Output
0.5281540898110887
###Markdown
15. Create a 2d array with 1 on the border and 0 inside (★☆☆)
###Code
my_2d_arr = np.ones((5, 5))
print(my_2d_arr)
my_2d_arr[1:-1,1:-1] = 0 # [1:-1, 1:-1] references the 2nd through
#last rows(non_inclusive), 2nd through
#last columns(non_inclusive) setting them = 0
print("\n")
print(my_2d_arr)
###Output
[[1. 1. 1. 1. 1.]
[1. 1. 1. 1. 1.]
[1. 1. 1. 1. 1.]
[1. 1. 1. 1. 1.]
[1. 1. 1. 1. 1.]]
[[1. 1. 1. 1. 1.]
[1. 0. 0. 0. 1.]
[1. 0. 0. 0. 1.]
[1. 0. 0. 0. 1.]
[1. 1. 1. 1. 1.]]
###Markdown
16. How to add a border (filled with 0's) around an existing array? (★☆☆)
###Code
my_array = np.ones((3, 3))
np.pad(my_array, pad_width=1, mode="constant", constant_values=0)
###Output
_____no_output_____
###Markdown
17. What is the result of the following expression? (★☆☆)
###Code
#```python
print(0 * np.nan)
print(np.nan == np.nan)
print(np.inf > np.nan)
print(np.nan - np.nan)
print(np.nan in set([np.nan]))
print(0.3 == 3 * 0.1)
#```
###Output
nan
False
False
nan
True
False
###Markdown
18. Create a 5x5 matrix with values 1,2,3,4 just below the diagonal (★☆☆)
###Code
#five_by_five = np.zeros((5, 5))
five_by_five_diag = np.diagflat([[1,2,3,4]], -1)
print(five_by_five_diag)
###Output
[[0 0 0 0 0]
[1 0 0 0 0]
[0 2 0 0 0]
[0 0 3 0 0]
[0 0 0 4 0]]
###Markdown
19. Create a 8x8 matrix and fill it with a checkerboard pattern (★☆☆)
###Code
checker_board = np.ones((8,8))
checker_board[::2, 1::2] = 0
checker_board[1::2, ::2] = 0
print(checker_board)
###Output
[[1. 0. 1. 0. 1. 0. 1. 0.]
[0. 1. 0. 1. 0. 1. 0. 1.]
[1. 0. 1. 0. 1. 0. 1. 0.]
[0. 1. 0. 1. 0. 1. 0. 1.]
[1. 0. 1. 0. 1. 0. 1. 0.]
[0. 1. 0. 1. 0. 1. 0. 1.]
[1. 0. 1. 0. 1. 0. 1. 0.]
[0. 1. 0. 1. 0. 1. 0. 1.]]
###Markdown
20. Consider a (6,7,8) shape array, what is the index (x,y,z) of the 100th element?
###Code
my_array = np.array(range(336)).reshape((6, 7, 8))
print(my_array)
idx_100 = np.where(my_array == 99)
print("\n")
print(f"The index of the 100th element is {idx_100}")
###Output
[[[ 0 1 2 3 4 5 6 7]
[ 8 9 10 11 12 13 14 15]
[ 16 17 18 19 20 21 22 23]
[ 24 25 26 27 28 29 30 31]
[ 32 33 34 35 36 37 38 39]
[ 40 41 42 43 44 45 46 47]
[ 48 49 50 51 52 53 54 55]]
[[ 56 57 58 59 60 61 62 63]
[ 64 65 66 67 68 69 70 71]
[ 72 73 74 75 76 77 78 79]
[ 80 81 82 83 84 85 86 87]
[ 88 89 90 91 92 93 94 95]
[ 96 97 98 99 100 101 102 103]
[104 105 106 107 108 109 110 111]]
[[112 113 114 115 116 117 118 119]
[120 121 122 123 124 125 126 127]
[128 129 130 131 132 133 134 135]
[136 137 138 139 140 141 142 143]
[144 145 146 147 148 149 150 151]
[152 153 154 155 156 157 158 159]
[160 161 162 163 164 165 166 167]]
[[168 169 170 171 172 173 174 175]
[176 177 178 179 180 181 182 183]
[184 185 186 187 188 189 190 191]
[192 193 194 195 196 197 198 199]
[200 201 202 203 204 205 206 207]
[208 209 210 211 212 213 214 215]
[216 217 218 219 220 221 222 223]]
[[224 225 226 227 228 229 230 231]
[232 233 234 235 236 237 238 239]
[240 241 242 243 244 245 246 247]
[248 249 250 251 252 253 254 255]
[256 257 258 259 260 261 262 263]
[264 265 266 267 268 269 270 271]
[272 273 274 275 276 277 278 279]]
[[280 281 282 283 284 285 286 287]
[288 289 290 291 292 293 294 295]
[296 297 298 299 300 301 302 303]
[304 305 306 307 308 309 310 311]
[312 313 314 315 316 317 318 319]
[320 321 322 323 324 325 326 327]
[328 329 330 331 332 333 334 335]]]
The index of the 100th element is (array([1]), array([5]), array([3]))
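The same index can be read off directly with np.unravel_index, without materializing the array first; a minimal sketch (0-based indexing, so the 100th element has flat index 99):
```python
import numpy as np

print(np.unravel_index(99, (6, 7, 8)))  # -> (1, 5, 3), matching the search above
```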
###Markdown
21. Create a checkerboard 8x8 matrix using the tile function (★☆☆)
###Code
eight = np.array([[0,1,0,1],[1,0,1,0]])
checker = np.tile(eight,(4,2))
print(checker)
###Output
[[0 1 0 1 0 1 0 1]
[1 0 1 0 1 0 1 0]
[0 1 0 1 0 1 0 1]
[1 0 1 0 1 0 1 0]
[0 1 0 1 0 1 0 1]
[1 0 1 0 1 0 1 0]
[0 1 0 1 0 1 0 1]
[1 0 1 0 1 0 1 0]]
###Markdown
22. Normalize a 5x5 random matrix (★☆☆)
###Code
five_by_five = np.random.rand(5, 5)
print(f"Array:")
print("\n")
print(five_by_five)
print("\n")
five_by_five_stdev = five_by_five.std()
print(f"Array Standard Deviation: {five_by_five_stdev}")
print("\n")
five_by_five_mean = five_by_five.mean()
print(f"Array Mean: {five_by_five_mean}")
print("\n")
five_by_five_norm = (five_by_five - five_by_five_mean) / five_by_five_stdev
print("Normalized Array")
print("\n")
print(five_by_five_norm)
###Output
Array:
[[4.21808440e-01 6.46443022e-01 2.99143223e-04 9.39625092e-01
8.93558925e-01]
[7.98082876e-01 3.86890720e-01 5.08204657e-01 3.22553403e-01
3.71699937e-01]
[4.73874931e-01 4.47698673e-01 1.05121838e-01 2.71103062e-01
1.45395282e-01]
[2.04830158e-01 8.30331294e-01 7.95105833e-01 9.05266177e-01
5.19794009e-01]
[3.52018773e-01 8.54597270e-02 4.25166428e-01 9.03460542e-01
5.93484988e-01]]
Array Standard Deviation: 0.2778819815171811
Array Mean: 0.49389111728470375
Normalized Array
[[-0.25940033 0.54898092 -1.77626477 1.60404058 1.43826457]
[ 1.09467968 -0.38505698 0.05150942 -0.61658447 -0.4397233 ]
[-0.07203125 -0.16623044 -1.39904458 -0.80173624 -1.25411455]
[-1.04022923 1.21073045 1.08396635 1.48039487 0.09321544]
[-0.51054891 -1.46980163 -0.2473161 1.47389702 0.35840349]]
###Markdown
23. Create a custom dtype that describes a color as four unsigned bytes (RGBA) (★☆☆)
###Code
rgba = np.dtype([("R", "u1"),("G", "u1"),("B", "u1"),("A", "u1")])
print(type(rgba))
###Output
<class 'numpy.dtype'>
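A small usage sketch of the custom dtype defined above (the pixel values here are illustrative):
```python
import numpy as np

rgba = np.dtype([("R", "u1"), ("G", "u1"), ("B", "u1"), ("A", "u1")])
pixels = np.zeros(3, dtype=rgba)  # three RGBA pixels, all channels zero
pixels["R"] = 255                 # set the red channel of every pixel
print(pixels[0])                  # -> (255, 0, 0, 0)
```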
###Markdown
24. Multiply a 5x3 matrix by a 3x2 matrix (real matrix product) (★☆☆)
###Code
a = np.array(range(15)).reshape((5,3))
b = np.array(range(6)).reshape((3, 2))
a_b_prod = a @ b  # real matrix product; a * b is element-wise and fails for (5,3) and (3,2)
###Output
_____no_output_____
###Markdown
25. Given a 1D array, negate all elements which are between 3 and 8, in place. (★☆☆)
###Code
arr = np.array(range(12))
arr[(arr > 3) & (arr <= 8)] *= -1  # negate the elements between 3 and 8, in place
arr
###Output
_____no_output_____
###Markdown
100 numpy exercises
This is a collection of exercises that have been collected in the numpy mailing list, on stack overflow and in the numpy documentation. The goal of this collection is to offer a quick reference for both old and new users but also to provide a set of exercises for those who teach. If you find an error or think you've a better way to solve some of them, feel free to open an issue at
1. Import the numpy package under the name `np` (★☆☆)
###Code
import numpy as np
###Output
_____no_output_____
###Markdown
2. Print the numpy version and the configuration (★☆☆)
###Code
print(np.__version__)
print(np.show_config())
###Output
1.14.3
mkl_info:
libraries = ['mkl_rt', 'pthread']
library_dirs = ['/anaconda3/lib']
define_macros = [('SCIPY_MKL_H', None), ('HAVE_CBLAS', None)]
include_dirs = ['/anaconda3/include']
blas_mkl_info:
libraries = ['mkl_rt', 'pthread']
library_dirs = ['/anaconda3/lib']
define_macros = [('SCIPY_MKL_H', None), ('HAVE_CBLAS', None)]
include_dirs = ['/anaconda3/include']
blas_opt_info:
libraries = ['mkl_rt', 'pthread']
library_dirs = ['/anaconda3/lib']
define_macros = [('SCIPY_MKL_H', None), ('HAVE_CBLAS', None)]
include_dirs = ['/anaconda3/include']
lapack_mkl_info:
libraries = ['mkl_rt', 'pthread']
library_dirs = ['/anaconda3/lib']
define_macros = [('SCIPY_MKL_H', None), ('HAVE_CBLAS', None)]
include_dirs = ['/anaconda3/include']
lapack_opt_info:
libraries = ['mkl_rt', 'pthread']
library_dirs = ['/anaconda3/lib']
define_macros = [('SCIPY_MKL_H', None), ('HAVE_CBLAS', None)]
include_dirs = ['/anaconda3/include']
None
###Markdown
3. Create a null vector of size 10 (★☆☆)
###Code
nullVector = np.zeros(10)
print(nullVector)
###Output
[0. 0. 0. 0. 0. 0. 0. 0. 0. 0.]
###Markdown
4. How to find the memory size of any array (★☆☆)
###Code
nullVector.itemsize * nullVector.size
###Output
_____no_output_____
###Markdown
5. How to get the documentation of the numpy add function from the command line? (★☆☆)
###Code
np.info(np.add)
###Output
add(x1, x2, /, out=None, *, where=True, casting='same_kind', order='K', dtype=None, subok=True[, signature, extobj])
Add arguments element-wise.
Parameters
----------
x1, x2 : array_like
The arrays to be added. If ``x1.shape != x2.shape``, they must be
broadcastable to a common shape (which may be the shape of one or
the other).
out : ndarray, None, or tuple of ndarray and None, optional
A location into which the result is stored. If provided, it must have
a shape that the inputs broadcast to. If not provided or `None`,
a freshly-allocated array is returned. A tuple (possible only as a
keyword argument) must have length equal to the number of outputs.
where : array_like, optional
Values of True indicate to calculate the ufunc at that position, values
of False indicate to leave the value in the output alone.
**kwargs
For other keyword-only arguments, see the
:ref:`ufunc docs <ufuncs.kwargs>`.
Returns
-------
add : ndarray or scalar
The sum of `x1` and `x2`, element-wise. Returns a scalar if
both `x1` and `x2` are scalars.
Notes
-----
Equivalent to `x1` + `x2` in terms of array broadcasting.
Examples
--------
>>> np.add(1.0, 4.0)
5.0
>>> x1 = np.arange(9.0).reshape((3, 3))
>>> x2 = np.arange(3.0)
>>> np.add(x1, x2)
array([[ 0., 2., 4.],
[ 3., 5., 7.],
[ 6., 8., 10.]])
###Markdown
6. Create a null vector of size 10 but the fifth value which is 1 (★☆☆)
###Code
x = np.zeros(10)
x[4] = 1
x
###Output
_____no_output_____
###Markdown
7. Create a vector with values ranging from 10 to 49 (★☆☆)
###Code
y = np.arange(10, 50)
y
###Output
_____no_output_____
###Markdown
8. Reverse a vector (first element becomes last) (★☆☆)
###Code
y[::-1]
###Output
_____no_output_____
###Markdown
9. Create a 3x3 matrix with values ranging from 0 to 8 (★☆☆)
###Code
#threeByThree = np.array([[0, 1, 2], [3, 4, 5], [6, 7, 8]])
threeByThree = np.arange(9).reshape(3, 3)
threeByThree
###Output
_____no_output_____
###Markdown
10. Find indices of non-zero elements from \[1,2,0,0,4,0\] (★☆☆)
###Code
myArr = np.array([1,2,0,0,4,0])
np.nonzero(myArr)
###Output
_____no_output_____
###Markdown
11. Create a 3x3 identity matrix (★☆☆)
###Code
np.eye(3)  # 3x3 identity matrix
###Output
_____no_output_____
###Markdown
12. Create a 3x3x3 array with random values (★☆☆)
###Code
np.random.random((3, 3, 3))
###Output
_____no_output_____
###Markdown
100 numpy exercises
This is a collection of exercises that have been collected in the numpy mailing list, on stack overflow and in the numpy documentation. The goal of this collection is to offer a quick reference for both old and new users but also to provide a set of exercises for those who teach. If you find an error or think you've a better way to solve some of them, feel free to open an issue at
1. Import the numpy package under the name `np` (★☆☆)
###Code
import numpy as np
###Output
_____no_output_____
###Markdown
2. Print the numpy version and the configuration (★☆☆)
###Code
np.__version__
###Output
_____no_output_____
###Markdown
3. Create a null vector of size 10 (★☆☆)
###Code
np.zeros(10)  # np.empty would give uninitialized (possibly non-zero) memory
###Output
_____no_output_____
###Markdown
4. How to find the memory size of any array (★☆☆)
###Code
arr = np.array([1,2,3])
arr.size * arr.itemsize
###Output
_____no_output_____
###Markdown
5. How to get the documentation of the numpy add function from the command line? (★☆☆)
###Code
!python -c "import numpy; numpy.info(numpy.add)"
###Output
add(x1, x2, /, out=None, *, where=True, casting='same_kind', order='K', dtype=None, subok=True[, signature, extobj])
Add arguments element-wise.
Parameters
----------
x1, x2 : array_like
The arrays to be added. If ``x1.shape != x2.shape``, they must be
broadcastable to a common shape (which may be the shape of one or
the other).
out : ndarray, None, or tuple of ndarray and None, optional
A location into which the result is stored. If provided, it must have
a shape that the inputs broadcast to. If not provided or `None`,
a freshly-allocated array is returned. A tuple (possible only as a
keyword argument) must have length equal to the number of outputs.
where : array_like, optional
Values of True indicate to calculate the ufunc at that position, values
of False indicate to leave the value in the output alone.
**kwargs
For other keyword-only arguments, see the
:ref:`ufunc docs <ufuncs.kwargs>`.
Returns
-------
add : ndarray or scalar
The sum of `x1` and `x2`, element-wise.
This is a scalar if both `x1` and `x2` are scalars.
Notes
-----
Equivalent to `x1` + `x2` in terms of array broadcasting.
Examples
--------
>>> np.add(1.0, 4.0)
5.0
>>> x1 = np.arange(9.0).reshape((3, 3))
>>> x2 = np.arange(3.0)
>>> np.add(x1, x2)
array([[ 0., 2., 4.],
[ 3., 5., 7.],
[ 6., 8., 10.]])
###Markdown
6. Create a null vector of size 10 but the fifth value which is 1 (★☆☆)
###Code
arr = np.zeros(10)  # zeros, not empty: np.empty leaves the memory uninitialized
arr[4] = 1
arr
###Output
_____no_output_____
###Markdown
7. Create a vector with values ranging from 10 to 49 (★☆☆)
###Code
np.arange(10, 50)
###Output
_____no_output_____
###Markdown
8. Reverse a vector (first element becomes last) (★☆☆)
###Code
np.arange(10, 50)[::-1]
###Output
_____no_output_____
###Markdown
9. Create a 3x3 matrix with values ranging from 0 to 8 (★☆☆)
###Code
np.arange(9).reshape(3,3)
###Output
_____no_output_____
###Markdown
10. Find indices of non-zero elements from \[1,2,0,0,4,0\] (★☆☆)
###Code
np.nonzero([1,2,0,0,4,0])
###Output
_____no_output_____
###Markdown
11. Create a 3x3 identity matrix (★☆☆)
###Code
np.eye(3)
###Output
_____no_output_____
###Markdown
12. Create a 3x3x3 array with random values (★☆☆)
###Code
np.random.random((3, 3, 3))
###Output
_____no_output_____
###Markdown
100 numpy exercises
This is a collection of exercises that have been collected in the numpy mailing list, on stack overflow and in the numpy documentation. The goal of this collection is to offer a quick reference for both old and new users but also to provide a set of exercises for those who teach. If you find an error or think you've a better way to solve some of them, feel free to open an issue at
1. Import the numpy package under the name `np` (★☆☆)
###Code
import numpy as np
###Output
_____no_output_____
###Markdown
2. Print the numpy version and the configuration (★☆☆)
###Code
print(np.__version__)
np.show_config()
###Output
1.14.5
blas_mkl_info:
NOT AVAILABLE
blis_info:
NOT AVAILABLE
openblas_info:
libraries = ['openblas', 'openblas']
library_dirs = ['/usr/local/lib']
language = c
define_macros = [('HAVE_CBLAS', None)]
blas_opt_info:
libraries = ['openblas', 'openblas']
library_dirs = ['/usr/local/lib']
language = c
define_macros = [('HAVE_CBLAS', None)]
lapack_mkl_info:
NOT AVAILABLE
openblas_lapack_info:
libraries = ['openblas', 'openblas']
library_dirs = ['/usr/local/lib']
language = c
define_macros = [('HAVE_CBLAS', None)]
lapack_opt_info:
libraries = ['openblas', 'openblas']
library_dirs = ['/usr/local/lib']
language = c
define_macros = [('HAVE_CBLAS', None)]
###Markdown
3. Create a null vector of size 10 (★☆☆)
###Code
np.zeros(10)
###Output
_____no_output_____
###Markdown
4. How to find the memory size of any array (★☆☆)
###Code
'''This can be done in 2 ways. Firstly, we can multiply the number of elements (given by size)
by the size of each element in bytes (given by itemsize).
Secondly, we can call nbytes on a numpy array, which does this job for us.
'''
sample_arr=np.array([[0,0,0,0],[1,2,3,4],[2,3,4,5]])
print(sample_arr.shape)
# ?np.array
# np.size(sample_arr)*sample_arr.itemsize
sample_arr.nbytes
###Output
(3, 4)
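As a quick check that the two approaches agree, a minimal sketch:
```python
import numpy as np

a = np.zeros((3, 4))                 # 12 elements of 8 bytes each (float64)
print(a.size * a.itemsize, a.nbytes) # both print 96
```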
###Markdown
5. How to get the documentation of the numpy add function from the command line? (★☆☆)
###Code
# help(np.add)
'''We can use the help function provided by python, or the numpy info function, which loads
the docstring for any given function.
For interactive prompts, both of these return the same output.
'''
np.info(np.add)
###Output
add(x1, x2, /, out=None, *, where=True, casting='same_kind', order='K', dtype=None, subok=True[, signature, extobj])
Add arguments element-wise.
Parameters
----------
x1, x2 : array_like
The arrays to be added. If ``x1.shape != x2.shape``, they must be
broadcastable to a common shape (which may be the shape of one or
the other).
out : ndarray, None, or tuple of ndarray and None, optional
A location into which the result is stored. If provided, it must have
a shape that the inputs broadcast to. If not provided or `None`,
a freshly-allocated array is returned. A tuple (possible only as a
keyword argument) must have length equal to the number of outputs.
where : array_like, optional
Values of True indicate to calculate the ufunc at that position, values
of False indicate to leave the value in the output alone.
**kwargs
For other keyword-only arguments, see the
:ref:`ufunc docs <ufuncs.kwargs>`.
Returns
-------
add : ndarray or scalar
The sum of `x1` and `x2`, element-wise. Returns a scalar if
both `x1` and `x2` are scalars.
Notes
-----
Equivalent to `x1` + `x2` in terms of array broadcasting.
Examples
--------
>>> np.add(1.0, 4.0)
5.0
>>> x1 = np.arange(9.0).reshape((3, 3))
>>> x2 = np.arange(3.0)
>>> np.add(x1, x2)
array([[ 0., 2., 4.],
[ 3., 5., 7.],
[ 6., 8., 10.]])
###Markdown
6. Create a null vector of size 10 but the fifth value which is 1 (★☆☆)
###Code
'''Simple Indexing same as python.'''
null_vector=np.zeros(10)
null_vector[4]=1
print(null_vector)
###Output
[0. 0. 0. 0. 1. 0. 0. 0. 0. 0.]
###Markdown
7. Create a vector with values ranging from 10 to 49 (★☆☆)
###Code
'''This problem can be solved in multiple ways. We can use the arange function given by numpy,
or we can create a list comprehension in python and pass it to numpy's asarray function.'''
dynamic_vector=np.arange(10,50)
other_vector=np.asarray([i for i in range(10,50)])
# print(dynamic_vector,dynamic_vector.shape)
print(other_vector)
###Output
[10 11 12 13 14 15 16 17 18 19 20 21 22 23 24 25 26 27 28 29 30 31 32 33
34 35 36 37 38 39 40 41 42 43 44 45 46 47 48 49]
###Markdown
8. Reverse a vector (first element becomes last) (★☆☆)
###Code
'''For this, we can use python's reverse-slicing technique'''
dynamic_vector=np.arange(10,50)[::-1]
print(dynamic_vector)
# ?np.arange
###Output
[49 48 47 46 45 44 43 42 41 40 39 38 37 36 35 34 33 32 31 30 29 28 27 26
25 24 23 22 21 20 19 18 17 16 15 14 13 12 11 10]
###Markdown
9. Create a 3x3 matrix with values ranging from 0 to 8 (★☆☆)
###Code
'''For this, we can use the reshape method given by numpy. It converts the source
matrix/vector into a shape of our liking if that is possible; otherwise it raises an error.'''
thby3=np.array(np.arange(0,9)).reshape(3,3)
# ?np.array
print(thby3,thby3.shape)
###Output
[[0 1 2]
[3 4 5]
[6 7 8]] (3, 3)
###Markdown
10. Find indices of non-zero elements from \[1,2,0,0,4,0\] (★☆☆)
###Code
'''We can do this by using the where function in numpy which gives the indexes based on a given condition
or we can use the inbuilt function nonzero which can do the same job'''
non_zero=np.array([[1,2,0,0,4,0]])
print(np.where(non_zero !=0))
print(np.nonzero(non_zero))
# ??np.nonzero
###Output
(array([0, 0, 0]), array([0, 1, 4]))
(array([0, 0, 0]), array([0, 1, 4]))
###Markdown
11. Create a 3x3 identity matrix (★☆☆)
###Code
'''np.ones gives a matrix of all ones, which is NOT the identity; use np.identity (or np.eye),
which puts ones only on the main diagonal'''
idmatrix=np.identity(3)
print(idmatrix)
###Output
[[1. 0. 0.]
 [0. 1. 0.]
 [0. 0. 1.]]
###Markdown
12. Create a 3x3x3 array with random values (★☆☆)
###Code
'''This can be done in 2 ways.firstly the way that i chose was to create a random number ndarray using m*n*k numbers(27)
in this case,and then reshape it into a (m,n,k) matrix.Or we can do this as mentioned in the solutions by directly
passing the shape to numpy random classes random method'''
new_matrix=np.random.rand(27).reshape((3,3,3))
print(new_matrix)
print(np.random.random((3,3,3)))
###Output
[[[0.07809135 0.04898578 0.47558162]
[0.9993894 0.02910353 0.71019356]
[0.25686361 0.27938524 0.69425457]]
[[0.6065161 0.10396111 0.52118283]
[0.68819857 0.67293342 0.2669946 ]
[0.37315374 0.44858262 0.5486767 ]]
[[0.69513921 0.2035966 0.69175945]
[0.76195683 0.22851447 0.15484736]
[0.95394653 0.18001367 0.50401602]]]
[[[0.48372124 0.70822352 0.52408378]
[0.26950103 0.43656129 0.93014059]
[0.78892149 0.09945082 0.64039963]]
[[0.06276586 0.00274591 0.90125876]
[0.45546816 0.08854011 0.13264544]
[0.35511955 0.27476579 0.88672656]]
[[0.70766694 0.25992707 0.29547826]
[0.35738838 0.61586472 0.61227287]
[0.81622363 0.27039751 0.22491028]]]
###Markdown
13. Create a 10x10 array with random values and find the minimum and maximum values (★☆☆)
###Code
'''This can be done in 2 ways.firstly the way that i chose was to create a random number ndarray by directly
passing the shape to numpy random classes random method and passing the created array to the numpy min and max method.Or
we can directly call the min and max on the created array itself.'''
new_matrix_10=np.random.random((10,10))
# print(new_matrix_10)
print(np.min(new_matrix_10),np.max(new_matrix_10))
print(new_matrix_10.min(),new_matrix_10.max())
###Output
0.024457449668070286 0.9990899170408306
0.024457449668070286 0.9990899170408306
###Markdown
14. Create a random vector of size 30 and find the mean value (★☆☆)
###Code
'''We can do this by calling the mean method on our created numpy array created using the random.random method'''
new_array_30=np.random.random(30)
print(new_array_30.mean())
###Output
0.4872822640639497
###Markdown
15. Create a 2d array with 1 on the border and 0 inside (★☆☆)
###Code
'''We can do this via simple indexing. For an m x n matrix of ones, the border is the first and
last row and the first and last column, so we set everything from row 1 to row m-2 and from
column 1 to column n-2 (inclusive) to zero.'''
m,n=10,10
arrayzeroes=np.ones((m,n))
arrayzeroes[1:m-1,1:n-1]=0
print(arrayzeroes)
###Output
[[1. 1. 1. 1. 1. 1. 1. 1. 1. 1.]
[1. 0. 0. 0. 0. 0. 0. 0. 0. 1.]
[1. 0. 0. 0. 0. 0. 0. 0. 0. 1.]
[1. 0. 0. 0. 0. 0. 0. 0. 0. 1.]
[1. 0. 0. 0. 0. 0. 0. 0. 0. 1.]
[1. 0. 0. 0. 0. 0. 0. 0. 0. 1.]
[1. 0. 0. 0. 0. 0. 0. 0. 0. 1.]
[1. 0. 0. 0. 0. 0. 0. 0. 0. 1.]
[1. 0. 0. 0. 0. 0. 0. 0. 0. 1.]
[1. 1. 1. 1. 1. 1. 1. 1. 1. 1.]]
###Markdown
16. How to add a border (filled with 0's) around an existing array? (★☆☆)
###Code
'''We can do this using the numpy pad function. Alternatively, methods like insert, vstack and
hstack also work (a sketch follows below the output).'''
m,n=4,5
outer_border_matrix=np.ones((m,n))
print(outer_border_matrix)
np.pad(outer_border_matrix,(1),'constant')
###Output
[[1. 1. 1. 1. 1.]
[1. 1. 1. 1. 1.]
[1. 1. 1. 1. 1.]
[1. 1. 1. 1. 1.]]
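A sketch of the vstack/hstack alternative mentioned above (array sizes here are illustrative):
```python
import numpy as np

a = np.ones((4, 5))
zc = np.zeros((a.shape[0], 1))  # one zero column
a = np.hstack([zc, a, zc])      # pad left and right
zr = np.zeros((1, a.shape[1]))  # one zero row
a = np.vstack([zr, a, zr])      # pad top and bottom
print(a)                        # 6x7 array of ones with a zero border
```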
###Markdown
17. What is the result of the following expression? (★☆☆)
```python
0 * np.nan
np.nan == np.nan
np.inf > np.nan
np.nan - np.nan
np.nan in set([np.nan])
0.3 == 3 * 0.1
```
###Code
print(0*np.nan)
print(np.nan==np.nan)
print(np.inf>np.nan)
print(np.nan-np.nan)
print(np.nan in set([np.nan]))
print(0.3==3*0.1)
###Output
nan
False
False
nan
True
False
###Markdown
18. Create a 5x5 matrix with values 1,2,3,4 just below the diagonal (★☆☆)
###Code
'''This can be done with the np.diag method (k=-1 puts the values just below the diagonal), or
we can start from a zero matrix and set the sub-diagonal entries by hand. Starting from
np.arange(25) would leave all the other values in place, so we start from zeros.'''
fivex5matrix=np.zeros((5,5),dtype=int)
fivex5matrix[1,0]=1
fivex5matrix[2,1]=2
fivex5matrix[3,2]=3
fivex5matrix[4,3]=4
print(fivex5matrix)
###Output
[[0 0 0 0 0]
 [1 0 0 0 0]
 [0 2 0 0 0]
 [0 0 3 0 0]
 [0 0 0 4 0]]
###Markdown
19. Create a 8x8 matrix and fill it with a checkerboard pattern (★☆☆)
###Code
'''Zeroing every other element of the raveled array does not work here: with an even row
length (8), every row would get the same pattern. Instead, zero alternating positions with
two slicing assignments so the pattern shifts on odd rows.'''
checkmatrix=np.ones((8,8))
checkmatrix[::2, 1::2]=0
checkmatrix[1::2, ::2]=0
checkmatrix
###Output
_____no_output_____
###Markdown
20. Consider a (6,7,8) shape array, what is the index (x,y,z) of the 100th element?
###Code
'''np.unravel_index converts a flat index into the multi-dimensional index for a given shape
(a list of flat indices can also be passed). Indexing is 0-based, so the 100th element has
flat index 99.'''
np.unravel_index(99,(6,7,8))
###Output
_____no_output_____
###Markdown
21. Create a checkerboard 8x8 matrix using the tile function (★☆☆)
###Code
'''The tile function repeats a given block. Repeating a single alternating row 8 times gives
identical rows, not a checkerboard, so we tile the 2x2 block [[0,1],[1,0]] four times in each
direction instead.'''
newarray=np.array([[0,1],[1,0]])
np.tile(newarray,(4,4))
###Output
_____no_output_____
###Markdown
22. Normalize a 5x5 random matrix (★☆☆)
###Code
'''Here the matrix is rescaled by dividing every element by its determinant (a single scalar
unique to the matrix, see https://en.wikipedia.org/wiki/Determinant). Note that "normalize"
more commonly means standardizing as (x - mean) / std, as done for this same exercise later
in this document.'''
normalizematrinx=np.random.random((5,5))
print(normalizematrinx)
determinant=np.linalg.det(normalizematrinx)
print(determinant)
normalizematrinx/=determinant
print(normalizematrinx)
###Output
[[0.51099193 0.66140224 0.44830203 0.19321164 0.65814295]
[0.20223691 0.34620438 0.02645046 0.78087928 0.37897902]
[0.63919224 0.23977047 0.06149836 0.36376474 0.63215891]
[0.0155844 0.40818607 0.71835942 0.61074759 0.8811245 ]
[0.09774466 0.01534684 0.15358355 0.9697329 0.47281814]]
0.028139673339381156
[[18.15912801 23.50426161 15.93131621 6.86616488 23.38843613]
[ 7.18689602 12.30307047 0.93997038 27.75011902 13.46778328]
[22.71498447 8.52072694 2.185468 12.92711317 22.46504075]
[ 0.55382317 14.50571459 25.5283498 21.70414651 31.31253468]
[ 3.47355349 0.54538102 5.45790103 34.46141275 16.80254543]]
###Markdown
23. Create a custom dtype that describes a color as four unsigned bytes (RGBA) (★☆☆) 24. Multiply a 5x3 matrix by a 3x2 matrix (real matrix product) (★☆☆)
###Code
'''We can do this in 2 ways. Firstly, we can use the dot method on the first matrix and multiply
it with the second matrix as shown. Or we can use the @ operator, which performs the same
operation; this syntax is becoming increasingly common, so it is good to know.'''
n5X3=np.random.random((5,3))
n3x2=np.random.random((3,2))
print(n5X3 @ n3x2)
print(n5X3.dot(n3x2))
###Output
[[0.95661141 0.64514016]
[0.66807619 0.31910586]
[0.57827602 0.24663639]
[0.68355174 0.3410374 ]
[0.61122 0.33484139]]
[[0.95661141 0.64514016]
[0.66807619 0.31910586]
[0.57827602 0.24663639]
[0.68355174 0.3410374 ]
[0.61122 0.33484139]]
###Markdown
25. Given a 1D array, negate all elements which are between 3 and 8, in place. (★☆☆)
###Code
'''Negating in numpy simply means multiplying the values by -1; a boolean mask selects the
elements in place, so no extra numpy functions are needed.'''
onedarray=np.arange(100)[::-1]
onedarray[(onedarray>3) & (onedarray<=8)] *=-1
onedarray
###Output
_____no_output_____
###Markdown
26. What is the output of the following script? (★☆☆)
```python
# Author: Jake VanderPlas
print(sum(range(5),-1))
from numpy import *
print(sum(range(5),-1))
```
###Code
'''For the built-in sum, the second arg is the start value, so sum(range(5),-1) = 0+1+2+3+4-1 = 9.
For np.sum, the second arg is the axis, and axis=-1 sums along the last axis, giving 10.
Both prints show 10 here, most likely because the cell was re-run after `from numpy import *`
had already replaced the built-in sum with numpy's.'''
print(sum(range(5),-1))
from numpy import *
print(sum(range(5),-1))
###Output
10
10
###Markdown
27. Consider an integer vector Z, which of these expressions are legal? (★☆☆)
```python
Z**Z
2 << Z >> 2
Z <- Z
1j*Z
Z/1/1
Z<Z>Z
```
###Code
intvector=np.array(range(5))
intvector**intvector
intvector/1/1
intvector < -intvector
1j*intvector
2<<intvector>>2
# intvector<intvector>intvector   # illegal: raises ValueError (see below)
'''The only illegal expression is the chained comparison Z<Z>Z: python evaluates it as
(Z<Z) and (Z>Z), and the implicit bool() of a whole array is ambiguous, so numpy raises a
ValueError. Single element-wise comparisons like Z < -Z are fine; to combine two of them,
use the element-wise & and | operators instead of `and`/`or`.'''
###Output
_____no_output_____
###Markdown
28. What are the results of the following expressions?
```python
np.array(0) / np.array(0)
np.array(0) // np.array(0)
np.array([np.nan]).astype(int).astype(float)
```
###Code
np.array(0) / np.array(0)
np.array(0) // np.array(0)
np.array([np.nan]).astype(int).astype(float)
###Output
_____no_output_____
###Markdown
29. How to round away from zero a float array ? (★☆☆)
30. How to find common values between two arrays? (★☆☆)
###Code
# np.lookfor("common")
# ?np.intersect1d
'''The intersect1d function accepts 2 arrays as parameters and then returns the common elements in both those arrays'''
firstarray=np.arange(100)
secondarray=np.arange(200)
common=np.intersect1d(firstarray,secondarray)
common
###Output
_____no_output_____
###Markdown
31. How to ignore all numpy warnings (not recommended)? (★☆☆)
32. Is the following expression true? (★☆☆)
```python
np.sqrt(-1) == np.emath.sqrt(-1)
```
###Code
'''The expression is not true: np.sqrt(-1) returns nan (the real square root is undefined for
negatives), while np.emath.sqrt(-1) returns the complex number 1j, and nan == 1j is False.'''
# ?np.emath.sqrt
# ?np.sqrt
###Output
_____no_output_____
###Markdown
33. How to get the dates of yesterday, today and tomorrow? (★☆☆)
###Code
# np.lookfor("today")
'''np.datetime64 with timedelta64 arithmetic gives yesterday, today and tomorrow for any run
date (hardcoding three date strings only works on one particular day)'''
yesterday = np.datetime64('today', 'D') - np.timedelta64(1, 'D')
today = np.datetime64('today', 'D')
tomorrow = np.datetime64('today', 'D') + np.timedelta64(1, 'D')
###Output
_____no_output_____
###Markdown
34. How to get all the dates corresponding to the month of July 2016? (★★☆)
###Code
'''The datetime64 dtype can render the dates for a particular month: give np.arange the
starting and ending dates and it produces every date within the given period'''
np.arange('2016-07-01','2016-08-01',dtype='datetime64[D]')
###Output
_____no_output_____
###Markdown
35. How to compute ((A+B)\*(-A/2)) in place (without copy)? (★★☆)
36. Extract the integer part of a random array using 5 different methods (★★☆)
37. Create a 5x5 matrix with row values ranging from 0 to 4 (★★☆)
###Code
'''We can generate an initial matrix of zeros and add np.arange(5): broadcasting adds the
row 0..4 to every row of the matrix (a tile-based alternative is sketched after the output).'''
init_matrix=np.zeros((5,5))
init_matrix+=np.arange(5)
print(init_matrix)
###Output
[[0. 1. 2. 3. 4.]
[0. 1. 2. 3. 4.]
[0. 1. 2. 3. 4.]
[0. 1. 2. 3. 4.]
[0. 1. 2. 3. 4.]]
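The same matrix can be built without broadcasting; a sketch using np.tile as an alternative:
```python
import numpy as np

print(np.tile(np.arange(5), (5, 1)))  # repeat the row 0..4 five times
```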
###Markdown
38. Consider a generator function that generates 10 integers and use it to build an array (★☆☆)
###Code
'''The np.fromiter function allows us to build an array from a generator expression; the
exercise asks for 10 integers, so we use range(10)'''
generator_exp=(i for i in range(10))
print(generator_exp)
print(np.fromiter(generator_exp,dtype=int))
###Output
<generator object <genexpr> at 0x7f04bca88f10>
[0 1 2 3 4 5 6 7 8 9]
###Markdown
39. Create a vector of size 10 with values ranging from 0 to 1, both excluded (★★☆)
###Code
'''Using the np.linspace function we can create evenly spaced numbers within a given interval.
Its endpoint argument specifies whether to include the last element; zero is included by
default, so we simply start our slice from the element after zero'''
np.linspace(0,1,11,endpoint=False)[1:]
###Output
_____no_output_____
###Markdown
40. Create a random vector of size 10 and sort it (★★☆)
###Code
'''Create a random vector using any method and sort it with np.sort (or the array's own sort
method); linspace would give evenly spaced values, not random ones'''
np.sort(np.random.random(10))
###Output
_____no_output_____
###Markdown
41. How to sum a small array faster than np.sum? (★★☆)
###Code
# np.lookfor('sum')   # np.lookfor searches the docstrings; its long result listing is omitted
'''For a small array, calling the ufunc reduction directly avoids some of np.sum's
argument-handling overhead, so np.add.reduce is usually slightly faster'''
small = np.arange(10)
print(np.add.reduce(small))
###Output
45
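To see the speed difference on your own machine, a small timing sketch (absolute numbers vary by machine; np.add.reduce is usually somewhat faster for small arrays):
```python
import timeit

import numpy as np

a = np.arange(10)
print(timeit.timeit(lambda: np.sum(a), number=100_000))
print(timeit.timeit(lambda: np.add.reduce(a), number=100_000))
```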
###Markdown
42. Consider two random array A and B, check if they are equal (★★☆)
###Code
'''The np.equal method gives the element-wise truth comparison, whereas np.allclose reports
whether all elements are equal within a tolerance.
allclose requires the two matrices to have broadcast-compatible shapes, while array_equal
first compares the shapes and then the elements, so array_equal is the safer choice here.
'''
randarray1=np.random.rand(4)
randarray2=np.random.rand(5)
# print(np.allclose(randarray1,randarray2))
print(np.array_equal(randarray1,randarray2))
# np.lookfor('equal')
# ?np.array_equal
###Output
False
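A sketch of the tolerance difference on same-shaped arrays (the values are illustrative):
```python
import numpy as np

a = np.array([1.0, 2.0, 3.0])
b = a + 1e-12               # tiny floating-point noise
print(np.array_equal(a, b)) # False: exact element-wise comparison
print(np.allclose(a, b))    # True: equal within the default tolerance
```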
###Markdown
43. Make an array immutable (read-only) (★★☆)
###Code
'''The simplest way is to clear the array's writeable flag, as below
(https://stackoverflow.com/questions/5541324/immutable-numpy-array).
Converting the array to a tuple also gives an immutable object, but it is no longer an ndarray.
'''
init_array=np.random.rand(10)
init_array.setflags(write=False)
###Output
_____no_output_____
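A sketch of what happens when the read-only array is written to (the message is numpy's usual wording):
```python
import numpy as np

a = np.zeros(3)
a.setflags(write=False)
try:
    a[0] = 1.0
except ValueError as e:
    print(e)  # assignment destination is read-only
```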
###Markdown
44. Consider a random 10x2 matrix representing cartesian coordinates, convert them to polar coordinates (★★☆)
45. Create random vector of size 10 and replace the maximum value by 0 (★★☆)
###Code
'''Using argmax, we can find the index of the maximum element in the array and then set the
value at that index to zero'''
init_array=np.random.rand(10)
maxval=np.argmax(init_array)
# index_max=np.where(maxval,init_array)
init_array[maxval]=0
print(init_array)
###Output
[0. 0.47079336 0.51235952 0.26298048 0.09889671 0.79884636
0.14280278 0.80566579 0.8795261 0.49992016]
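Note that argmax returns only the first maximum; to zero every occurrence of the maximum value, a boolean mask can be used instead, as in this sketch:
```python
import numpy as np

v = np.array([1.0, 5.0, 5.0, 2.0])
v[v == v.max()] = 0  # zeros all occurrences of the maximum
print(v)             # [1. 0. 0. 2.]
```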
###Markdown
46. Create a structured array with `x` and `y` coordinates covering the \[0,1\]x\[0,1\] area (★★☆)
47. Given two arrays, X and Y, construct the Cauchy matrix C (Cij =1/(xi - yj))
48. Print the minimum and maximum representable value for each numpy scalar type (★★☆)
49. How to print all the values of an array? (★★☆)
50. How to find the closest value (to a given scalar) in a vector? (★★☆)
51. Create a structured array representing a position (x,y) and a color (r,g,b) (★★☆)
52. Consider a random vector with shape (100,2) representing coordinates, find point by point distances (★★☆)
53. How to convert a float (32 bits) array into an integer (32 bits) in place?
54. How to read the following file? (★★☆)
```
1, 2, 3, 4, 5
6,  ,  , 7, 8
 ,  , 9,10,11
```
55. What is the equivalent of enumerate for numpy arrays? (★★☆)
56. Generate a generic 2D Gaussian-like array (★★☆)
57. How to randomly place p elements in a 2D array? (★★☆)
58. Subtract the mean of each row of a matrix (★★☆)
###Code
'''Get the mean along each row (axis=1, with keepdims=True so it broadcasts down the rows)
and subtract it from the original array; without keepdims the (3,) mean vector would be
broadcast across the columns instead, subtracting the wrong means'''
init_array=np.random.rand(9).reshape(3,3)
mean=np.mean(init_array,axis=1,keepdims=True)
print(init_array)
print(init_array-mean)
###Output
[[0.15526401 0.36099196 0.95507526]
 [0.13015837 0.27284659 0.80911787]
 [0.96565706 0.09242216 0.58342427]]
[[-0.33517973 -0.12945178  0.46463152]
 [-0.27388257 -0.13119435  0.40507693]
 [ 0.41848923 -0.45474567  0.03625644]]
###Markdown
100 numpy exercises
This is a collection of exercises that have been collected in the numpy mailing list, on stack overflow and in the numpy documentation. The goal of this collection is to offer a quick reference for both old and new users but also to provide a set of exercises for those who teach. If you find an error or think you've a better way to solve some of them, feel free to open an issue at
1. Import the numpy package under the name `np` (★☆☆)
###Code
import numpy as np
###Output
_____no_output_____
###Markdown
2. Print the numpy version and the configuration (★☆☆)
###Code
print(np.__version__)
###Output
1.16.5
###Markdown
3. Create a null vector of size 10 (★☆☆)
###Code
np.zeros(10)
###Output
_____no_output_____
###Markdown
4. How to find the memory size of any array (★☆☆)
###Code
z = np.zeros(10)
print(z.size*z.itemsize)
###Output
80
###Markdown
5. How to get the documentation of the numpy add function from the command line? (★☆☆)
6. Create a null vector of size 10 but the fifth value which is 1 (★☆☆)
7. Create a vector with values ranging from 10 to 49 (★☆☆)
###Code
z = np.arange(10,50)
z
###Output
_____no_output_____
###Markdown
8. Reverse a vector (first element becomes last) (★☆☆)
###Code
z = np.arange(10)
z= z[::-1]
print(z)
###Output
[9 8 7 6 5 4 3 2 1 0]
###Markdown
9. Create a 3x3 matrix with values ranging from 0 to 8 (★☆☆)
###Code
z = np.arange(0,9).reshape(3,3)
z
###Output
_____no_output_____
###Markdown
10. Find indices of non-zero elements from \[1,2,0,0,4,0\] (★☆☆)
###Code
a = np.array([1,2,0,0,4,0])
z = np.nonzero(a)
print(z)
###Output
(array([0, 1, 4], dtype=int64),)
###Markdown
11. Create a 3x3 identity matrix (★☆☆)
###Code
a = np.identity(3)
a
###Output
_____no_output_____
###Markdown
12. Create a 3x3x3 array with random values (★☆☆)
###Code
a = np.random.random((3,3,3))
a
###Output
_____no_output_____
###Markdown
13. Create a 10x10 array with random values and find the minimum and maximum values (★☆☆)
###Code
a = np.random.random((10,10))
aMax,aMin = a.max(),a.min()
print(aMax,aMin)
###Output
0.9894527896688977 0.0033449179792053307
###Markdown
14. Create a random vector of size 30 and find the mean value (★☆☆)
###Code
a = np.random.rand(30)
print(a,a.mean())
###Output
[0.1846994 0.41626531 0.3572198 0.94343712 0.82735063 0.45271474
0.78023616 0.18597967 0.21119891 0.8124634 0.67189869 0.73030937
0.38998714 0.52991707 0.55362064 0.72072631 0.20886604 0.78750828
0.18651096 0.01735404 0.7524434 0.87639775 0.86362552 0.8380825
0.33414464 0.92022892 0.28823983 0.57098412 0.4469707 0.10179415] 0.532039173298709
###Markdown
15. Create a 2d array with 1 on the border and 0 inside (★☆☆)
###Code
a = np.ones((10,10))
a[1:-1,1:-1] = 0
a
###Output
_____no_output_____
###Markdown
16. How to add a border (filled with 0's) around an existing array? (★☆☆)
###Code
a = np.ones((5,5))
a = np.pad(a,pad_width=1,mode='constant',constant_values=0)
a
###Output
_____no_output_____
###Markdown
17. What is the result of the following expression? (★☆☆)
```python
import numpy as np
0 * np.nan
np.nan == np.nan
np.inf > np.nan
np.nan - np.nan
np.nan in set([np.nan])
0.3 == 3 * 0.1
```
###Code
print(0 * np.nan)
print(np.nan == np.nan)
print(np.inf > np.nan)
print(np.nan - np.nan)
print(np.nan in set([np.nan]))
print(0.3 == 3 * 0.1)
###Output
nan
False
False
nan
True
False
###Markdown
18. Create a 5x5 matrix with values 1,2,3,4 just below the diagonal (★☆☆)
###Code
a = np.diag(np.arange(1,5),k = -1)
a
###Output
_____no_output_____
###Markdown
19. Create a 8x8 matrix and fill it with a checkerboard pattern (★☆☆)
###Code
Z = np.zeros((8,8),dtype=int)
Z[1::2,::2] = 1
Z[::2,1::2] = 1
print(Z)
###Output
[[0 1 0 1 0 1 0 1]
[1 0 1 0 1 0 1 0]
[0 1 0 1 0 1 0 1]
[1 0 1 0 1 0 1 0]
[0 1 0 1 0 1 0 1]
[1 0 1 0 1 0 1 0]
[0 1 0 1 0 1 0 1]
[1 0 1 0 1 0 1 0]]
###Markdown
20. Consider a (6,7,8) shape array, what is the index (x,y,z) of the 100th element?
###Code
a = np.unravel_index(99,(6,7,8))  # 0-based: the 100th element has flat index 99
a
###Output
_____no_output_____
###Markdown
21. Create a checkerboard 8x8 matrix using the tile function (★☆☆)
###Code
a = np.tile(([[0,1],[1,0]]),(4,4))
a
###Output
_____no_output_____
###Markdown
22. Normalize a 5x5 random matrix (★☆☆)
###Code
a = np.random.random((5,5))
a = (a-a.mean())/a.std()
a
###Output
_____no_output_____
###Markdown
23. Create a custom dtype that describes a color as four unsigned bytes (RGBA) (★☆☆)
###Code
color = np.dtype([("r", np.ubyte, 1),
("g", np.ubyte, 1),
("b", np.ubyte, 1),
("a", np.ubyte, 1)])
###Output
_____no_output_____
###Markdown
24. Multiply a 5x3 matrix by a 3x2 matrix (real matrix product) (★☆☆)
###Code
a = np.random.randint(0,10,(5,3))
b = np.random.randint(0,10,(3,2))
c = np.dot(a,b)
c
###Output
_____no_output_____
###Markdown
25. Given a 1D array, negate all elements which are between 3 and 8, in place. (★☆☆)
###Code
Z = np.arange(11)
Z[(3 < Z) & (Z <= 8)] *= -1
print(Z)
###Output
[ 0 1 2 3 -4 -5 -6 -7 -8 9 10]
###Markdown
26. What is the output of the following script? (★☆☆)
```python
# Author: Jake VanderPlas
print(sum(range(5),-1))
from numpy import *
print(sum(range(5),-1))
```
###Code
print(sum(range(5),-1))
from numpy import *
print(sum(range(5),-1))
###Output
10
10
###Markdown
27. Consider an integer vector Z, which of these expressions are legal? (★☆☆)
```python
Z**Z
2 << Z >> 2
Z <- Z
1j*Z
Z/1/1
Z<Z>Z
```
###Code
Z = np.arange(5)  # a fresh integer vector; the Z from exercise 25 contains negatives,
                  # which would make the shift expression below raise
Z**Z
2 << Z >> 2
Z <- Z
1j*Z
Z/1/1
# Z<Z>Z   # illegal: the chained comparison needs bool() of a whole array and raises ValueError
###Output
_____no_output_____
###Markdown
28. What are the results of the following expressions?
```python
import numpy as np
np.array(0) / np.array(0)
np.array(0) // np.array(0)
np.array([np.nan]).astype(int).astype(float)
```
###Code
print(np.array(0) / np.array(0))
print(np.array(0) // np.array(0))
print(np.array([np.nan]).astype(int).astype(float))
###Output
nan
0
[-2.14748365e+09]
###Markdown
29. How to round away from zero a float array ? (★☆☆)
###Code
z = np.random.uniform(-10,+10,10)
print(z)
a = np.ceil(np.abs(z))
a = np.copysign(a,z)
print(a)
###Output
[-8.62671653 -7.09880113 -6.89654145 6.46688051 6.58682965 0.35029675
0.86552309 -0.08276563 0.13443567 5.506434 ]
[-9. -8. -7. 7. 7. 1. 1. -1. 1. 6.]
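The same ceil-of-abs-then-copysign recipe on fixed values, as a quick sketch:
```python
import numpy as np

z = np.array([-1.2, 1.2, -0.5])
print(np.copysign(np.ceil(np.abs(z)), z))  # [-2.  2. -1.]: rounded away from zero
```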
###Markdown
30. How to find common values between two arrays? (★☆☆)
###Code
a = np.random.randint(0,10,10)
b = np.random.randint(0,10,10)
print(a)
print(b)
c = np.intersect1d(a,b)
print(c)
###Output
[5 2 0 6 4 2 3 7 4 1]
[6 7 7 0 3 9 0 2 5 6]
[0 2 3 5 6 7]
###Markdown
31. How to ignore all numpy warnings (not recommended)? (★☆☆)
32. Is the following expression true? (★☆☆)
```python
import numpy as np
np.sqrt(-1) == np.emath.sqrt(-1)
```
###Code
print(np.sqrt(-1) == np.emath.sqrt(-1))
###Output
False
###Markdown
33. How to get the dates of yesterday, today and tomorrow? (★☆☆)
###Code
yesterday = np.datetime64('today', 'D') - np.timedelta64(1, 'D')
today = np.datetime64('today', 'D')
tomorrow = np.datetime64('today', 'D') + np.timedelta64(1, 'D')
###Output
_____no_output_____
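A quick usage sketch printing the three dates (the values depend on the day the cell runs):
```python
import numpy as np

today = np.datetime64('today', 'D')
one_day = np.timedelta64(1, 'D')
print(today - one_day, today, today + one_day)
```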
###Markdown
34. How to get all the dates corresponding to the month of July 2016? (★★☆)
###Code
a = np.arange('2016-07','2016-08',dtype='datetime64[D]')
print(a)
###Output
['2016-07-01' '2016-07-02' '2016-07-03' '2016-07-04' '2016-07-05'
'2016-07-06' '2016-07-07' '2016-07-08' '2016-07-09' '2016-07-10'
'2016-07-11' '2016-07-12' '2016-07-13' '2016-07-14' '2016-07-15'
'2016-07-16' '2016-07-17' '2016-07-18' '2016-07-19' '2016-07-20'
'2016-07-21' '2016-07-22' '2016-07-23' '2016-07-24' '2016-07-25'
'2016-07-26' '2016-07-27' '2016-07-28' '2016-07-29' '2016-07-30'
'2016-07-31']
###Markdown
35. How to compute ((A+B)\*(-A/2)) in place (without copy)? (★★☆)
###Code
A = np.ones(3)*1
B = np.ones(3)*2
C = np.ones(3)*3
np.add(A,B,out=B)
np.divide(A,2,out=A)
np.negative(A,out=A)
np.multiply(A,B,out=A)
###Output
array([-1.5, -1.5, -1.5])
###Markdown
36. Extract the integer part of a random array using 5 different methods (★★☆)
###Code
Z = np.random.uniform(0,10,10)
print(Z)
print (Z - Z%1)
print (np.floor(Z))
print (np.ceil(Z)-1)
print (Z.astype(int))
print (np.trunc(Z))
###Output
[5.12587063e-01 6.11417676e+00 4.52278136e+00 7.40439203e+00
1.97934146e+00 7.83186706e+00 9.08705098e+00 2.55233858e-03
5.16819291e+00 8.21004979e+00]
[0. 6. 4. 7. 1. 7. 9. 0. 5. 8.]
[0. 6. 4. 7. 1. 7. 9. 0. 5. 8.]
[0. 6. 4. 7. 1. 7. 9. 0. 5. 8.]
[0 6 4 7 1 7 9 0 5 8]
[0. 6. 4. 7. 1. 7. 9. 0. 5. 8.]
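The five methods above agree for non-negative inputs but differ for negatives; a small sketch:
```python
import numpy as np

z = np.array([-1.7, 1.7])
print(np.floor(z))    # [-2.  1.]: rounds toward -inf
print(np.trunc(z))    # [-1.  1.]: rounds toward zero
print(z.astype(int))  # [-1  1]: also truncates toward zero
```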
###Markdown
37. Create a 5x5 matrix with row values ranging from 0 to 4 (★★☆)
###Code
Z = np.zeros((5,5))
Z += np.arange(5)
print(Z)
###Output
[[0. 1. 2. 3. 4.]
[0. 1. 2. 3. 4.]
[0. 1. 2. 3. 4.]
[0. 1. 2. 3. 4.]
[0. 1. 2. 3. 4.]]
###Markdown
38. Consider a generator function that generates 10 integers and use it to build an array (★☆☆)
###Code
def generate():
for x in range(10):
yield x
z = np.fromiter(generate(),dtype = float)
print(z)
###Output
_____no_output_____
###Markdown
39. Create a vector of size 10 with values ranging from 0 to 1, both excluded (★★☆)
###Code
a = np.linspace(0,1,11,endpoint=False)[1:]
print(a)
###Output
[0.09090909 0.18181818 0.27272727 0.36363636 0.45454545 0.54545455
0.63636364 0.72727273 0.81818182 0.90909091]
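The same ten interior points can also be obtained by asking for 12 points and dropping both endpoints; a sketch:
```python
import numpy as np

print(np.linspace(0, 1, 12)[1:-1])  # identical to linspace(0, 1, 11, endpoint=False)[1:]
```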
###Markdown
40. Create a random vector of size 10 and sort it (★★☆)
###Code
a = np.random.random(10)
print(a)
b = np.sort(a)
print(b)
###Output
[4.86757380e-01 1.62914068e-04 2.68263652e-01 3.81264516e-01
4.10301938e-01 6.98630253e-01 3.16398734e-01 6.16344498e-01
7.32390798e-02 2.48836384e-02]
[1.62914068e-04 2.48836384e-02 7.32390798e-02 2.68263652e-01
3.16398734e-01 3.81264516e-01 4.10301938e-01 4.86757380e-01
6.16344498e-01 6.98630253e-01]
###Markdown
41. How to sum a small array faster than np.sum? (★★☆)
###Code
a = np.random.randint(1,10,10)
print(a)
x = np.add.reduce(a)  # ufunc reduction: np.multiply.reduce would give the product, not the sum
print(x)
###Output
[6 8 9 8 7 1 3 4 8 6]
60
###Markdown
42. Consider two random array A and B, check if they are equal (★★☆)
43. Make an array immutable (read-only) (★★☆)
###Code
z = np.zeros(5)
z.flags.writeable = False
z[0] = 1  # raises ValueError: assignment destination is read-only
###Output
_____no_output_____
###Markdown
100 numpy exercises
This is a collection of exercises that have been collected in the numpy mailing list, on stack overflow and in the numpy documentation. The goal of this collection is to offer a quick reference for both old and new users but also to provide a set of exercises for those who teach. If you find an error or think you've a better way to solve some of them, feel free to open an issue at
1. Import the numpy package under the name `np` (★☆☆)
###Code
import numpy as np
###Output
_____no_output_____
###Markdown
2. Print the numpy version and the configuration (★☆☆)
###Code
print(np.__version__)
np.show_config()
###Output
1.14.3
mkl_info:
libraries = ['mkl_rt']
library_dirs = ['C:/Apps/Anaconda3\\Library\\lib']
define_macros = [('SCIPY_MKL_H', None), ('HAVE_CBLAS', None)]
include_dirs = ['C:\\Program Files (x86)\\IntelSWTools\\compilers_and_libraries_2016.4.246\\windows\\mkl', 'C:\\Program Files (x86)\\IntelSWTools\\compilers_and_libraries_2016.4.246\\windows\\mkl\\include', 'C:\\Program Files (x86)\\IntelSWTools\\compilers_and_libraries_2016.4.246\\windows\\mkl\\lib', 'C:/Apps/Anaconda3\\Library\\include']
blas_mkl_info:
libraries = ['mkl_rt']
library_dirs = ['C:/Apps/Anaconda3\\Library\\lib']
define_macros = [('SCIPY_MKL_H', None), ('HAVE_CBLAS', None)]
include_dirs = ['C:\\Program Files (x86)\\IntelSWTools\\compilers_and_libraries_2016.4.246\\windows\\mkl', 'C:\\Program Files (x86)\\IntelSWTools\\compilers_and_libraries_2016.4.246\\windows\\mkl\\include', 'C:\\Program Files (x86)\\IntelSWTools\\compilers_and_libraries_2016.4.246\\windows\\mkl\\lib', 'C:/Apps/Anaconda3\\Library\\include']
blas_opt_info:
libraries = ['mkl_rt']
library_dirs = ['C:/Apps/Anaconda3\\Library\\lib']
define_macros = [('SCIPY_MKL_H', None), ('HAVE_CBLAS', None)]
include_dirs = ['C:\\Program Files (x86)\\IntelSWTools\\compilers_and_libraries_2016.4.246\\windows\\mkl', 'C:\\Program Files (x86)\\IntelSWTools\\compilers_and_libraries_2016.4.246\\windows\\mkl\\include', 'C:\\Program Files (x86)\\IntelSWTools\\compilers_and_libraries_2016.4.246\\windows\\mkl\\lib', 'C:/Apps/Anaconda3\\Library\\include']
lapack_mkl_info:
libraries = ['mkl_rt']
library_dirs = ['C:/Apps/Anaconda3\\Library\\lib']
define_macros = [('SCIPY_MKL_H', None), ('HAVE_CBLAS', None)]
include_dirs = ['C:\\Program Files (x86)\\IntelSWTools\\compilers_and_libraries_2016.4.246\\windows\\mkl', 'C:\\Program Files (x86)\\IntelSWTools\\compilers_and_libraries_2016.4.246\\windows\\mkl\\include', 'C:\\Program Files (x86)\\IntelSWTools\\compilers_and_libraries_2016.4.246\\windows\\mkl\\lib', 'C:/Apps/Anaconda3\\Library\\include']
lapack_opt_info:
libraries = ['mkl_rt']
library_dirs = ['C:/Apps/Anaconda3\\Library\\lib']
define_macros = [('SCIPY_MKL_H', None), ('HAVE_CBLAS', None)]
include_dirs = ['C:\\Program Files (x86)\\IntelSWTools\\compilers_and_libraries_2016.4.246\\windows\\mkl', 'C:\\Program Files (x86)\\IntelSWTools\\compilers_and_libraries_2016.4.246\\windows\\mkl\\include', 'C:\\Program Files (x86)\\IntelSWTools\\compilers_and_libraries_2016.4.246\\windows\\mkl\\lib', 'C:/Apps/Anaconda3\\Library\\include']
###Markdown
3. Create a null vector of size 10 (★☆☆)
###Code
z=np.zeros(10)
z
###Output
_____no_output_____
###Markdown
4. How to find the memory size of any array (★☆☆) 5. How to get the documentation of the numpy add function from the command line? (★☆☆)
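A minimal sketch for 4: `nbytes` reports the size of the data buffer, which equals `size * itemsize`.
###Code
Z = np.zeros((10, 10))
print(Z.size * Z.itemsize)   # bytes in the data buffer
print(Z.nbytes)              # same value, precomputed
###Output
_____no_output_____
###Markdown
5 is answered below with `np.info`.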
###Code
np.info(np.add)
#command line?
###Output
add(x1, x2, /, out=None, *, where=True, casting='same_kind', order='K', dtype=None, subok=True[, signature, extobj])
Add arguments element-wise.
Parameters
----------
x1, x2 : array_like
The arrays to be added. If ``x1.shape != x2.shape``, they must be
broadcastable to a common shape (which may be the shape of one or
the other).
out : ndarray, None, or tuple of ndarray and None, optional
A location into which the result is stored. If provided, it must have
a shape that the inputs broadcast to. If not provided or `None`,
a freshly-allocated array is returned. A tuple (possible only as a
keyword argument) must have length equal to the number of outputs.
where : array_like, optional
Values of True indicate to calculate the ufunc at that position, values
of False indicate to leave the value in the output alone.
**kwargs
For other keyword-only arguments, see the
:ref:`ufunc docs <ufuncs.kwargs>`.
Returns
-------
add : ndarray or scalar
The sum of `x1` and `x2`, element-wise. Returns a scalar if
both `x1` and `x2` are scalars.
Notes
-----
Equivalent to `x1` + `x2` in terms of array broadcasting.
Examples
--------
>>> np.add(1.0, 4.0)
5.0
>>> x1 = np.arange(9.0).reshape((3, 3))
>>> x2 = np.arange(3.0)
>>> np.add(x1, x2)
array([[ 0., 2., 4.],
[ 3., 5., 7.],
[ 6., 8., 10.]])
###Markdown
6. Create a null vector of size 10 but the fifth value which is 1 (★☆☆)
###Code
z=np.zeros(10)
z[4]=1
z
###Output
_____no_output_____
###Markdown
7. Create a vector with values ranging from 10 to 49 (★☆☆)
###Code
z=np.arange(10,50)
z
###Output
_____no_output_____
###Markdown
8. Reverse a vector (first element becomes last) (★☆☆)
###Code
z=[1,2,4,5]
z[::-1]
###Output
_____no_output_____
###Markdown
9. Create a 3x3 matrix with values ranging from 0 to 8 (★☆☆)
###Code
# z=8*np.random.random((3,3)) #not this solution
z=np.arange(0,9).reshape((3,3))
z
###Output
_____no_output_____
###Markdown
10. Find indices of non-zero elements from \[1,2,0,0,4,0\] (★☆☆)
###Code
a=np.nonzero([1,2,0,0,4,0])
a
###Output
_____no_output_____
###Markdown
11. Create a 3x3 identity matrix (★☆☆)
###Code
z=np.eye(3,3)
z
###Output
_____no_output_____
###Markdown
12. Create a 3x3x3 array with random values (★☆☆)
###Code
z=np.random.random((3,3,3))
z
###Output
_____no_output_____
###Markdown
13. Create a 10x10 array with random values and find the minimum and maximum values (★☆☆)
###Code
z=np.random.random((10,10))
zmin,zmax=z.min(),z.max()
zmin,zmax
###Output
_____no_output_____
###Markdown
14. Create a random vector of size 30 and find the mean value (★☆☆)
###Code
z=np.random.random(30)
print("the mean is ", z.mean())
###Output
the mean is 0.505523768366917
###Markdown
100 numpy exercisesThis is a collection of exercises that have been collected in the numpy mailing list, on stack overflow and in the numpy documentation. The goal of this collection is to offer a quick reference for both old and new users but also to provide a set of exercises for those who teach.If you find an error or think you've a better way to solve some of them, feel free to open an issue at 1. Import the numpy package under the name `np` (★☆☆)
###Code
import numpy as np
###Output
_____no_output_____
###Markdown
2. Print the numpy version and the configuration (★☆☆)
###Code
print(np.__version__)
np.show_config()
###Output
1.16.2
mkl_info:
libraries = ['mkl_rt', 'pthread']
library_dirs = ['/home/hoshino/anaconda3/lib']
define_macros = [('SCIPY_MKL_H', None), ('HAVE_CBLAS', None)]
include_dirs = ['/home/hoshino/anaconda3/include']
blas_mkl_info:
libraries = ['mkl_rt', 'pthread']
library_dirs = ['/home/hoshino/anaconda3/lib']
define_macros = [('SCIPY_MKL_H', None), ('HAVE_CBLAS', None)]
include_dirs = ['/home/hoshino/anaconda3/include']
blas_opt_info:
libraries = ['mkl_rt', 'pthread']
library_dirs = ['/home/hoshino/anaconda3/lib']
define_macros = [('SCIPY_MKL_H', None), ('HAVE_CBLAS', None)]
include_dirs = ['/home/hoshino/anaconda3/include']
lapack_mkl_info:
libraries = ['mkl_rt', 'pthread']
library_dirs = ['/home/hoshino/anaconda3/lib']
define_macros = [('SCIPY_MKL_H', None), ('HAVE_CBLAS', None)]
include_dirs = ['/home/hoshino/anaconda3/include']
lapack_opt_info:
libraries = ['mkl_rt', 'pthread']
library_dirs = ['/home/hoshino/anaconda3/lib']
define_macros = [('SCIPY_MKL_H', None), ('HAVE_CBLAS', None)]
include_dirs = ['/home/hoshino/anaconda3/include']
###Markdown
3. Create a null vector of size 10 (★☆☆)
###Code
a = np.zeros((10))
###Output
_____no_output_____
###Markdown
4. How to find the memory size of any array (★☆☆)
###Code
a.nbytes
###Output
_____no_output_____
###Markdown
5. How to get the documentation of the numpy add function from the command line? (★☆☆) 6. Create a null vector of size 10 but the fifth value which is 1 (★☆☆) 7. Create a vector with values ranging from 10 to 49 (★☆☆)
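Minimal sketches for 5 and 6: the documentation lookup runs from a shell, and the vector sketch assumes 0-based indexing (the fifth value is index 4).
###Code
# 5. from a shell: python -c "import numpy; numpy.info(numpy.add)"
# 6.
z = np.zeros(10)
z[4] = 1
z
###Output
_____no_output_____
###Markdown
7 is answered below.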
###Code
np.arange(10, 50)
###Output
_____no_output_____
###Markdown
8. Reverse a vector (first element becomes last) (★☆☆)
###Code
a = np.arange(10, 50)
a[::-1]
###Output
_____no_output_____
###Markdown
9. Create a 3x3 matrix with values ranging from 0 to 8 (★☆☆)
###Code
np.arange(9).reshape((3, 3))
###Output
_____no_output_____
###Markdown
10. Find indices of non-zero elements from \[1,2,0,0,4,0\] (★☆☆)
###Code
np.nonzero([1, 2, 0, 0, 4, 0])
###Output
_____no_output_____
###Markdown
11. Create a 3x3 identity matrix (★☆☆)
###Code
np.eye(3)
###Output
_____no_output_____
###Markdown
12. Create a 3x3x3 array with random values (★☆☆)
###Code
np.random.random((3,3,3))
###Output
_____no_output_____
###Markdown
13. Create a 10x10 array with random values and find the minimum and maximum values (★☆☆)
###Code
a = np.random.random((10, 10))
print(np.max(a))
print(np.min(a))
###Output
0.9929558396429009
0.007995122448371528
###Markdown
14. Create a random vector of size 30 and find the mean value (★☆☆)
###Code
np.random.random(30).mean()
###Output
_____no_output_____
###Markdown
15. Create a 2d array with 1 on the border and 0 inside (★☆☆)
###Code
np.pad(np.zeros((3,4)), mode="constant", pad_width=1, constant_values=1)
###Output
_____no_output_____
###Markdown
16. How to add a border (filled with 0's) around an existing array? (★☆☆)
###Code
np.pad(a, mode="constant", pad_width=1, constant_values=0)
###Output
_____no_output_____
###Markdown
17. What is the result of the following expression? (★☆☆)
```python
0 * np.nan
np.nan == np.nan
np.inf > np.nan
np.nan - np.nan
np.nan in set([np.nan])
0.3 == 3 * 0.1
```
###Code
0.3 == 3 * 0.1
###Output
_____no_output_____
###Markdown
18. Create a 5x5 matrix with values 1,2,3,4 just below the diagonal (★☆☆)
###Code
np.diagflat(range(1,5), -1)
###Output
_____no_output_____
###Markdown
19. Create a 8x8 matrix and fill it with a checkerboard pattern (★☆☆) 20. Consider a (6,7,8) shape array, what is the index (x,y,z) of the 100th element?
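A sketch for 19 using slicing (the tile-based variant appears in 21):
###Code
Z = np.zeros((8, 8), dtype=int)
Z[::2, 1::2] = 1   # even rows, odd columns
Z[1::2, ::2] = 1   # odd rows, even columns
print(Z)
###Output
_____no_output_____
###Markdown
20 is answered below with `np.unravel_index`.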
###Code
# the 100th element has 0-based flat index 99
print(np.unravel_index(99, (6, 7, 8)))
###Output
_____no_output_____
###Markdown
21. Create a checkerboard 8x8 matrix using the tile function (★☆☆)
###Code
np.tile([[0, 1], [1, 0]], (4, 4))
###Output
_____no_output_____
###Markdown
22. Normalize a 5x5 random matrix (★☆☆)
###Code
a = np.random.random((5, 5))
a = (a - a.mean()) / a.std()   # zero mean, unit variance
a
###Output
_____no_output_____
###Markdown
23. Create a custom dtype that describes a color as four unsigned bytes (RGBA) (★☆☆)
###Code
dt = np.dtype([("r", np.ubyte), ("g", np.ubyte), ("b", np.ubyte), ("a", np.ubyte)])
###Output
_____no_output_____
###Markdown
24. Multiply a 5x3 matrix by a 3x2 matrix (real matrix product) (★☆☆)
###Code
a = np.arange(15).reshape((5, 3))
b = np.arange(6).reshape((3, 2))
np.matmul(a, b)
###Output
_____no_output_____
###Markdown
25. Given a 1D array, negate all elements which are between 3 and 8, in place. (★☆☆)
###Code
a = np.arange(15)
a[np.logical_and(a>3,a<8)] *= -1   # negate, rather than zero out
a
###Output
_____no_output_____
###Markdown
26. What is the output of the following script? (★☆☆)
```python
# Author: Jake VanderPlas
print(sum(range(5),-1))
from numpy import *
print(sum(range(5),-1))
```
###Code
from numpy import *
print(sum(range(5),-1))
###Output
10
###Markdown
27. Consider an integer vector Z, which of these expressions are legal? (★☆☆)
```python
Z**Z
2 << Z >> 2
Z <- Z
1j*Z
Z/1/1
Z<Z>Z
```
###Code
Z = np.arange(3)
Z<Z>Z   # illegal: chained comparisons on arrays raise ValueError
###Output
_____no_output_____
###Markdown
28. What are the results of the following expressions?
```python
np.array(0) / np.array(0)
np.array(0) // np.array(0)
np.array([np.nan]).astype(int).astype(float)
```
###Code
np.array([np.nan]).astype(int).astype(float)
###Output
_____no_output_____
###Markdown
29. How to round away from zero a float array ? (★☆☆) 30. How to find common values between two arrays? (★☆☆)
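A sketch for 29: copy the sign of each element onto the ceiling of its absolute value.
###Code
Z = np.random.uniform(-10, 10, 10)
print(np.copysign(np.ceil(np.abs(Z)), Z))   # rounds away from zero
###Output
_____no_output_____
###Markdown
30 is answered below with `np.intersect1d`.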
###Code
a = np.arange(15)
b = np.arange(3, 20)
np.intersect1d(a, b)
###Output
_____no_output_____
###Markdown
31. How to ignore all numpy warnings (not recommended)? (★☆☆)
###Code
np.seterr(all="ignore")
###Output
_____no_output_____
###Markdown
32. Is the following expression true? (★☆☆)
```python
np.sqrt(-1) == np.emath.sqrt(-1)
```
###Code
np.sqrt(-1) == np.emath.sqrt(-1)
###Output
_____no_output_____
###Markdown
33. How to get the dates of yesterday, today and tomorrow? (★☆☆)
###Code
np.datetime64("today", "D") + np.timedelta64(1, "D")
###Output
_____no_output_____
###Markdown
34. How to get all the dates corresponding to the month of July 2016? (★★☆)
###Code
np.arange("2016-06", "2016-07", dtype="datetime64[D]")
###Output
_____no_output_____
###Markdown
35. How to compute ((A+B)\*(-A/2)) in place (without copy)? (★★☆)
###Code
a = np.arange(3)
b = np.arange(3, 6)
# only the addition is truly in place here; the ufunc out= form (see the earlier copy) avoids all copies
(a.__iadd__(b))*(-a/2)
###Output
_____no_output_____
###Markdown
36. Extract the integer part of a random array using 5 different methods (★★☆) 37. Create a 5x5 matrix with row values ranging from 0 to 4 (★★☆)
###Code
np.tile(np.arange(5), (5, 1))
###Output
_____no_output_____
###Markdown
38. Consider a generator function that generates 10 integers and use it to build an array (★☆☆)
###Code
def generate():
    for x in range(10):
        yield x
np.fromiter(generate(), dtype=int)
###Output
_____no_output_____
###Markdown
39. Create a vector of size 10 with values ranging from 0 to 1, both excluded (★★☆)
###Code
np.linspace(0, 1, 11, endpoint=False)[1:]   # drops both 0 and 1
###Output
_____no_output_____
###Markdown
40. Create a random vector of size 10 and sort it (★★☆)
###Code
a = np.sort(np.random.random(10))
a
###Output
_____no_output_____
###Markdown
41. How to sum a small array faster than np.sum? (★★☆)
###Code
a = np.arange(10).reshape((2, 5))
%timeit np.sum(a)
def sumt(a):
    t = 0
    for i in a.flat:   # iterate over elements, not rows
        t += i
    return t
%timeit sumt(a)
###Output
2.94 µs ± 51.1 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
2.93 µs ± 99.7 ns per loop (mean ± std. dev. of 7 runs, 100000 loops each)
###Markdown
42. Consider two random array A and B, check if they are equal (★★☆)
###Code
a = np.random.random(4)
b = a
c = np.random.random(4)
np.array_equal(a, c)
###Output
_____no_output_____
###Markdown
43. Make an array immutable (read-only) (★★☆)
###Code
a.flags["WRITEABLE"] = False
a
a[0] = 1   # raises ValueError: assignment destination is read-only
###Output
_____no_output_____
###Markdown
44. Consider a random 10x2 matrix representing cartesian coordinates, convert them to polar coordinates (★★☆)
###Code
Z = np.random.random((10, 2))
r = np.sqrt(np.sum(Z ** 2, -1))
theta = np.arctan2(Z[:, 1], Z[:, 0])   # arctan2 is quadrant-aware
theta
###Output
_____no_output_____
###Markdown
45. Create random vector of size 10 and replace the maximum value by 0 (★★☆)
###Code
Z = np.random.random(10)
print(Z)
idx = np.unravel_index(np.argmax(Z), Z.shape)
Z[idx] = 0
print(Z)
###Output
[0.91327621 0.44819014 0.90751922 0.14680165 0.85062559 0.39022343
0.61739459 0.7236832 0.37179418 0.65201644]
[0. 0.44819014 0.90751922 0.14680165 0.85062559 0.39022343
0.61739459 0.7236832 0.37179418 0.65201644]
###Markdown
46. Create a structured array with `x` and `y` coordinates covering the \[0,1\]x\[0,1\] area (★★☆)
###Code
dt = np.dtype([("x", "<f4"), ("y", "<f4")])
Z = np.array([(0, 0), (0, 1)], dtype=dt)
Z["x"]
###Output
_____no_output_____
###Markdown
47. Given two arrays, X and Y, construct the Cauchy matrix C (Cij =1/(xi - yj))
###Code
X = np.random.rand(4)
Y = np.random.rand(3)
C = 1.0 / np.subtract.outer(X, Y)   # parentheses matter: Cij = 1/(xi - yj)
C
###Output
_____no_output_____
###Markdown
48. Print the minimum and maximum representable value for each numpy scalar type (★★☆)
###Code
print(np.iinfo(np.int8))
print(np.iinfo(np.int16))
print(np.iinfo(np.int32))
print(np.iinfo(np.int64))
print(np.iinfo(np.uint8))
print(np.iinfo(np.uint16))
print(np.iinfo(np.uint32))
print(np.iinfo(np.uint64))
print(np.finfo(np.float32))
print(np.finfo(np.float64))
###Output
Machine parameters for int8
---------------------------------------------------------------
min = -128
max = 127
---------------------------------------------------------------
Machine parameters for int16
---------------------------------------------------------------
min = -32768
max = 32767
---------------------------------------------------------------
Machine parameters for int32
---------------------------------------------------------------
min = -2147483648
max = 2147483647
---------------------------------------------------------------
Machine parameters for int64
---------------------------------------------------------------
min = -9223372036854775808
max = 9223372036854775807
---------------------------------------------------------------
Machine parameters for uint8
---------------------------------------------------------------
min = 0
max = 255
---------------------------------------------------------------
Machine parameters for uint16
---------------------------------------------------------------
min = 0
max = 65535
---------------------------------------------------------------
Machine parameters for uint32
---------------------------------------------------------------
min = 0
max = 4294967295
---------------------------------------------------------------
Machine parameters for uint64
---------------------------------------------------------------
min = 0
max = 18446744073709551615
---------------------------------------------------------------
Machine parameters for float32
---------------------------------------------------------------
precision = 6 resolution = 1.0000000e-06
machep = -23 eps = 1.1920929e-07
negep = -24 epsneg = 5.9604645e-08
minexp = -126 tiny = 1.1754944e-38
maxexp = 128 max = 3.4028235e+38
nexp = 8 min = -max
---------------------------------------------------------------
Machine parameters for float64
---------------------------------------------------------------
precision = 15 resolution = 1.0000000000000001e-15
machep = -52 eps = 2.2204460492503131e-16
negep = -53 epsneg = 1.1102230246251565e-16
minexp = -1022 tiny = 2.2250738585072014e-308
maxexp = 1024 max = 1.7976931348623157e+308
nexp = 11 min = -max
---------------------------------------------------------------
###Markdown
49. How to print all the values of an array? (★★☆)
###Code
#np.set_printoptions(threshold=np.inf)
Z = np.random.random(1000)
print(Z)
###Output
[1.39798776e-01 6.69724983e-01 5.87358700e-01 2.66794860e-01
6.04634358e-02 3.75329453e-01 6.70318075e-01 7.76229210e-02
1.83058870e-01 8.73172322e-01 1.80373374e-01 4.23921159e-01
9.43834689e-01 7.77251243e-01 5.00110242e-01 4.48791174e-01
3.13091233e-02 1.61199559e-01 4.40544415e-01 4.87060138e-01
4.50351242e-01 3.21611617e-01 4.95937922e-01 3.49077017e-02
5.67107275e-01 5.36396414e-01 7.27594881e-01 9.17879516e-01
9.63430025e-01 7.26479698e-01 6.62872258e-01 5.36090160e-01
9.34553190e-01 6.72610348e-01 8.79814278e-01 9.61289553e-01
2.10129772e-02 3.85393048e-01 9.34358600e-01 3.06922477e-01
6.72947154e-02 6.42373908e-02 1.84203502e-01 5.08936306e-01
8.82285170e-02 5.74399909e-01 5.58803426e-01 1.14857617e-01
6.52764230e-01 6.04308189e-01 9.67814801e-02 4.32253793e-01
8.13910272e-01 3.69212546e-02 9.19624325e-01 2.07435193e-01
5.01711878e-01 8.49120969e-01 7.22187441e-01 7.41335990e-01
7.27409269e-01 8.63633553e-01 5.73524304e-01 5.31048337e-01
5.98397422e-01 6.71766393e-01 3.36740246e-01 1.95992272e-01
2.48015133e-01 9.84769648e-01 7.81753026e-01 9.39500516e-01
8.94109306e-01 4.77897782e-01 6.61738002e-01 1.08009764e-01
5.90993719e-01 2.40480792e-02 7.40586188e-01 2.67285652e-01
9.36841594e-01 6.75820249e-01 5.62267335e-01 3.98223065e-01
1.28808429e-01 4.42073657e-02 8.19574658e-01 4.40825716e-01
7.11703767e-01 1.80567001e-01 6.36135574e-01 9.30451350e-01
6.70098361e-01 7.12688851e-01 8.21900662e-01 3.62611780e-01
6.80702468e-01 1.94450291e-01 7.15504982e-01 3.34611512e-01
3.68050279e-02 6.75371549e-01 8.66489494e-01 3.12524010e-01
7.86751684e-01 1.15410472e-04 4.61646787e-01 2.09148507e-01
5.38922020e-01 6.27257879e-01 9.32836344e-01 1.12284610e-01
7.05728916e-01 4.26887953e-01 5.59745036e-02 9.00750284e-01
6.84084559e-02 4.62603167e-02 9.00557552e-01 1.27567576e-01
4.87011955e-01 7.95321982e-01 8.75783107e-01 7.86754692e-01
4.72103923e-01 4.12549331e-02 7.23862188e-01 9.62695811e-01
2.75889744e-01 3.69689971e-01 4.40711814e-01 5.83643490e-01
6.05827992e-01 5.35109892e-01 9.31348362e-01 2.34336609e-01
3.26416601e-01 7.82638465e-01 1.31707242e-01 3.40025110e-01
7.00959408e-01 9.53288618e-01 1.31739658e-01 7.74351657e-01
1.80363329e-01 3.03144150e-01 7.22542829e-01 3.81120000e-01
9.23990192e-01 2.67259864e-01 2.01471164e-01 4.54141659e-01
1.16327406e-01 4.98868265e-01 2.92459816e-01 8.26472744e-01
9.05264751e-01 5.72303087e-02 3.34269615e-01 6.84087723e-01
7.28876192e-01 3.78973885e-01 4.67632404e-01 1.43447524e-01
6.02015183e-01 3.61413115e-01 2.29436654e-01 7.56643310e-01
6.35197942e-01 5.55146634e-01 5.67710621e-01 3.65500906e-01
3.50763718e-01 9.48872461e-01 6.53368637e-01 9.42869809e-01
2.88583267e-01 1.02111926e-01 3.29204047e-01 2.45402087e-01
6.27130596e-01 8.75845352e-03 9.43420359e-01 2.55191936e-01
3.02016992e-01 4.12391240e-01 1.21790788e-01 1.65220226e-01
1.46263118e-01 5.50540277e-02 5.43061946e-01 5.96664547e-01
2.58801952e-01 4.44868485e-01 1.80778923e-02 3.30044341e-01
1.99372440e-01 5.42501352e-01 2.71107367e-01 1.86504870e-01
9.01396530e-01 8.67933102e-01 2.48562939e-01 9.70839456e-01
5.72806642e-01 3.62925273e-01 4.00893848e-01 1.03901531e-01
3.43146314e-01 1.18861634e-01 3.32104763e-01 1.24642860e-01
5.17685718e-01 6.14961120e-02 1.02144301e-01 3.40718178e-02
2.33521089e-02 3.92692293e-01 8.13420727e-01 8.88860838e-01
9.28088302e-01 5.38939260e-01 9.38706784e-02 4.47374893e-01
3.46534594e-01 6.05000754e-01 9.42292093e-01 1.07752722e-01
2.92533776e-01 7.50420836e-01 4.84228283e-01 3.47243673e-01
4.09030724e-01 1.34090209e-01 4.16469049e-01 6.08084306e-01
7.76602688e-01 3.12599166e-01 1.84550347e-01 6.17776049e-01
2.25780245e-01 9.87713221e-01 5.95941612e-02 7.30274870e-01
4.65258660e-01 7.33632137e-01 9.95277963e-02 5.48500717e-01
4.45337477e-01 3.25316843e-01 2.44867986e-01 8.53605188e-01
4.35034701e-01 8.80302168e-01 2.65683233e-01 3.67204117e-01
2.15067288e-01 8.56457885e-01 3.41523303e-01 9.71970155e-02
3.20448586e-01 3.45299985e-01 4.73079465e-01 5.22097467e-01
3.53561702e-01 8.11282724e-01 9.34096113e-01 5.15855662e-01
7.40479401e-01 4.14715766e-02 8.63236841e-01 2.37266715e-01
6.82094620e-02 1.17342200e-02 2.45898933e-01 2.75758407e-01
8.99066701e-01 5.06750721e-01 9.81319055e-01 9.48752875e-01
2.37200642e-02 1.75577649e-01 6.83192774e-01 9.94788589e-01
8.04657373e-01 2.76329113e-01 1.09386320e-01 5.97318839e-01
7.86233020e-01 9.07675322e-01 2.24524230e-02 7.87465364e-01
2.33859028e-01 6.63850356e-01 8.83819991e-01 5.60531849e-02
9.07478823e-01 8.39503811e-01 5.65017166e-01 1.20677130e-01
9.64570026e-01 2.61050357e-01 4.16735270e-01 4.83574271e-01
9.73836770e-01 3.14545088e-01 1.57581128e-01 2.86639296e-01
8.35486372e-01 1.26639139e-01 4.35322312e-01 2.24483073e-01
6.97528357e-01 9.97841450e-01 6.23134444e-01 1.83339747e-01
4.46200471e-01 7.61109143e-01 8.40990431e-01 3.18415768e-01
7.08558100e-01 3.10010137e-01 6.67209982e-01 9.93367981e-03
3.92459305e-01 8.85322276e-02 2.18540277e-01 8.36491260e-01
1.88072639e-01 2.17916419e-01 7.44095726e-02 6.56850385e-01
2.33447946e-02 8.79243602e-01 2.63328363e-01 8.86720877e-03
3.92206669e-01 7.05624422e-01 9.00836939e-01 4.38102600e-01
7.14304822e-01 6.59138559e-01 3.26777842e-01 4.99191774e-01
3.16507817e-01 4.95866911e-04 9.91978057e-01 6.76700091e-01
1.89512942e-01 4.93952256e-01 4.17602055e-01 2.15769670e-01
8.32281636e-01 6.99840701e-01 6.73033021e-02 9.43622094e-01
5.04491593e-01 6.72647586e-01 8.84666011e-01 2.88365148e-01
8.67846698e-01 9.03434404e-01 3.53173548e-01 4.45378010e-01
6.92038514e-01 3.41706515e-01 6.11067095e-01 4.84851945e-01
5.23330471e-01 6.99429008e-01 9.88320628e-01 8.09581589e-01
2.96906156e-01 6.00095109e-01 7.98710920e-01 4.67548550e-01
6.90539969e-03 7.67708231e-01 7.81894057e-01 4.43034799e-01
9.15470135e-01 1.25146351e-01 1.17995772e-01 2.49421860e-01
6.88215265e-01 4.85354875e-01 4.15550702e-01 5.09572952e-01
9.51782840e-03 7.73530078e-01 2.01471065e-01 9.68234822e-01
8.18285996e-01 5.14393612e-01 5.49242885e-02 4.73918961e-01
2.20646667e-01 4.21393032e-01 9.39975558e-01 9.25620877e-01
4.24034409e-01 7.60621243e-02 9.87547405e-01 1.25075932e-01
3.31798625e-01 3.66753133e-04 3.82161370e-01 2.80072112e-01
2.43313272e-01 7.36066262e-02 3.59893232e-01 1.95848275e-01
5.94990538e-03 2.73075941e-01 1.99056134e-02 8.95567948e-01
1.06019467e-01 7.11527879e-01 4.53989590e-01 2.60931715e-02
1.06490326e-01 8.31891911e-01 3.98072957e-01 9.28885675e-01
6.64317965e-01 3.71841659e-01 3.09103631e-01 8.67689362e-01
6.17893485e-01 8.33255092e-02 3.62875021e-01 6.95967948e-01
1.45530274e-01 2.71234029e-01 5.60197679e-02 3.88976388e-01
9.79223642e-01 7.27015176e-01 2.61346679e-01 9.75087657e-01
3.20428826e-01 4.98011651e-01 8.22524477e-02 6.80190825e-01
2.57644199e-01 5.84798363e-01 3.85016877e-01 4.69239103e-01
9.40329240e-01 9.84778128e-01 4.38662168e-01 1.10539383e-01
8.04587992e-01 2.84042259e-01 3.62072110e-01 5.56439396e-01
5.69972172e-01 3.41231362e-01 2.17868057e-01 2.47934739e-02
8.79636809e-01 4.15294011e-01 3.73684627e-01 7.37512347e-02
8.75045149e-01 6.98688754e-01 3.50107300e-01 9.09917366e-01
3.51303200e-01 6.80963907e-01 7.52899082e-01 8.88805242e-01
9.94620341e-02 4.77142306e-01 7.53373813e-01 2.92631977e-01
4.59002400e-01 4.63930880e-01 1.88157303e-01 6.71714785e-01
4.94533793e-01 6.13710773e-02 8.96211620e-03 2.26310269e-01
1.13569000e-01 5.94550743e-01 5.56937748e-01 2.99781088e-01
7.11286305e-02 5.19658076e-01 1.72832938e-01 9.52116486e-01
1.38050059e-01 3.04178890e-01 4.02724620e-01 1.87248348e-01
2.83402989e-01 6.47150517e-01 4.53341427e-01 7.10629570e-01
7.37916858e-01 6.83029474e-01 8.07269426e-01 9.46802242e-01
5.35443516e-01 2.62183484e-01 8.90975619e-01 9.07476844e-01
9.40530491e-01 2.06046161e-01 9.70273032e-01 2.09838431e-01
8.89378329e-01 6.84841010e-01 8.26584013e-02 2.46870604e-01
2.59182520e-01 8.20893527e-01 7.19470222e-01 3.22677974e-01
1.69547006e-01 8.49166635e-01 3.54386298e-01 9.44638224e-01
9.33974127e-01 1.44759402e-01 3.57191748e-01 5.05637740e-01
5.95404947e-01 1.95250135e-03 3.05866094e-01 6.29280270e-02
2.94933116e-01 7.01994912e-01 6.04086605e-01 4.52956826e-01
6.42724190e-01 7.81655011e-01 5.99806166e-02 8.36908692e-01
9.41844283e-01 9.57493244e-02 6.40628878e-01 8.85242011e-01
8.82956765e-01 9.70435020e-01 3.58574333e-01 8.94623455e-01
2.92449645e-01 9.59847955e-01 2.29424404e-01 2.19633471e-03
9.95806083e-01 5.94079277e-01 1.79095757e-01 8.20900535e-01
2.41841102e-01 6.49861416e-01 2.98995532e-01 3.60784705e-01
3.04009893e-01 2.75082981e-01 6.46633122e-01 1.79186104e-01
1.13631592e-02 8.15516565e-01 6.19450839e-01 1.60186220e-01
7.42691171e-02 3.45032304e-01 5.41795030e-01 8.61502835e-01
7.92649412e-01 5.92983524e-01 1.68717971e-01 9.09830544e-01
1.06931639e-01 7.88394010e-01 3.33891559e-01 6.23547763e-01
8.04810638e-01 2.08182287e-01 3.08031360e-01 1.17157353e-01
9.17948903e-01 8.14708575e-01 8.87971231e-01 8.08852272e-01
5.64905544e-01 8.39395465e-01 1.55018492e-01 6.49197356e-01
9.93063936e-01 1.67896546e-01 6.79862383e-01 2.05988491e-01
2.88149657e-01 1.10289454e-02 3.13203832e-01 9.65865222e-01
3.13222888e-01 4.39680103e-01 9.77762007e-01 8.67114078e-02
4.93073295e-01 1.43843146e-01 7.30431777e-01 1.07007073e-01
6.26894994e-01 8.31626889e-01 2.42903870e-02 8.54733271e-01
2.53516317e-01 1.85100696e-01 7.06732835e-01 7.11623974e-01
6.94295529e-01 2.29929211e-01 3.44808767e-01 9.04394888e-02
2.52763336e-01 4.50830590e-01 2.92582736e-01 8.16322463e-01
8.20057348e-01 2.47827021e-01 6.03962061e-01 5.88860497e-01
9.23626930e-01 3.03675659e-01 7.70195553e-01 9.52382694e-01
5.87771752e-01 4.45830391e-01 3.17540284e-01 8.17618862e-01
6.57363499e-01 3.76941622e-01 6.92177944e-01 5.83128559e-01
6.42367220e-01 1.55513001e-01 7.72786501e-01 7.63716798e-01
8.83272375e-03 6.23801379e-01 5.52654625e-01 3.57952987e-01
4.02204899e-01 7.66511782e-01 6.72682221e-01 8.62033293e-01
7.15310506e-01 8.43389616e-01 7.67355758e-01 6.53404109e-01
3.87868846e-01 2.15351941e-01 9.53490508e-01 3.68761602e-02
3.79766367e-01 4.25802720e-02 4.99554003e-02 8.56643128e-01
3.26906408e-02 6.16917436e-01 6.59591143e-01 2.09695339e-01
1.62572375e-01 9.84637164e-01 6.30263193e-01 7.77449355e-01
9.11768632e-01 1.71113498e-01 3.94650630e-01 2.93957288e-01
3.62288593e-01 3.24560791e-01 5.90521396e-01 5.73966763e-02
2.80079955e-02 4.00845327e-01 9.22543654e-01 3.17757967e-01
3.82762623e-01 9.20839232e-01 4.74123015e-01 5.13833613e-01
6.45393060e-01 7.76714490e-01 6.68496136e-01 4.84869499e-01
1.30379505e-01 1.91173874e-01 8.08701474e-01 6.40682129e-01
7.80757917e-01 5.96142168e-01 2.58388176e-01 9.45757936e-02
4.95689165e-01 2.15082294e-01 6.39015157e-01 6.01686522e-01
8.50104213e-01 9.29555398e-02 9.45731753e-01 3.28489809e-01
5.26011516e-01 1.93060499e-01 5.69546391e-02 2.18064648e-01
2.67528407e-01 4.02602600e-01 4.72662784e-01 8.99097119e-01
7.23696984e-01 5.82609352e-01 4.60430386e-01 5.50632449e-01
4.38724168e-01 9.35375856e-01 7.41312616e-01 3.80745402e-01
2.40863964e-01 8.98174902e-03 3.44177735e-01 6.46272702e-01
2.27382967e-01 5.26128248e-02 2.84300530e-01 9.44884255e-01
1.32043439e-01 2.00779828e-01 7.99096989e-01 9.47718562e-01
3.27149864e-01 8.36681604e-01 7.28265796e-02 3.46999649e-01
6.19070033e-01 1.48741480e-01 3.09740446e-01 4.80768252e-01
2.17944014e-01 5.12827226e-01 2.15273679e-01 8.11321005e-01
8.74031982e-01 8.63799928e-01 2.36078559e-02 7.68623579e-01
8.73959743e-01 4.99570576e-02 1.60581032e-02 4.10608748e-02
3.00530156e-01 5.28864050e-01 5.56934017e-01 8.62405608e-01
4.48981352e-01 8.87211110e-01 1.48907417e-01 7.55681182e-01
1.89683320e-02 6.97669379e-01 2.65395828e-01 4.60564843e-01
3.18674514e-01 6.67117605e-01 1.47980767e-01 1.10591477e-01
8.09264567e-02 8.40443863e-01 2.19411074e-01 9.59997512e-01
4.72439832e-01 7.70550924e-01 3.87723184e-01 1.99719341e-01
3.00091234e-01 2.02236977e-01 6.11071372e-02 5.53081668e-01
6.79093922e-01 3.21431704e-02 8.55364976e-03 2.36326867e-01
4.03029148e-01 4.44630077e-01 3.05952795e-01 2.69899810e-01
4.97986041e-01 5.36954609e-02 7.70928847e-01 3.52527433e-01
8.52969069e-02 6.32214722e-01 8.54925085e-01 4.47747266e-01
2.31844177e-01 9.84688351e-01 8.40070076e-01 2.25638949e-01
8.41461695e-01 4.69662339e-01 5.04825684e-01 9.16345396e-02
5.41119981e-01 9.96577679e-01 2.84880717e-01 1.44348558e-01
4.02154876e-01 7.10217902e-01 7.73588240e-01 9.53377569e-01
4.41341734e-01 8.57805147e-01 1.96091764e-01 6.18704712e-01
7.25061748e-01 3.09166031e-01 1.43673384e-01 3.69984206e-01
8.13351356e-01 7.23735346e-01 7.24626037e-01 6.41705624e-01
5.11309716e-01 5.90444995e-01 9.31786344e-01 2.05747293e-01
2.12369249e-01 2.27830428e-01 1.57289688e-01 1.68156914e-01
8.77642020e-01 1.49191401e-01 3.74615953e-01 8.69996542e-01
6.54420984e-01 5.02293036e-02 5.49872476e-01 4.18765621e-01
3.41992308e-01 9.05226968e-01 2.40399920e-01 7.36896329e-01
9.92505269e-01 8.26233839e-01 3.29926092e-01 3.37440474e-01
7.60388350e-01 7.35838500e-02 4.10332385e-01 9.50947840e-01
9.08694913e-01 8.70753001e-01 3.34520417e-01 9.50980806e-01
8.93628768e-01 6.42453835e-01 3.94584113e-01 5.26992579e-02
7.57528796e-01 9.28774759e-01 7.25146985e-01 8.43347762e-01
9.36566109e-01 4.48659672e-01 6.16688619e-01 5.38778189e-01
7.29019223e-02 7.06927607e-01 7.03058001e-01 4.99885325e-01
5.69486065e-01 3.11914610e-01 3.91999671e-01 5.87126537e-01
2.40213529e-01 8.77841852e-01 3.48928683e-01 3.76494832e-01
1.32915361e-01 6.12064815e-01 9.59053533e-01 1.07483156e-01
1.39627117e-01 4.77017592e-01 9.23433327e-01 3.36806873e-01
6.63306824e-01 7.39204115e-01 7.28668392e-01 4.62048809e-02
6.60116332e-01 9.18849738e-01 2.60827787e-01 8.01729210e-01
2.37942544e-01 3.44875297e-01 5.01165772e-01 2.65628862e-01
8.45583751e-01 1.43420744e-01 8.21609451e-01 8.20318804e-01
7.41231882e-01 8.73428471e-02 1.82147715e-01 4.94778620e-01
5.88704025e-01 5.81879742e-01 4.38662275e-01 8.41879103e-01
1.06354471e-03 1.97177264e-02 2.77828866e-01 1.41887381e-01
5.93861473e-01 4.93664509e-01 5.49168556e-01 3.66628921e-01
2.54599419e-01 6.55530563e-01 2.53261218e-01 4.32020983e-01
1.25495258e-01 7.09654045e-01 2.94048926e-01 3.12039528e-01
7.11793748e-01 1.27671608e-01 7.09159784e-01 4.21990192e-01
1.90220147e-01 8.75004491e-01 4.19785850e-02 6.69354348e-01
6.17676786e-01 2.38516230e-02 7.57968595e-02 5.37558390e-01
6.51581072e-02 2.12441742e-01 2.51103188e-01 3.91662327e-01
5.24911159e-01 2.45255184e-02 7.45611329e-01 5.64560929e-01
3.66286168e-01 6.67413254e-01 5.12839659e-01 7.01317388e-01
1.93647031e-01 5.56252811e-01 1.30863442e-01 5.81829172e-01
3.79706640e-01 9.90436934e-01 5.34618234e-01 2.15715510e-01
9.27181404e-01 8.33637978e-01 9.57815331e-02 7.70590412e-01
6.56833876e-01 1.46120847e-01 4.20660625e-03 4.84929188e-01
4.91781402e-01 4.30509047e-01 1.57255799e-01 9.32645434e-01
7.46849751e-01 8.07727497e-02 5.48875731e-01 8.80498329e-01
8.51027232e-01 5.36880873e-01 2.62545694e-01 1.22278138e-01
4.33167037e-01 3.70071154e-01 8.98491609e-01 9.70693002e-01
1.03419712e-01 6.31141791e-01 3.08104718e-01 1.79592826e-02
3.97669652e-01 8.03186954e-02 9.67343561e-01 5.03902513e-01
6.03745528e-01 4.54147659e-01 3.91223797e-01 8.38283952e-01]
###Markdown
50. How to find the closest value (to a given scalar) in a vector? (★★☆)
###Code
Z = np.random.random((100, 100))
v = 0.61
delta = np.unravel_index(np.argmin(np.abs(Z - v)), Z.shape)
Z[delta]
###Output
_____no_output_____
###Markdown
51. Create a structured array representing a position (x,y) and a color (r,g,b) (★★☆)
###Code
dt_p = np.dtype([("x", "<f4"), ("y", "<f4")])
dt_c = np.dtype([("r", "u1"), ("g", "u1"), ("b", "u1")])
dt = np.dtype([("position", dt_p), ("color", dt_c)])
###Output
_____no_output_____
###Markdown
52. Consider a random vector with shape (100,2) representing coordinates, find point by point distances (★★☆) 53. How to convert a float (32 bits) array into an integer (32 bits) in place? 54. How to read the following file? (★★☆)
```
1, 2, 3, 4, 5
6,  ,  , 7, 8
 ,  , 9,10,11
```
55. What is the equivalent of enumerate for numpy arrays? (★★☆) 56. Generate a generic 2D Gaussian-like array (★★☆) 57. How to randomly place p elements in a 2D array? (★★☆) 58. Subtract the mean of each row of a matrix (★★☆)
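Possible sketches for 52, 53 and 56 (minimal, with illustrative sizes): 52 builds the full distance matrix by broadcasting, 53 writes through an int32 view of the same buffer, and 56 evaluates a Gaussian on a meshgrid.
###Code
# 52. pairwise distances for a (100,2) coordinate array
Z = np.random.random((100, 2))
D = np.sqrt(((Z[:, None, :] - Z[None, :, :]) ** 2).sum(-1))
print(D.shape)
# 53. float32 -> int32 in place: assign through an int32 view of the same memory
A = np.arange(10, dtype=np.float32)
A.view(np.int32)[:] = A
print(A.view(np.int32))
# 56. generic 2D Gaussian-like array centred on the grid
X, Y = np.meshgrid(np.linspace(-1, 1, 10), np.linspace(-1, 1, 10))
G = np.exp(-((X**2 + Y**2) / (2.0 * 0.5**2)))
print(G.shape)
###Output
_____no_output_____
###Markdown
58 is answered below.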
###Code
Z = np.random.random((10, 10))
Z -= Z.mean(axis=1, keepdims=True)   # keepdims makes the row means broadcast row-wise
Z
###Output
_____no_output_____
###Markdown
59. How to sort an array by the nth column? (★★☆) 60. How to tell if a given 2D array has null columns? (★★☆)
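A sketch for 59: argsort the chosen column and index the rows with it (n = 1 here is illustrative).
###Code
Z = np.random.randint(0, 10, (3, 3))
n = 1
print(Z[Z[:, n].argsort()])   # rows reordered by column n
###Output
_____no_output_____
###Markdown
60 is answered below.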
###Code
Z = np.random.random((10, 10))
(~Z.any(axis=0)).any()   # True if some column is entirely zero
###Output
_____no_output_____
###Markdown
61. Find the nearest value from a given value in an array (★★☆) 62. Considering two arrays with shape (1,3) and (3,1), how to compute their sum using an iterator? (★★☆) 63. Create an array class that has a name attribute (★★☆) 64. Consider a given vector, how to add 1 to each element indexed by a second vector (be careful with repeated indices)? (★★★) 65. How to accumulate elements of a vector (X) to an array (F) based on an index list (I)? (★★★) 66. Considering a (w,h,3) image of (dtype=ubyte), compute the number of unique colors (★★★) 67. Considering a four dimensions array, how to get sum over the last two axis at once? (★★★)
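Sketches for 61 and 65 (illustrative values): 61 takes the argmin of the absolute difference; 65 accumulates with `np.bincount`'s weights, which, unlike a fancy-index `+=`, handles repeated indices.
###Code
# 61. nearest value to v in Z
Z = np.random.uniform(0, 1, 10)
v = 0.5
print(Z.flat[np.abs(Z - v).argmin()])
# 65. accumulate X into F according to the index list I
X = np.array([1, 2, 3, 4, 5, 6])
I = np.array([1, 3, 9, 3, 4, 1])
F = np.bincount(I, weights=X, minlength=10)
print(F)
###Output
_____no_output_____
###Markdown
67 is answered below with `np.add.reduce` over the last two axes.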
###Code
Z = np.random.random((4,3,2,5))
np.add.reduce(Z, axis=(-2, -1)).shape
###Output
_____no_output_____
###Markdown
68. Considering a one-dimensional vector D, how to compute means of subsets of D using a vector S of same size describing subset indices? (★★★)
###Code
D = np.random.random(300)
S = np.random.randint(0, 10, 300)
# per-subset sums divided by per-subset counts
D_means = np.bincount(S, weights=D) / np.bincount(S)
D_means
###Output
_____no_output_____
###Markdown
100 numpy exercisesThis is a collection of exercises that have been collected in the numpy mailing list, on stack overflow and in the numpy documentation. The goal of this collection is to offer a quick reference for both old and new users but also to provide a set of exercises for those who teach.If you find an error or think you've a better way to solve some of them, feel free to open an issue at 1. Import the numpy package under the name `np` (★☆☆)
###Code
import numpy as np
###Output
_____no_output_____
###Markdown
2. Print the numpy version and the configuration (★☆☆) 3. Create a null vector of size 10 (★☆☆)
###Code
np.zeros(10)
###Output
_____no_output_____
###Markdown
4. How to find the memory size of any array (★☆☆)
###Code
np.zeros(10).nbytes
###Output
_____no_output_____
###Markdown
5. How to get the documentation of the numpy add function from the command line? (★☆☆) 6. Create a null vector of size 10 but the fifth value which is 1 (★☆☆)
###Code
z=np.zeros(10)
z[4]=1
z
###Output
_____no_output_____
###Markdown
7. Create a vector with values ranging from 10 to 49 (★☆☆)
###Code
np.arange(10,50)
###Output
_____no_output_____
###Markdown
8. Reverse a vector (first element becomes last) (★☆☆)
###Code
z=np.arange(10,20)
z[::-1]
###Output
_____no_output_____
###Markdown
9. Create a 3x3 matrix with values ranging from 0 to 8 (★☆☆)
###Code
np.arange(9).reshape(3,3)
###Output
_____no_output_____
###Markdown
10. Find indices of non-zero elements from \[1,2,0,0,4,0\] (★☆☆)
###Code
z=np.array([1,2,0,0,4,0] )
indices=np.where(z!=0)   # indices of the non-zero elements
indices
###Output
_____no_output_____
###Markdown
11. Create a 3x3 identity matrix (★☆☆)
###Code
np.identity(3)
np.diag([1,1,1])
###Output
_____no_output_____
###Markdown
12. Create a 3x3x3 array with random values (★☆☆)
###Code
np.random.rand(3,3,3)
###Output
_____no_output_____
###Markdown
13. Create a 10x10 array with random values and find the minimum and maximum values (★☆☆)
###Code
z=np.random.rand(10,10)
print(z)
print(z.max(),' ',z.min())
print(z[1].max())
###Output
[[0.3712614 0.95575763 0.14819107 0.80108973 0.84569513 0.97937431
0.20215369 0.32437204 0.24303929 0.93958533]
[0.41620807 0.64365167 0.4745426 0.36965147 0.17566429 0.51205804
0.0689625 0.00563769 0.50399285 0.06000029]
[0.83960913 0.66418128 0.81703298 0.51205313 0.0044982 0.47765778
0.4818053 0.40927636 0.8423076 0.27300263]
[0.05629205 0.62722912 0.32032474 0.28343814 0.92324815 0.60811748
0.34493346 0.61469725 0.35997121 0.17645424]
[0.97370631 0.47480215 0.21391344 0.24770859 0.0925012 0.92787275
0.66451203 0.0277279 0.07429704 0.64728567]
[0.56666416 0.25349763 0.13890828 0.05992409 0.12811743 0.54884214
0.18408104 0.22798072 0.01613166 0.21992531]
[0.61128885 0.29722141 0.67929737 0.08907281 0.39758464 0.68306982
0.01569296 0.82110325 0.23332777 0.22853228]
[0.83128901 0.43259131 0.91195003 0.57236809 0.51831628 0.62866361
0.11977837 0.01106123 0.38523137 0.71357097]
[0.55594464 0.20224167 0.17916492 0.56852117 0.56548548 0.56195239
0.3333777 0.37051841 0.13042125 0.85249613]
[0.5780547 0.83319234 0.43591852 0.4606673 0.83839561 0.93615025
0.17969131 0.15870748 0.90415246 0.44631917]]
0.9793743135139271 0.004498203083211583
0.6436516688904135
###Markdown
14. Create a random vector of size 30 and find the mean value (★☆☆)
###Code
z=np.random.rand(30)
print(z)
z.mean()
###Output
[0.92844801 0.27958059 0.12411883 0.68815552 0.90129838 0.70983942
0.01780177 0.79730927 0.42142924 0.74747304 0.68599151 0.40431038
0.2876902 0.21809605 0.1089472 0.14849823 0.08787999 0.78273632
0.42492549 0.21615617 0.47416501 0.75139107 0.2413429 0.16421622
0.69034998 0.73454215 0.13975545 0.6540834 0.30584496 0.35218465]
###Markdown
15. Create a 2d array with 1 on the border and 0 inside (★☆☆)
###Code
z=np.ones(25).reshape(5,5)
z[1:-1,1:-1]=0
z
###Output
_____no_output_____
###Markdown
16. How to add a border (filled with 0's) around an existing array? (★☆☆)
###Code
z=np.ones((5,5))
np.pad(z,pad_width=1,mode='constant',constant_values=0)
###Output
_____no_output_____
###Markdown
17. What is the result of the following expression? (★☆☆)
```python
0 * np.nan
np.nan == np.nan
np.inf > np.nan
np.nan - np.nan
np.nan in set([np.nan])
0.3 == 3 * 0.1
```
###Code
print(0*np.nan)
print(np.nan==np.nan)
print(np.inf>np.nan)
print(np.nan-np.nan)
print(np.nan in set([np.nan]))
print(0.3 == 3*0.1)
print(np.isnan(np.nan))
print(np.isclose(0.3,3*0.1))
###Output
nan
False
False
nan
True
False
True
True
###Markdown
18. Create a 5x5 matrix with values 1,2,3,4 just below the diagonal (★☆☆)
###Code
z=np.diag(1+np.arange(4),k=-1)
z
z=np.zeros((5,5))
i=np.arange(4)
z[i+1,i]=i+1
z
###Output
_____no_output_____
###Markdown
19. Create a 8x8 matrix and fill it with a checkerboard pattern (★☆☆)
###Code
z=np.zeros((8,8))
z[::2,1::2]=1
z[1::2,::2]=1
z
###Output
_____no_output_____
###Markdown
20. Consider a (6,7,8) shape array, what is the index (x,y,z) of the 100th element?
###Code
np.arange(6*7*8).reshape(6,7,8).flatten()[99]
np.unravel_index(99,(6,7,8))   # the 100th element has 0-based flat index 99
###Output
_____no_output_____
###Markdown
21. Create a checkerboard 8x8 matrix using the tile function (★☆☆)
###Code
np.tile([[1,0],[0,1]],(4,4))
###Output
_____no_output_____
###Markdown
22. Normalize a 5x5 random matrix (★☆☆)
###Code
z=np.random.rand(5,5)
z=(z-np.mean(z))/np.std(z)
z
###Output
_____no_output_____
###Markdown
23. Create a custom dtype that describes a color as four unsigned bytes (RGBA) (★☆☆)
###Code
color=np.dtype([('r',np.ubyte),
                ('g',np.ubyte),
                ('b',np.ubyte),
                ('a',np.ubyte)])
black=np.array((0,0,0,0),dtype=color)
###Output
_____no_output_____
###Markdown
24. Multiply a 5x3 matrix by a 3x2 matrix (real matrix product) (★☆☆)
###Code
a=np.random.rand(5,3)
b=np.random.rand(3,2)
np.dot(a,b)
###Output
_____no_output_____
###Markdown
25. Given a 1D array, negate all elements which are between 3 and 8, in place. (★☆☆)
###Code
a=np.arange(1,15)
a[np.where((a>=3) & (a<=8))] *= -1
a
a=np.arange(1,15)
a[(a>=3)&(a<=8)] *=-1
a
###Output
_____no_output_____
###Markdown
26. What is the output of the following script? (★☆☆)
```python
# Author: Jake VanderPlas
print(sum(range(5),-1))
from numpy import *
print(sum(range(5),-1))
```
###Code
print(sum(range(5),-1))
print(np.sum(range(5),-1))
###Output
9
10
###Markdown
100 numpy exercisesThis is a collection of exercises that have been collected in the numpy mailing list, on stack overflow and in the numpy documentation. The goal of this collection is to offer a quick reference for both old and new users but also to provide a set of exercises for those who teach.If you find an error or think you've a better way to solve some of them, feel free to open an issue at 1. Import the numpy package under the name `np` (★☆☆)
###Code
import numpy as np
###Output
_____no_output_____
###Markdown
2. Print the numpy version and the configuration (★☆☆)
###Code
np.version.version
###Output
_____no_output_____
###Markdown
3. Create a null vector of size 10 (★☆☆)
###Code
v = np.zeros(10)
v
###Output
_____no_output_____
###Markdown
4. How to find the memory size of any array (★☆☆)
###Code
s = v.nbytes   # bytes, i.e. v.size * v.itemsize
s
###Output
_____no_output_____
###Markdown
5. How to get the documentation of the numpy add function from the command line? (★☆☆)
###Code
# shell escape from the notebook; from a plain command line:
#   python -c "import numpy; numpy.info(numpy.add)"
!python -c "import numpy; numpy.info(numpy.add)"
###Output
_____no_output_____
###Markdown
6. Create a null vector of size 10 but the fifth value which is 1 (★☆☆)
###Code
nulo = np.zeros(10)
nulo[4] = 1
nulo
###Output
_____no_output_____
###Markdown
7. Create a vector with values ranging from 10 to 49 (★☆☆)
###Code
seven = np.array(range(10, 50))
seven
###Output
_____no_output_____
###Markdown
8. Reverse a vector (first element becomes last) (★☆☆)
###Code
seven[::-1]
###Output
_____no_output_____
###Markdown
9. Create a 3x3 matrix with values ranging from 0 to 8 (★☆☆)
###Code
nine = np.arange(0,9).reshape(3,3)
nine
###Output
_____no_output_____
###Markdown
10. Find indices of non-zero elements from \[1,2,0,0,4,0\] (★☆☆)
###Code
nz = np.nonzero([1,2,0,0,4,0])
nz
###Output
_____no_output_____
###Markdown
11. Create a 3x3 identity matrix (★☆☆)
###Code
m = np.identity(3)
m
###Output
_____no_output_____
###Markdown
12. Create a 3x3x3 array with random values (★☆☆)
###Code
m = np.random.rand(3,3,3)
m
###Output
_____no_output_____
###Markdown
Import Dependencies
###Code
# Initial imports
import pandas as pd
from pathlib import Path
from sklearn.preprocessing import StandardScaler, MinMaxScaler
from sklearn.manifold import TSNE
from sklearn.decomposition import PCA
from sklearn.cluster import KMeans
import matplotlib.pyplot as plt
train_df = pd.read_csv('crypto_data.csv', index_col= 0)
train_df
###Output
_____no_output_____
###Markdown
Preprocess Data
###Code
# Discard all cryptocurrencies that are not being traded. In other words, filter for
#currencies that are currently being traded
train_df = train_df.loc[(train_df["IsTrading"] == True)]
train_df
#drop the IsTrading column from the dataframe.
train_df = train_df.drop(["IsTrading"], axis='columns')
train_df.head(10)
train_df.shape
# Find null values
for column in train_df.columns:
print(f"Column {column} has {train_df[column].isnull().sum()} null values")
# Remove all rows that have at least one null value.
train_df = train_df.dropna(axis=0, how="any")
train_df.shape
train_df
# Filter for cryptocurrencies that have been mined. Mined should be greater than zero.
train_df = train_df.loc[train_df["TotalCoinsMined"] > 0]
train_df.shape
# Delete the CoinName from the original dataframe.
train_df = train_df.drop(["CoinName"], axis='columns')
train_df.shape
train_df.head(10)
train_df['Algorithm'].unique()
train_df['ProofType'].unique()
train_df['TotalCoinsMined'].unique()
train_df['TotalCoinSupply'].unique()
train_df.shape
train_df.head
# convert the remaining features with text values, Algorithm and ProofType, into numerical data.
#To accomplish this task, use Pandas to create dummy variables.
X = pd.get_dummies(data=train_df, columns=['Algorithm', 'ProofType'])
print(X.shape)
X.head()
#Standardize your dataset
scaler = StandardScaler()
X_scaled = scaler.fit_transform(X)
X_scaled[0]
X_scaled.shape
#Perform dimensionality reduction with PCA. preserve 90% of the explained variance in dimensionality reduction
# How did the number of the features change?
# Initialize PCA model
pca = PCA(n_components=.90)
# Get principal components for the data.
crypto_pca = pca.fit_transform(X_scaled)
# Fetch the fraction of explained variance retained (should be >= 0.90)
pca.explained_variance_ratio_.sum()
crypto_pca.shape
# reduce the dataset dimensions with t-SNE and visually inspect the results
# run t-SNE on the principal components: the output of the PCA transformation
tsne = TSNE(perplexity=50)
tsne_features = tsne.fit_transform(crypto_pca)
tsne_features.shape
# create a scatter plot of the t-SNE output. Observe whether there are distinct clusters or not.
x= tsne_features[:,0]
y= tsne_features[:,1]
plt.scatter(x,y)
plt.xlim([-70, 20])
plt.ylim([-20, 50])
plt.show()
# Create an elbow plot to identify the best number of clusters.
# Use a for-loop to determine the inertia for each k between 1 through 10.
inertia = []
k = list(range(1, 11))
# Calculate the inertia for the range of k values
for i in k:
km = KMeans(n_clusters=i, random_state=0)
km.fit(crypto_pca)
inertia.append(km.inertia_)
# Create the Elbow Curve
elbow_df = pd.DataFrame({"k": k, "inertia": inertia})
elbow_df.plot.line(x="k", y="inertia")
#if possible, where the elbow of the plot is, and at which value of k it appears.
plt.xlabel('Number of clusters')
plt.ylabel('Inertia')
plt.title('Elbow curve for crypto data')
plt.show()
###Output
_____no_output_____
###Markdown
Imports
###Code
import numpy as np
import pandas as pd
import datetime as dt
import os #scanning folders
###Output
_____no_output_____
###Markdown
Load Bidding history
Define data load function
###Code
def load_bidding_history(directory):
bidding_history = pd.DataFrame(columns = ['Itemnumber','Title','Ending Time', 'Timestamp', 'Bidder', 'feedback_score', 'Bid Amount'])
for file in os.scandir(directory):
bidding_history = bidding_history.append(pd.read_csv(file, usecols=['Itemnumber','Title','Ending Time', 'Timestamp', 'Bidder', 'feedback_score', 'Bid Amount'], parse_dates=['Ending Time', 'Timestamp']), ignore_index=True)
return bidding_history
###Output
_____no_output_____
###Markdown
Load data
###Code
df_bids_antiques = load_bidding_history('biddingdata/antiques')
###Output
_____no_output_____
###Markdown
Author's note: eBay only grants public access to auctions within the last several weeks. To increase the number of auctions in the computer category to a suitable scope, I conducted two runs, which have to be merged together at this point.
###Code
df_bids_computers = load_bidding_history('biddingdata/computers')
df_bids_computers = df_bids_computers.append(load_bidding_history('biddingdata/computers2'))
###Output
_____no_output_____
###Markdown
Collect Metadata Number of auctions
###Code
df_bids_antiques['Itemnumber'].nunique()
df_bids_computers['Itemnumber'].nunique()
###Output
_____no_output_____
###Markdown
Number of Biddings
###Code
len(df_bids_antiques)
len(df_bids_computers)
###Output
_____no_output_____
###Markdown
Optimize the data structure
Exclude items with less than 2 bidders
###Code
relevant_itemnumbers = df_bids_antiques.loc[:,['Itemnumber', 'Bidder']].groupby(by=["Itemnumber"]).nunique()
relevant_itemnumbers = relevant_itemnumbers.loc[relevant_itemnumbers['Bidder'] > 1]
relevant_itemnumbers = relevant_itemnumbers.index.tolist()
relevant_itemnumbers
df_bids_antiques = df_bids_antiques[df_bids_antiques['Itemnumber'].isin(relevant_itemnumbers)]
relevant_itemnumbers = df_bids_computers.loc[:,['Itemnumber', 'Bidder']].groupby(by=["Itemnumber"]).nunique()
relevant_itemnumbers = relevant_itemnumbers.loc[relevant_itemnumbers['Bidder'] > 1]
relevant_itemnumbers = relevant_itemnumbers.index.tolist()
relevant_itemnumbers
df_bids_computers = df_bids_computers[df_bids_computers['Itemnumber'].isin(relevant_itemnumbers)]
###Output
_____no_output_____
###Markdown
Update metadata after removing auctions with only 1 Bidding
###Code
df_bids_antiques['Itemnumber'].nunique()
df_bids_computers['Itemnumber'].nunique()
###Output
_____no_output_____
###Markdown
Remove timezones from data
###Code
# the second computers run was already merged into df_bids_computers above
df_bids_computers['Ending Time'] = df_bids_computers['Ending Time'].apply(lambda x: x.replace(tzinfo=None))
df_bids_computers['Timestamp'] = df_bids_computers['Timestamp'].apply(lambda x: x.replace(tzinfo=None))
###Output
_____no_output_____
###Markdown
Create new column containing the remaining time when the bid was submitted
###Code
df_bids_antiques['Time Left'] = df_bids_antiques['Ending Time'] - df_bids_antiques['Timestamp']
df_bids_computers['Time Left'] = df_bids_computers['Ending Time'] - df_bids_computers['Timestamp']
###Output
_____no_output_____
###Markdown
Remove columns not needed anymore
###Code
df_bids_antiques = df_bids_antiques.drop(columns=['Ending Time', 'Timestamp'])
df_bids_computers = df_bids_computers.drop(columns=['Ending Time', 'Timestamp'])
###Output
_____no_output_____
###Markdown
Pickle dataframes for further use
###Code
df_bids_antiques.to_pickle("processeddata/bids_antiques.pkl")
df_bids_computers.to_pickle("processeddata/bids_computers.pkl")
###Output
_____no_output_____
###Markdown
> **How to run this notebook (command-line)?**
1. Install the `ReinventCommunity` environment: `conda env create -f environment.yml`
2. Activate the environment: `conda activate ReinventCommunity`
3. Execute `jupyter`: `jupyter notebook`
4. Copy the link to a browser
`REINVENT 3.0`: Data Preparation demo
This demo illustrates how data from ChEMBL or other sources can be processed, analysed and filtered. To proceed, please update the following code block such that it reflects your system's installation and execute it.
Motivation
> **There are a number of reasons to pre-process the data used for training a generative model.**
1. Removal of invalid or duplicated entries.
2. Removal of unusual compounds that are clearly not drug-like (too big, reactive groups, etc.). There is normally no point training the model on such examples, since that bias will be reflected by the generative model.
3. Removal of rare tokens. There are rare compounds that can be seen as outliers. They in turn might contain rare tokens. Excluding them frees a slot in the vocabulary and makes it smaller. A smaller vocabulary means faster training and less memory. As a result, removing compounds that introduce rare tokens to the vocabulary speeds up the generative model.
###Code
!conda list   # shell escape; a bare `conda list` is not valid Python in a code cell
# load dependencies
import os
import re
import json
import tempfile
import pyspark
#import findspark
#findspark.init()
###### assign memory to pyspark
#from pyspark import SparkContext
#SparkContext.setSystemProperty('spark.executor.memory', '2g')
# --------- change these path variables as required
DBS_PATH = "./data/chembl.raw.smi"
# --------- to be honest, this isn't the exact raw version of ChEMBL
# it has been already put through some filtering
# we should provide the raw version here
# so that the actual filtering can be illustrated in the plots below
output_dir = os.path.expanduser("~/Desktop/Data_Preparation")
parquet_file = f'{output_dir}/chembl.parquet'
# --------- do not change
# get the notebook's root path
try: ipynb_path
except NameError: ipynb_path = os.getcwd()
# if required, generate a folder to store the results
try:
os.mkdir(output_dir)
except FileExistsError:
pass
###Output
_____no_output_____
###Markdown
We provide a smaller dataset as an alternative for testing purposes. One can use the cell below just to play with the code. If you intend to process the full dataset, don't execute this cell.
###Code
# DBS_PATH = "./data/chembl.mini.smi"
# parquet_file = f'{output_dir}/chembl.mini.parquet'
%matplotlib inline
import pyspark.sql as ps
import pyspark.sql.functions as psf
import pyspark.sql.types as pst
import rdkit.Chem as rkc
import rdkit.Chem.AllChem as rkac
import molvs as mv
import matplotlib.pyplot as plt
import seaborn as sns
import pandas as pd
%run code/data_preparation.py
pd.set_option('display.max_rows', 500)
pd.set_option('display.max_columns', 500)
pd.set_option('display.width', 1000)
pd.set_option('display.max_colwidth', 200)
sns.set(style="ticks")
SPARK, SC = SparkSessionSingleton.get("clean_db")
def to_mol(smi):
"""
Creates a Mol object from a SMILES string.
:param smi: SMILES string.
:return: A Mol object or None if it's not valid.
"""
if smi:
return rkc.MolFromSmiles(smi)
def to_smiles(mol):
"""
Converts a Mol object into a canonical SMILES string.
:param mol: Mol object.
:return: A SMILES string.
"""
if mol is None:
return None
return rkc.MolToSmiles(mol, isomericSmiles=False)
# standardize molecule
STANDARDIZER = mv.Standardizer()
ACCEPTED_ATOMS = [6,7,8,9,16,17,35]
def _run_reaction(mol, rxn):
while True:
results = rxn.RunReactants([mol], maxProducts=1)
if not results:
return mol
else:
mol = results[0][0]
REACTIONS = [
"[S+:1](=[N:3])[OH:2]>>[S+0:1](=[N:3])=[O:2]",
"[n+:1][OH:2]>>[n+:1][O-]",
"[N:1](=[O:2])=[O:3]>>[N+:1]([O-:2])=[O:3]",
"[S+:1]([O:2])[N:3]>>[S+0:1](=[O:2])[N:3]"
]
REACTIONS = [rkac.ReactionFromSmarts(rxn) for rxn in REACTIONS]
def standardize_mol(mol, standardize=True, min_size=0, max_size=1000):
try:
if standardize:
for rxn in REACTIONS:
mol = _run_reaction(mol, rxn)
mol = STANDARDIZER.charge_parent(mol, skip_standardize=True)
mol = STANDARDIZER.isotope_parent(mol, skip_standardize=True)
mol = STANDARDIZER.stereo_parent(mol, skip_standardize=True)
mol = STANDARDIZER.standardize(mol)
if any([atom.GetAtomicNum() not in ACCEPTED_ATOMS for atom in mol.GetAtoms()]):
return None
return mol
except:
return None
TOKENIZER = SMILESTokenizer()
tokenize_udf = psf.udf(lambda smi: TOKENIZER.tokenize(smi, with_begin_and_end=False), pst.ArrayType(pst.StringType()))
def _num_rings(smi):
mol = to_mol(smi)
if mol:
return rkc.GetSSSR(mol)
return None
num_rings_udf = psf.udf(_num_rings, pst.IntegerType())
def _size_largest_ring(smi):
mol = to_mol(smi)
if mol:
ring_info = mol.GetRingInfo()
return max([0] + [len(ring) for ring in ring_info.AtomRings()])
return None
size_largest_ring_udf = psf.udf(_size_largest_ring, pst.IntegerType())
num_atoms_udf = psf.udf(lambda smi: to_mol(smi).GetNumHeavyAtoms(), pst.IntegerType())
num_c_atoms_udf = psf.udf(lambda smi: len([atom for atom in to_mol(smi).GetAtoms() if atom.GetAtomicNum() == 6]), pst.IntegerType())
SMARTS_CHAINS = [rkc.MolFromSmarts("-".join(["[CR0H2]"]*i)) for i in range(1, 11)]
def _longest_aliphatic_c_chain(smi):
mol = to_mol(smi)
curr_chain = 0
for chain in SMARTS_CHAINS:
if mol.HasSubstructMatch(chain):
curr_chain += 1
else:
break
return curr_chain
longest_aliphatic_c_chain = psf.udf(_longest_aliphatic_c_chain, pst.IntegerType())
###Output
_____no_output_____
###Markdown
ChEMBL
Remove Invalid SMILES
###Code
def _process_rows(row):
fields = row.split(" ")
mol = to_mol(fields[0])
standardized_smiles = None
if mol:
standardized_mol = standardize_mol(mol)
standardized_smiles = to_smiles(standardized_mol)
return ps.Row(original_smiles=fields[0], smiles=standardized_smiles)
chembl_df = SPARK.createDataFrame(SC.textFile(DBS_PATH).repartition(5000).map(_process_rows)).distinct().where("smiles is not null")
chembl_df.count()
###Output
_____no_output_____
###Markdown
Write down to a parquet file as a checkpoint. You can do that at multiple instances where the processing steps take a while, so that next time you can resume from this checkpoint.
###Code
chembl_df.write.parquet(parquet_file)
###Output
_____no_output_____
###Markdown
Load from the checkpoint
###Code
chembl_df = SPARK.read.parquet(parquet_file)
###Output
_____no_output_____
###Markdown
Calculate various metrics for each SMILES entry
###Code
chembl_annotated_df = chembl_df\
.withColumn("num_atoms", num_atoms_udf("smiles"))\
.withColumn("c_atom_ratio", num_c_atoms_udf("smiles") / psf.col("num_atoms"))\
.withColumn("tokens", tokenize_udf("smiles"))\
.withColumn("num_rings", num_rings_udf("smiles"))\
.withColumn("size_largest_ring", size_largest_ring_udf("smiles"))\
.withColumn("num_tokens", psf.size("tokens"))\
.withColumn("tokens_atom_ratio", psf.col("num_tokens")/psf.col("num_atoms"))\
.withColumn("longest_aliph_c_chain", longest_aliphatic_c_chain("smiles"))\
.persist()
###Output
_____no_output_____
###Markdown
Data purgingIn the section below we look at various calculated parameters and apply some arbitrary criteria to eliminate entries that don't meet them. Num atoms distribution
###Code
num_atoms_dist = chembl_annotated_df\
.groupBy("num_atoms")\
.agg(psf.count("num_atoms").alias("num"))\
.withColumn("percent", psf.lit(100.0)*psf.col("num")/chembl_annotated_df.count())\
.sort("num_atoms", ascending=False)\
.toPandas()
num_atoms_dist.plot(x="num_atoms", y="percent", xlim=(0, 100), lw=3)
num_atoms_dist
chembl_chemistry_filtered_df = chembl_annotated_df.where("num_atoms >= 6 and num_atoms <= 70")
chembl_chemistry_filtered_df.count()
###Output
_____no_output_____
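###Markdown
The same groupBy/percent pattern recurs for every attribute below; a reusable helper (a sketch, assuming the psf alias and DataFrames defined above) could capture it:
###Code
# hypothetical helper equivalent to the repeated distribution cells below
def column_distribution(df, col):
    total = df.count()
    return (df.groupBy(col)
              .agg(psf.count(col).alias("num"))
              .withColumn("percent", psf.lit(100.0) * psf.col("num") / total)
              .sort(col, ascending=False)
              .toPandas())
###Output
_____no_output_____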
###Markdown
Number of rings
###Code
num_rings_dist = chembl_chemistry_filtered_df\
.groupBy("num_rings")\
.agg(psf.count("num_atoms").alias("num"))\
.withColumn("percent", psf.lit(100.0)*psf.col("num")/chembl_chemistry_filtered_df.count())\
.sort("num_rings", ascending=False)\
.toPandas()
num_rings_dist.plot(x="num_rings", y="percent", lw=3, xticks=num_rings_dist["num_rings"])
num_rings_dist
chembl_chemistry_filtered_df = chembl_chemistry_filtered_df.where("num_rings <= 10")
chembl_chemistry_filtered_df.count()
###Output
_____no_output_____
###Markdown
Size of largest ring
###Code
size_largest_ring_dist = chembl_chemistry_filtered_df\
.groupBy("size_largest_ring")\
.agg(psf.count("size_largest_ring").alias("num"))\
.withColumn("percent", psf.lit(100.0)*psf.col("num")/chembl_chemistry_filtered_df.count())\
.sort("size_largest_ring", ascending=False)\
.toPandas()
size_largest_ring_dist.plot(x="size_largest_ring", y="percent", lw=3)
chembl_chemistry_filtered_df = chembl_chemistry_filtered_df.where("size_largest_ring < 9")
chembl_chemistry_filtered_df.count()
###Output
_____no_output_____
###Markdown
Long aliphatic C chains
###Code
longest_aliph_c_chain = chembl_chemistry_filtered_df\
.groupBy("longest_aliph_c_chain")\
.agg(psf.count("longest_aliph_c_chain").alias("num"))\
.withColumn("percent", psf.lit(100.0)*psf.col("num")/chembl_chemistry_filtered_df.count())\
.sort("longest_aliph_c_chain", ascending=False)\
.toPandas()
longest_aliph_c_chain.plot(x="longest_aliph_c_chain", y="percent", lw=3)
longest_aliph_c_chain
chembl_chemistry_filtered_df = chembl_chemistry_filtered_df.where("longest_aliph_c_chain < 5")
chembl_chemistry_filtered_df.count()
###Output
_____no_output_____
###Markdown
Heteroatom ratios
###Code
c_ratio_dist = chembl_chemistry_filtered_df.sample(False, 0.1).toPandas()
c_ratio_dist.hist(column="c_atom_ratio", bins=32)
chembl_chemistry_filtered_df = chembl_chemistry_filtered_df.where("c_atom_ratio >= 0.5")
chembl_chemistry_filtered_df.count()
###Output
_____no_output_____
###Markdown
Number of tokens
###Code
num_tokens_dist = chembl_chemistry_filtered_df\
.groupBy("num_tokens")\
.agg(psf.count("num_tokens").alias("num"))\
.withColumn("percent", psf.lit(100.0)*psf.col("num")/chembl_chemistry_filtered_df.count())\
.sort("num_tokens", ascending=False)\
.toPandas()
num_tokens_dist.plot(x="num_tokens", y="percent", lw=3)
num_tokens_dist
chembl_filtered_df = chembl_chemistry_filtered_df.where("num_tokens <= 91")
chembl_filtered_df.count()
###Output
_____no_output_____
###Markdown
Tokens/atom ratio
###Code
tokens_atom_ratio_dist = chembl_filtered_df.sample(False, 0.1).toPandas()
tokens_atom_ratio_dist.hist(column="tokens_atom_ratio", bins=32)
chembl_filtered_df = chembl_filtered_df.where("tokens_atom_ratio <= 2.0")
chembl_filtered_df.count()
###Output
_____no_output_____
###Markdown
Token/molecule distribution
###Code
token_dist = chembl_filtered_df\
.withColumn("unique_tokens", psf.array_distinct("tokens"))\
.select(psf.explode("unique_tokens").alias("token"))\
.groupBy("token")\
.agg(psf.count("token").alias("num"))\
.withColumn("percent", psf.lit(100.0)*psf.col("num")/chembl_filtered_df.count())\
.sort("percent", ascending=False)\
.toPandas()
token_dist
tokens_to_remove = token_dist[(token_dist["percent"] < 5E-2) & (token_dist["token"].str.startswith("[")) & ~(token_dist["token"].isin(["[S+]", "[s+]"]))]["token"]
query_tokens = psf.lit(False)
for token in tokens_to_remove:
query_tokens |= psf.array_contains("tokens", token)
chembl_filtered_df = chembl_filtered_df.where(~query_tokens).select("original_smiles", "smiles")
chembl_filtered_df.count()
###Output
_____no_output_____
###Markdown
Write the filtered dataset to diskWe finally write out all SMILES that meet the filtering criteria to a csv file and to a parquet.
###Code
filtered_parquet_file = f'{output_dir}/final.filtered.parquet'
filtered_csv_file = f'{output_dir}/final.filtered.csv'
chembl_filtered_df.write.parquet(filtered_parquet_file)
chembl_filtered_df.select("smiles").toPandas().to_csv(filtered_csv_file, index=False, header=False)
###Output
_____no_output_____
###Markdown
Scoring.csv
###Code
scoring = pd.read_csv(os.path.join("..", "data", "Scoring.csv"))
mem_mib(scoring)
scoring.shape
scoring.columns
def recent_nhl_only(df):
return df[(df["lgID"] == "NHL") & (df["year"] >= 1980)]
scoring = recent_nhl_only(scoring)
scoring.shape
scoring.columns
scoring = scoring.filter(regex="^(?!(Post|PP|SH)).*")  # negative lookahead: keep only columns that do not start with Post, PP or SH
scoring.columns
scoring = scoring.iloc[:, [0, 1, 3, 6, 7, 8, 9, 14]]
scoring.columns
make_categorical(scoring, "tmID")
scoring.head()
scoring.reset_index().head()
scoring = scoring.reset_index(drop=True)
# Alternatively:
scoring.reset_index(drop=True, inplace=True)
scoring.head()
scoring.to_pickle(os.path.join("..", "scoring.pickle"))
###Output
_____no_output_____
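###Markdown
The pickle can be reloaded later in one line (a sketch, assuming the same relative path and the pd/os imports above):
###Code
# hypothetical reload of the checkpointed frame
scoring = pd.read_pickle(os.path.join("..", "scoring.pickle"))
###Output
_____no_output_____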
###Markdown
Teams.csv
###Code
teams = pd.read_csv(os.path.join("..", "data", "Teams.csv"))
teams.shape
teams.columns
teams = recent_nhl_only(teams)
teams = teams[["year", "tmID", "name"]]
teams.head()
teams.nunique()
make_categorical(teams, "tmID")
teams.to_pickle(os.path.join("..", "teams.pickle"))
###Output
_____no_output_____
###Markdown
TeamSplits.csv
###Code
team_splits = pd.read_csv(os.path.join("..", "data", "TeamSplits.csv"))
team_splits.shape
team_splits.columns
team_splits = recent_nhl_only(team_splits)
cols_to_drop = team_splits.columns[3:11]
team_splits = team_splits.drop(columns=cols_to_drop)
team_splits.columns
# some_data_frame.drop(index=row_labels) <- to drop rows (there is no rows= keyword)
team_splits = team_splits.drop(columns="lgID")
make_categorical(team_splits, "tmID")
team_splits.to_pickle(os.path.join("..", "team_splits.pickle"))
###Output
_____no_output_____
###Markdown
Understanding the Problem -- Problem objective: -- 1.0. Predict the first destination a new user will choose. -- Why? -- What is Airbnb's business model? -- Marketplace ( connecting people who offer accommodation with people looking for accommodation ) -- Supply ( people offering accommodation ) -- Portfolio size. -- Portfolio diversity/density. -- Average price -- Demand ( people looking for accommodation ) -- Number of users -- LTV ( Lifetime Value ) -- CAC ( Client Acquisition Cost ) Gross Revenue = ( Fee*number of clients ) - CAC -- Proposed solution --- Prediction model for a new user's first destination. --- 1.0. Predictions saved to a database table. --- 2.0. API --- Input: a user and their attributes --- Output: the user and their attributes with the **predicted destination** --- 16 cycles 0.0. Imports
###Code
import random
import numpy as np
import pandas as pd
import seaborn as sns
from matplotlib import pyplot as plt
from sklearn import model_selection as ms
from sklearn import preprocessing as pp
from sklearn import metrics as m
from scikitplot import metrics as mt
from scipy import stats as ss
from imblearn import under_sampling as us
from imblearn import over_sampling as oversamp
from imblearn import combine as c
from category_encoders import TargetEncoder
from pandas_profiling import ProfileReport
from keras import models as ml
from keras import layers as l
###Output
_____no_output_____
###Markdown
0.1. Helper Functions
###Code
def cramer_v( x, y ):
cm = pd.crosstab( x, y ).values
n = cm.sum()
r, k = cm.shape
chi2 = ss.chi2_contingency( cm )[0]
chi2corr = max( 0, chi2 - (k-1)*(r-1)/(n-1) )
kcorr = k - (k-1)**2/(n-1)
rcorr = r - (r-1)**2/(n-1)
return np.sqrt( (chi2corr/n) / ( min( kcorr-1, rcorr-1 ) ) )
###Output
_____no_output_____
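###Markdown
The helper above computes the bias-corrected Cramér's V: with $\chi^2_{corr} = \max\left(0,\ \chi^2 - \frac{(k-1)(r-1)}{n-1}\right)$, $k_{corr} = k - \frac{(k-1)^2}{n-1}$ and $r_{corr} = r - \frac{(r-1)^2}{n-1}$, it returns $V = \sqrt{\frac{\chi^2_{corr}/n}{\min(k_{corr}-1,\ r_{corr}-1)}}$, a 0-1 measure of association between two categorical variables.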
###Markdown
0.2. Loading Data
###Code
df_raw = pd.read_csv( 'dataset/training_users.csv', low_memory=True )
df_raw.shape
df_sessions = pd.read_csv( 'dataset/sessions.csv', low_memory=True )
df_sessions.shape
###Output
_____no_output_____
###Markdown
1.0. Data Description
###Code
df1 = df_raw.copy()
###Output
_____no_output_____
###Markdown
1.1. Data Dimension
###Code
print( 'Number of rows: {}'.format( df1.shape[0] ) )
print( 'Number of columns: {}'.format( df1.shape[1] ) )
print( 'Number of rows: {}'.format( df_sessions.shape[0] ) )
print( 'Number of columns: {}'.format( df_sessions.shape[1] ) )
###Output
Number of rows: 10567737
Number of columns: 6
###Markdown
1.2. Data Type
###Code
df1.dtypes
df_sessions.dtypes
###Output
_____no_output_____
###Markdown
1.3. NA Check
###Code
df1.isna().sum() / len( df1 )
df_sessions.isna().sum() / len( df_sessions)
# option: remove rows with missing values completely
#df1 = df1.dropna()
# ========== User =================
# date_first_booking
date_first_booking_max = pd.to_datetime( df1['date_first_booking'] ).max().strftime( '%Y-%m-%d' )
df1['date_first_booking'] = df1['date_first_booking'].fillna( date_first_booking_max )
# age: fill missing values before the range filter (the filter drops NaN rows, so fillna must come first)
avg_age = df1['age'].mean().astype( int )
df1['age'] = df1['age'].fillna( avg_age )
df1 = df1[( df1['age'] > 15 ) & ( df1['age'] < 120 )]
# first_affiliate_tracked
df1 = df1[~df1['first_affiliate_tracked'].isna()]
# ========== Sessions =================
# user_id - 0.3%
df_sessions = df_sessions[~df_sessions['user_id'].isna()]
# action - 0.7%
df_sessions = df_sessions[~df_sessions['action'].isna()]
# action_type - 11%
df_sessions = df_sessions[~df_sessions['action_type'].isna()]
# action_detail - 11%
df_sessions = df_sessions[~df_sessions['action_detail'].isna()]
# secs_elapsed - 1.2%
df_sessions = df_sessions[~df_sessions['secs_elapsed'].isna()]
df1.isna().sum() / len( df1 )
df_sessions.isna().sum() / len( df_sessions)
###Output
_____no_output_____
###Markdown
1.4. Change Data Type
###Code
df1.dtypes
# date_account_created
df1['date_account_created'] = pd.to_datetime( df1['date_account_created'] )
# timestamp_first_active
df1['timestamp_first_active'] = pd.to_datetime( df1['timestamp_first_active'], format='%Y%m%d%H%M%S' )
# date_first_booking
df1['date_first_booking'] = pd.to_datetime( df1['date_first_booking'] )
# age
df1['age'] = df1['age'].astype( int )
###Output
_____no_output_____
###Markdown
1.5. Check Balanced Data
###Code
#df1['country_destination'].value_counts( normalize=True )
df1['country_destination'].value_counts()
###Output
_____no_output_____
###Markdown
1.6. Descriptive Analysis
###Code
# Users
num_attributes = df1.select_dtypes( include=['int64', 'float64'] )
cat_attributes = df1.select_dtypes( exclude=['int64', 'float64', 'datetime64[ns]'] )
time_attributes = df1.select_dtypes( include=['datetime64[ns]'] )
# Sessions
num_attributes_sessions = df_sessions.select_dtypes( include=['int64', 'float64'] )
cat_attributes_sessions = df_sessions.select_dtypes( exclude=['int64', 'float64', 'datetime64[ns]'] )
time_attributes_sessions = df_sessions.select_dtypes( include=['datetime64[ns]'] )
###Output
_____no_output_____
###Markdown
1.6.1. Numerical - Users
###Code
# Central Tendency - Mean, Median
ct1 = pd.DataFrame( num_attributes.apply( np.mean ) ).T
ct2 = pd.DataFrame( num_attributes.apply( np.median ) ).T
# Dispersions - Std, Min, Max, Range, Skew, Kurtosis
d1 = pd.DataFrame( num_attributes.apply( np.std ) ).T
d2 = pd.DataFrame( num_attributes.apply( min ) ).T
d3 = pd.DataFrame( num_attributes.apply( max ) ).T
d4 = pd.DataFrame( num_attributes.apply( lambda x: x.max() - x.min() ) ).T
d5 = pd.DataFrame( num_attributes.apply( lambda x: x.skew() ) ).T
d6 = pd.DataFrame( num_attributes.apply( lambda x: x.kurtosis() ) ).T
# Concatenate
ct = pd.concat( [d2, d3, d4, ct1, ct2, d1, d5, d6] ).T.reset_index()
ct.columns = ['attributes', 'min', 'max', 'range', 'mean', 'median', 'std', 'skew', 'kurtosis']
ct
###Output
_____no_output_____
###Markdown
1.6.2. Numerical - Sessions
###Code
# Central Tendency - Mean, Median
ct1 = pd.DataFrame( num_attributes_sessions.apply( np.mean ) ).T
ct2 = pd.DataFrame( num_attributes_sessions.apply( np.median ) ).T
# Dispersions - Std, Min, Max, Range, Skew, Kurtosis
d1 = pd.DataFrame( num_attributes_sessions.apply( np.std ) ).T
d2 = pd.DataFrame( num_attributes_sessions.apply( min ) ).T
d3 = pd.DataFrame( num_attributes_sessions.apply( max ) ).T
d4 = pd.DataFrame( num_attributes_sessions.apply( lambda x: x.max() - x.min() ) ).T
d5 = pd.DataFrame( num_attributes_sessions.apply( lambda x: x.skew() ) ).T
d6 = pd.DataFrame( num_attributes_sessions.apply( lambda x: x.kurtosis() ) ).T
# Concatenate
ct = pd.concat( [d2, d3, d4, ct1, ct2, d1, d5, d6] ).T.reset_index()
ct.columns = ['attributes', 'min', 'max', 'range', 'mean', 'median', 'std', 'skew', 'kurtosis']
ct
###Output
_____no_output_____
###Markdown
1.6.3. Categorial - Users
###Code
cat_attributes.drop( 'id', axis=1 ).describe()
###Output
_____no_output_____
###Markdown
1.6.4. Categorial - Sesssions
###Code
cat_attributes_sessions.drop( 'user_id', axis=1 ).describe()
###Output
_____no_output_____
###Markdown
1.6.5. Correlation Matrix - Sessions
###Code
cat_attributes_list = cat_attributes_sessions.drop( 'user_id', axis=1 ).columns.tolist()
corr_dict = {}
for i in range( len ( cat_attributes_list ) ):
corr_list = []
for j in range( len( cat_attributes_list ) ):
ref = cat_attributes_list[i]
feat = cat_attributes_list[j]
# correlation
corr = cramer_v( cat_attributes_sessions[ ref ], cat_attributes_sessions[ feat ] )
# append a list
corr_list.append( corr )
# append the correlation list for each ref attribute
corr_dict[ ref ] = corr_list
d = pd.DataFrame( corr_dict )
d = d.set_index( d.columns)
sns.heatmap( d, annot=True )
###Output
_____no_output_____
###Markdown
2.0. Feature Engineering
###Code
df2 = df1.copy()
df2.shape
df2.dtypes
###Output
_____no_output_____
###Markdown
2.1. Create New Features
###Code
# days from first active up to first booking
df2['first_active'] = pd.to_datetime( df2['timestamp_first_active'].dt.strftime( '%Y-%m-%d' ) )
df2['days_from_first_active_until_booking'] = ( df2['date_first_booking'] - df2['first_active'] ).apply( lambda x: x.days )
# days from first active up to account created
df2['days_from_first_active_until_account_created'] = ( df2['date_account_created'] - df2['first_active'] ).apply( lambda x: x.days )
# days from account created up to first booking
df2['days_from_account_created_until_first_booking'] = ( df2['date_first_booking'] - df2['date_account_created'] ).apply( lambda x: x.days )
# ================== First Active ==================
# year first active
df2['year_first_active'] = df2['first_active'].dt.year
# month first active
df2['month_first_active'] = df2['first_active'].dt.month
# day first active
df2['day_first_active'] = df2['first_active'].dt.day
# day of week first active
df2['day_of_week_first_active'] = df2['first_active'].dt.dayofweek
# week of year first active
df2['week_of_year_first_active'] = df2['first_active'].dt.weekofyear
# ================== First Booking ==================
# year first booking
df2['year_first_booking'] = df2['date_first_booking'].dt.year
# month first booking
df2['month_first_booking'] = df2['date_first_booking'].dt.month
# day first booking
df2['day_first_booking'] = df2['date_first_booking'].dt.day
# day of week first booking
df2['day_of_week_first_booking'] = df2['date_first_booking'].dt.dayofweek
# week of year first booking
df2['week_of_year_first_booking'] = df2['date_first_booking'].dt.weekofyear
# ================== First Account Created =================
# year first booking
df2['year_account_created'] = df2['date_account_created'].dt.year
# month account_created
df2['month_account_created'] = df2['date_account_created'].dt.month
# day account_created
df2['day_account_created'] = df2['date_account_created'].dt.day
# day of week account_created
df2['day_of_week_account_created'] = df2['date_account_created'].dt.dayofweek
# week of year account_created
df2['week_of_year_account_created'] = df2['date_account_created'].dt.weekofyear
df2.shape
df2[['id', 'date_account_created', 'day_account_created', 'day_first_booking']].sample(10)
###Output
_____no_output_____
###Markdown
3.0. Data Filtering
###Code
a = [2100, 3500, 4000, 8000, 10000, 16000]
b = [19, 28, 29, 30, 31, 32]
( 19 - np.mean( b ) ) / np.std( b )  # scratch: z-score of 19 within b, for outlier intuition
np.mean( b )
np.std( b )
df3 = df2.copy()
df3.shape
###Output
_____no_output_____
###Markdown
3.1. Filtering Rows
###Code
# Filtering rows:
# age - greater than 15 and lower than 120 - there are few people over 120 years old
df3 = df3[( df3['age'] > 15 ) & ( df3['age'] < 120 )]
# secs_elapsed - an elapsed time of 0 seconds on the website is not possible
#df3 = df3[df3['secs_elapsed'] > 0]
###Output
_____no_output_____
###Markdown
3.2. Columns Selection
###Code
cols = ['date_account_created', 'date_first_booking', 'timestamp_first_active',
'first_active'] # original datetime columns (duplicated 'date_account_created' entry removed)
###Output
_____no_output_____
###Markdown
4.0. Balanced Dataset
###Code
df4 = df3.drop( cols, axis=1 )
df4.shape
# Encoder Categorical Variables
ohe = pp.OneHotEncoder()
# Numerical
col_num = df4.select_dtypes( include=['int64', 'float64'] ).columns.tolist()
# Categorical
col_cat = df4.select_dtypes( exclude=['int64', 'float64', 'datetime64[ns]'] ).drop( ['id', 'country_destination'], axis=1 ).columns.tolist()
# encoding
df4_dummy = pd.DataFrame( ohe.fit_transform( df4[ col_cat] ).toarray(), index=df4.index )
# join numerical and categorical
df42 = pd.concat( [df4[col_num], df4_dummy], axis=1 )
df42.shape
###Output
_____no_output_____
###Markdown
4.1. Random Undersampling
###Code
# ratio_balanced
ratio_balanced = {'NDF': 10000 }
# define sampler
undersampling = us.RandomUnderSampler( sampling_strategy=ratio_balanced, random_state=32 )
# apply sampler
X_under, y_under = undersampling.fit_resample( df42, df4['country_destination'] )
df4['country_destination'].value_counts()
y_under.value_counts()
###Output
_____no_output_____
###Markdown
4.2. Random Oversampling
###Code
# ratio_balanced
#ratio_balanced = {'NDF': 10000 }
# define sampler
oversampling = oversamp.RandomOverSampler( sampling_strategy='all', random_state=32 )
# apply sampler
X_over, y_over = oversampling.fit_resample( df42, df4['country_destination'] )
df4['country_destination'].value_counts()
y_over.value_counts()
###Output
_____no_output_____
###Markdown
4.3. SMOTE + TOMEKLINK
###Code
ratio_balanced = {'NDF': 54852,
'US': 48057,
'other': 6*7511,
'FR': 12*3669,
'IT': 20*2014,
'GB': 30*1758,
'ES': 30*1685,
'CA': 40*1064,
'DE': 45*841,
'NL': 80*595,
'AU': 85*433,
'PT': 300*157}
# define sampler
smt = c.SMOTETomek( sampling_strategy=ratio_balanced, random_state=32, n_jobs=-1 )
# apply sampler
X_smt, y_smt = smt.fit_resample( df42, df4['country_destination'] )
df4['country_destination'].value_counts()
y_smt.value_counts()
# numerical data
df43 = X_smt[ col_num ]
# categorical data
df44 = X_smt.drop( col_num, axis=1 )
df45 = pd.DataFrame( ohe.inverse_transform( df44 ), columns=col_cat, index=df44.index )
# join numerical categorical
df46 = pd.concat( [df43, df45], axis=1 )
df46['country_destination'] = y_smt
###Output
_____no_output_____
###Markdown
5.0. Exploratory Data Analysis ( EDA ) 5.1. Hypothesis Validation ( Unbalanced Dataset )
###Code
df51 = df4.copy()
###Output
_____no_output_____
###Markdown
**H01.** Across all destinations, users take 15 days, on average, to make their first Airbnb booking after their first activation.**True.** Across all destinations, users take up to 6 days to book their first Airbnb
###Code
plt.figure( figsize=(20, 12))
plt.subplot( 3, 1, 1 )
aux01 = df51[['days_from_first_active_until_booking', 'country_destination']].groupby( 'country_destination' ).median().reset_index()
sns.barplot( x='country_destination', y='days_from_first_active_until_booking',
data=aux01.sort_values( 'days_from_first_active_until_booking' ) )
# remove outlier
plt.subplot( 3, 1, 2 )
aux02 = df51[df51['country_destination'] != 'NDF']
aux02 = aux02[['days_from_first_active_until_booking', 'country_destination']].groupby( 'country_destination' ).median().reset_index()
sns.barplot( x='country_destination', y='days_from_first_active_until_booking',
data=aux02.sort_values( 'days_from_first_active_until_booking' ) )
###Output
_____no_output_____
###Markdown
**H02.** Across all destinations, users take 3 days, on average, to sign up on the site.**True.** Across all destinations, users take up to 2 days to complete sign-up
###Code
plt.figure( figsize=(20, 12))
aux01 = df51[['days_from_first_active_until_account_created', 'country_destination']].groupby( 'country_destination' ).mean().reset_index()
sns.barplot( x='country_destination', y='days_from_first_active_until_account_created',
data=aux01.sort_values( 'days_from_first_active_until_account_created' ) )
###Output
_____no_output_____
###Markdown
**H03.** The annual volume of bookings made during the summer increased 20% for destinations within the USA. **False.** Booking volume increases during the summer between the years 2010 and 2013.
###Code
aux01 = df51[['year_first_booking', 'month_first_booking', 'country_destination']].\
groupby( ['year_first_booking', 'month_first_booking', 'country_destination'] ). \
size().reset_index().rename( columns={0:'count'})
# select only summer
aux01 = aux01[( aux01['month_first_booking'].isin( [6, 7, 8, 9] ) ) & (aux01['country_destination'] == 'US')]
aux02 = aux01[['year_first_booking', 'count']].groupby( 'year_first_booking' ).sum().reset_index()
aux02['delta'] = 100*aux02['count'].pct_change().fillna( 0 )
plt.figure( figsize=(20,12))
sns.barplot( x='year_first_booking', y='delta', data=aux02)
###Output
_____no_output_____
###Markdown
**H04.** Female users make 10% more bookings to countries outside the USA. **H05.** The Google marketing channel accounts for 40% of bookings to countries outside the USA. **H06.** The USA destination accounts for more than 20% across all channels. **H07.** The average user age is 35 across all destinations. **H08.** The percentage of users browsing the site in American English who book accommodation in any destination is greater than 90%. **H09.** Is the number of Airbnb bookings increasing or decreasing over the years? **H10.** The number of Airbnb bookings is increasing over the years. 5.2. Variables Impact ( Balanced Dataset )
###Code
df52 = df4.copy()
###Output
_____no_output_____
###Markdown
5.2.1. Univariate Analysis
###Code
profile = ProfileReport( df52, title='Airbnb Booking' )
#profile.to_notebook_iframe()
profile.to_file( output_file='airbnb_booking_statistics_after_cleaning.html' )
# ===================== High Correlation =====================
# days_from_first_active_until_booking x days_from_account_created_until_first_booking
# Remove: days_from_first_active_until_booking
# year_first_active x year_account_created
# Remove: year_first_active
# month_first_active x month_account_created
# Remove: month_first_active
# day_first_active x day_account_created
# Remove: day_first_active
# day_of_week_first_active x day_of_week_account_created
# Remove: day_of_week_first_active
# week_of_year_first_active x week_of_year_account_created
# Remove: week_of_year_first_active
# month_first_booking x week_of_year_first_booking
# Remove: month_first_booking
# month_account_created x week_of_year_account_created
# Remove: month_account_created
# year_first_booking x year_account_created
# Remove: year_first_booking
# week_of_year_first_booking x week_of_year_account_created
# Remove: week_of_year_first_booking
# affiliate_channel x affiliate_provider
# Remove: affiliate_provider
# first_device_type x first_browser
# Remove: first_browser
#first_device_type x sigup_app
#Remove: first_device_type
###Output
_____no_output_____
###Markdown
5.2.2. Bivariate Analysis 5.2.3. Multivariate Analysis
###Code
cols = ['days_from_first_active_until_booking', 'year_first_active', 'month_first_active', 'day_first_active',
'day_of_week_first_active', 'week_of_year_first_active', 'month_first_booking', 'month_account_created',
'year_first_booking', 'week_of_year_first_booking', 'affiliate_provider',
'first_browser', 'first_device_type', 'language'] # high correlation
###Output
_____no_output_____
###Markdown
6.0. Data Preparation
###Code
df6 = df46.drop( cols, axis=1 )
df6.shape
df6.dtypes
###Output
_____no_output_____
###Markdown
6.1. Rescaling
###Code
ss = pp.StandardScaler()
rs = pp.RobustScaler()
mms = pp.MinMaxScaler()
# age - Standardization
df6['age'] = ss.fit_transform( df6[['age']].values )
# signup_flow - Robust Scaler
df6['signup_flow'] = rs.fit_transform( df6[['signup_flow']].values )
# days_from_first_active_until_account_created - Robust Scaler
df6['days_from_first_active_until_account_created'] = rs.fit_transform( df6[['days_from_first_active_until_account_created']].values )
# days_from_account_created_until_first_booking - Robust Scaler
df6['days_from_account_created_until_first_booking'] = rs.fit_transform( df6[['days_from_account_created_until_first_booking']].values )
# year_account_created - MinMax Scaler
df6['year_account_created'] = mms.fit_transform( df6[['year_account_created']].values )
###Output
_____no_output_____
###Markdown
6.2. Encoding
###Code
te = TargetEncoder()
# gender - One Hot Encoder
df6 = pd.get_dummies( df6, prefix=['gender'], columns=['gender'] )
# signup_method - One Hot Encoder
df6 = pd.get_dummies( df6, prefix=['signup_method'], columns=['signup_method'] )
# signup_app - One Hot Encoder
df6 = pd.get_dummies( df6, prefix=['signup_app'], columns=['signup_app'] )
# affiliate_channel - Target Encoder
c = {'NDF':0, 'US':1, 'other':2, 'CA':3, 'FR':4, 'IT':5, 'ES':6, 'GB':7, 'NL':8, 'DE':9, 'AU':10, 'PT':11}
df6['affiliate_channel'] = te.fit_transform( df6[['affiliate_channel']].values, df6['country_destination'].map( c ) )
# first_affiliate_tracked - Target Encoder
df6['first_affiliate_tracked'] = te.fit_transform( df6[['first_affiliate_tracked']].values, df6['country_destination'].map( c ) )
###Output
/Users/meigarom.lopes/.pyenv/versions/3.8.0/envs/airbnbpredictfirstbooking/lib/python3.8/site-packages/category_encoders/utils.py:21: FutureWarning: is_categorical is deprecated and will be removed in a future version. Use is_categorical_dtype instead
elif pd.api.types.is_categorical(cols):
/Users/meigarom.lopes/.pyenv/versions/3.8.0/envs/airbnbpredictfirstbooking/lib/python3.8/site-packages/category_encoders/utils.py:21: FutureWarning: is_categorical is deprecated and will be removed in a future version. Use is_categorical_dtype instead
elif pd.api.types.is_categorical(cols):
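###Markdown
Note: TargetEncoder replaces each category with the mean of the supplied target, so the integer map `c` imposes an arbitrary order on a nominal target; this is a simplification worth keeping in mind when interpreting the encoded features.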
###Markdown
6.3. Transformation
###Code
# week_of_year_account_created
df6['week_of_year_account_created_sin'] = df6['week_of_year_account_created'].apply( lambda x: np.sin( x * (2*np.pi/52 ) ) )
df6['week_of_year_account_created_cos'] = df6['week_of_year_account_created'].apply( lambda x: np.cos( x * (2*np.pi/52 ) ) )
# day_of_week_first_booking
df6['day_of_week_first_booking_sin'] = df6['day_of_week_first_booking'].apply( lambda x: np.sin( x * (2*np.pi/7 ) ) )
df6['day_of_week_first_booking_cos'] = df6['day_of_week_first_booking'].apply( lambda x: np.cos( x * (2*np.pi/7 ) ) )
# day_account_created
df6['day_account_created_sin'] = df6['day_account_created'].apply( lambda x: np.sin( x * (2*np.pi/31 ) ) )
df6['day_account_created_cos'] = df6['day_account_created'].apply( lambda x: np.cos( x * (2*np.pi/31 ) ) )
# day_of_week_account_created
df6['day_of_week_account_created_sin'] = df6['day_of_week_account_created'].apply( lambda x: np.sin( x * (2*np.pi/7 ) ) )
df6['day_of_week_account_created_cos'] = df6['day_of_week_account_created'].apply( lambda x: np.cos( x * (2*np.pi/7 ) ) )
###Output
_____no_output_____
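###Markdown
These sine/cosine pairs encode the cyclical nature of calendar features: week $w$ maps to $(\sin(2\pi w/52), \cos(2\pi w/52))$, so week 52 lands next to week 1 in the encoded space, whereas a plain integer encoding would place them at opposite ends.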
###Markdown
7.0. Feature Selection
###Code
df7 = df6.copy()
X = df6.drop( 'country_destination', axis=1 )
y = df6['country_destination'].copy()
# Split dataset into training and test
x_train, x_test, y_train, y_test = ms.train_test_split( X, y, test_size=0.2, random_state=32 )
###Output
_____no_output_____
###Markdown
8.0. Machine Learning Model 8.1. Baseline Model
###Code
country_destination_list = df1['country_destination'].drop_duplicates().sort_values().tolist()
k_num = y_test.shape[0]
country_destination_weights = df1['country_destination'].value_counts( normalize=True ).sort_index().tolist()
yhat_random = random.choices( population=country_destination_list,
weights=country_destination_weights,
k=k_num )
###Output
_____no_output_____
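###Markdown
For a random guesser weighted by the class priors $p_i$, the expected accuracy is $\sum_i p_i^2$, which is what the roughly 9% accuracy below reflects; any useful model must clear this floor.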
###Markdown
8.1.1. Baseline Model Performance
###Code
# Accuracy
acc_random = m.accuracy_score( y_test, yhat_random )
print( 'Accuracy: {}'.format( acc_random ) )
# Balanced Accuray
balanced_acc_random = m.balanced_accuracy_score( y_test, yhat_random )
print( 'Balanced Accuracy:{}'.format( balanced_acc_random ) )
# Kappa Metrics
kappa_random = m.cohen_kappa_score( y_test, yhat_random )
print( 'Kappa Score: {}'.format( kappa_random ) )
# Classification report
print( m.classification_report( y_test, yhat_random ) )
# Confusion Matrix
mt.plot_confusion_matrix( y_test, yhat_random, normalize=False, figsize=(12,12))
###Output
Accuracy: 0.09213223987698616
Balanced Accuracy:0.08310397046943117
Kappa Score: -0.00020319419528025406
precision recall f1-score support
AU 0.06 0.00 0.00 7470
CA 0.08 0.01 0.02 8517
DE 0.06 0.01 0.01 7462
ES 0.10 0.01 0.02 10003
FR 0.08 0.03 0.05 8741
GB 0.10 0.01 0.03 10489
IT 0.07 0.02 0.03 7962
NDF 0.10 0.45 0.17 11058
NL 0.08 0.00 0.01 9675
PT 0.10 0.00 0.00 9465
US 0.09 0.39 0.14 9435
other 0.08 0.06 0.07 8979
accuracy 0.09 109256
macro avg 0.08 0.08 0.04 109256
weighted avg 0.08 0.09 0.05 109256
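###Markdown
Cohen's kappa corrects accuracy for chance agreement: $\kappa = (p_o - p_e)/(1 - p_e)$, where $p_o$ is the observed accuracy and $p_e$ the accuracy expected by chance; the near-zero kappa above confirms the baseline is no better than random.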
###Markdown
8.2. Neural Network - MLP
###Code
ohe = pp.OneHotEncoder()
y_train_nn = ohe.fit_transform( y_train.values.reshape( -1, 1 ) ).toarray()
print( 'Number of Rows: {}'.format( x_train.shape[0] ) )
print( 'Number of Features: {}'.format( x_train.shape[1] ) )
print( 'Number of Classes: {}'.format( y_train.nunique() ) )
# model definition
model = ml.Sequential()
model.add( l.Dense( 64, input_dim=x_train.shape[1], activation='relu' ) )
model.add( l.Dense( 12, activation='softmax') )
# model compile
model.compile( loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'] )
# train model
model.fit( x_train, y_train_nn, epochs=100 )
###Output
Epoch 1/100
13657/13657 [==============================] - 10s 753us/step - loss: 2.1184 - accuracy: 0.2448
Epoch 2/100
13657/13657 [==============================] - 10s 766us/step - loss: 2.0116 - accuracy: 0.2881
Epoch 3/100
13657/13657 [==============================] - 10s 739us/step - loss: 1.9689 - accuracy: 0.3042
Epoch 4/100
13657/13657 [==============================] - 10s 746us/step - loss: 1.9419 - accuracy: 0.3145
Epoch 5/100
13657/13657 [==============================] - 10s 703us/step - loss: 1.9246 - accuracy: 0.3212
Epoch 6/100
13657/13657 [==============================] - 10s 698us/step - loss: 1.9136 - accuracy: 0.3246
Epoch 7/100
13657/13657 [==============================] - 10s 700us/step - loss: 1.9059 - accuracy: 0.3264
Epoch 8/100
13657/13657 [==============================] - 10s 728us/step - loss: 1.9000 - accuracy: 0.3283
Epoch 9/100
13657/13657 [==============================] - 10s 710us/step - loss: 1.8959 - accuracy: 0.3295
Epoch 10/100
13657/13657 [==============================] - 10s 706us/step - loss: 1.8914 - accuracy: 0.3302
Epoch 11/100
13657/13657 [==============================] - 10s 720us/step - loss: 1.8886 - accuracy: 0.3313
Epoch 12/100
13657/13657 [==============================] - 10s 711us/step - loss: 1.8859 - accuracy: 0.3320
Epoch 13/100
13657/13657 [==============================] - 10s 706us/step - loss: 1.8829 - accuracy: 0.3336
Epoch 14/100
13657/13657 [==============================] - 10s 723us/step - loss: 1.8799 - accuracy: 0.3349
Epoch 15/100
13657/13657 [==============================] - 10s 712us/step - loss: 1.8789 - accuracy: 0.3351
Epoch 16/100
13657/13657 [==============================] - 10s 711us/step - loss: 1.8758 - accuracy: 0.3362
Epoch 17/100
13657/13657 [==============================] - 10s 728us/step - loss: 1.8748 - accuracy: 0.3369
Epoch 18/100
13657/13657 [==============================] - 10s 714us/step - loss: 1.8728 - accuracy: 0.3368
Epoch 19/100
13657/13657 [==============================] - 10s 729us/step - loss: 1.8709 - accuracy: 0.3380
Epoch 20/100
13657/13657 [==============================] - 10s 755us/step - loss: 1.8699 - accuracy: 0.3377
Epoch 21/100
13657/13657 [==============================] - 10s 709us/step - loss: 1.8688 - accuracy: 0.3385
Epoch 22/100
13657/13657 [==============================] - 10s 717us/step - loss: 1.8672 - accuracy: 0.3392
Epoch 23/100
13657/13657 [==============================] - 10s 717us/step - loss: 1.8662 - accuracy: 0.3391
Epoch 24/100
13657/13657 [==============================] - 10s 723us/step - loss: 1.8652 - accuracy: 0.3396
Epoch 25/100
13657/13657 [==============================] - 10s 721us/step - loss: 1.8639 - accuracy: 0.3395
Epoch 26/100
13657/13657 [==============================] - 10s 725us/step - loss: 1.8628 - accuracy: 0.3403
Epoch 27/100
13657/13657 [==============================] - 10s 717us/step - loss: 1.8626 - accuracy: 0.3396
Epoch 28/100
13657/13657 [==============================] - 10s 744us/step - loss: 1.8610 - accuracy: 0.3409
Epoch 29/100
13657/13657 [==============================] - 10s 760us/step - loss: 1.8608 - accuracy: 0.3414
Epoch 30/100
13657/13657 [==============================] - 10s 769us/step - loss: 1.8607 - accuracy: 0.3416
Epoch 31/100
13657/13657 [==============================] - 10s 762us/step - loss: 1.8592 - accuracy: 0.3417
Epoch 32/100
13657/13657 [==============================] - 10s 755us/step - loss: 1.8583 - accuracy: 0.3416
Epoch 33/100
13657/13657 [==============================] - 10s 755us/step - loss: 1.8578 - accuracy: 0.3411
Epoch 34/100
13657/13657 [==============================] - 10s 718us/step - loss: 1.8569 - accuracy: 0.3411
Epoch 35/100
13657/13657 [==============================] - 10s 715us/step - loss: 1.8565 - accuracy: 0.3417
Epoch 36/100
13657/13657 [==============================] - 10s 725us/step - loss: 1.8560 - accuracy: 0.3414
Epoch 37/100
13657/13657 [==============================] - 10s 716us/step - loss: 1.8549 - accuracy: 0.3427
Epoch 38/100
13657/13657 [==============================] - 10s 746us/step - loss: 1.8542 - accuracy: 0.3426
Epoch 39/100
13657/13657 [==============================] - 10s 719us/step - loss: 1.8537 - accuracy: 0.3426
Epoch 40/100
13657/13657 [==============================] - 11s 783us/step - loss: 1.8532 - accuracy: 0.3421
Epoch 41/100
13657/13657 [==============================] - 11s 822us/step - loss: 1.8531 - accuracy: 0.3425
Epoch 42/100
13657/13657 [==============================] - 11s 838us/step - loss: 1.8522 - accuracy: 0.3426
Epoch 43/100
13657/13657 [==============================] - 12s 869us/step - loss: 1.8516 - accuracy: 0.3433
Epoch 44/100
13657/13657 [==============================] - 12s 866us/step - loss: 1.8505 - accuracy: 0.3432
Epoch 45/100
13657/13657 [==============================] - 12s 871us/step - loss: 1.8503 - accuracy: 0.3430
Epoch 46/100
13657/13657 [==============================] - 12s 852us/step - loss: 1.8504 - accuracy: 0.3428
Epoch 47/100
13657/13657 [==============================] - 11s 814us/step - loss: 1.8506 - accuracy: 0.3431
Epoch 48/100
13657/13657 [==============================] - 12s 852us/step - loss: 1.8492 - accuracy: 0.3441
Epoch 49/100
13657/13657 [==============================] - 11s 841us/step - loss: 1.8486 - accuracy: 0.3444
Epoch 50/100
13657/13657 [==============================] - 12s 853us/step - loss: 1.8487 - accuracy: 0.3436
Epoch 51/100
13657/13657 [==============================] - 12s 879us/step - loss: 1.8482 - accuracy: 0.3442
Epoch 52/100
13657/13657 [==============================] - 12s 868us/step - loss: 1.8473 - accuracy: 0.3445
Epoch 53/100
13657/13657 [==============================] - 12s 909us/step - loss: 1.8467 - accuracy: 0.3446
Epoch 54/100
13657/13657 [==============================] - 13s 918us/step - loss: 1.8467 - accuracy: 0.3445
Epoch 55/100
13657/13657 [==============================] - 12s 906us/step - loss: 1.8465 - accuracy: 0.3445
Epoch 56/100
13657/13657 [==============================] - 13s 917us/step - loss: 1.8463 - accuracy: 0.3449
Epoch 57/100
13657/13657 [==============================] - 12s 913us/step - loss: 1.8468 - accuracy: 0.3447
Epoch 58/100
13657/13657 [==============================] - 13s 988us/step - loss: 1.8464 - accuracy: 0.3451
Epoch 59/100
13657/13657 [==============================] - 14s 1ms/step - loss: 1.8453 - accuracy: 0.3453
Epoch 60/100
13657/13657 [==============================] - 15s 1ms/step - loss: 1.8456 - accuracy: 0.3447
Epoch 61/100
13657/13657 [==============================] - 15s 1ms/step - loss: 1.8445 - accuracy: 0.3454
Epoch 62/100
13657/13657 [==============================] - 15s 1ms/step - loss: 1.8444 - accuracy: 0.3451
Epoch 63/100
13657/13657 [==============================] - 16s 1ms/step - loss: 1.8436 - accuracy: 0.3456
Epoch 64/100
13657/13657 [==============================] - 16s 1ms/step - loss: 1.8436 - accuracy: 0.3449
Epoch 65/100
13657/13657 [==============================] - 16s 1ms/step - loss: 1.8437 - accuracy: 0.3458
Epoch 66/100
13657/13657 [==============================] - 12s 896us/step - loss: 1.8445 - accuracy: 0.3454
Epoch 67/100
13657/13657 [==============================] - 11s 811us/step - loss: 1.8436 - accuracy: 0.3458
Epoch 68/100
13657/13657 [==============================] - 10s 716us/step - loss: 1.8425 - accuracy: 0.3453
Epoch 69/100
13657/13657 [==============================] - 9s 687us/step - loss: 1.8421 - accuracy: 0.3457
Epoch 70/100
13657/13657 [==============================] - 9s 648us/step - loss: 1.8421 - accuracy: 0.3455
Epoch 71/100
13657/13657 [==============================] - 9s 638us/step - loss: 1.8421 - accuracy: 0.3461
Epoch 72/100
13657/13657 [==============================] - 9s 640us/step - loss: 1.8417 - accuracy: 0.3457
Epoch 73/100
13657/13657 [==============================] - 8s 621us/step - loss: 1.8419 - accuracy: 0.3461
Epoch 74/100
13657/13657 [==============================] - 9s 623us/step - loss: 1.8424 - accuracy: 0.3459
Epoch 75/100
13657/13657 [==============================] - 8s 616us/step - loss: 1.8409 - accuracy: 0.3471
Epoch 76/100
13657/13657 [==============================] - 9s 640us/step - loss: 1.8415 - accuracy: 0.3468
Epoch 77/100
13657/13657 [==============================] - 9s 644us/step - loss: 1.8409 - accuracy: 0.3462
Epoch 78/100
13657/13657 [==============================] - 9s 667us/step - loss: 1.8411 - accuracy: 0.3468
Epoch 79/100
13657/13657 [==============================] - 9s 665us/step - loss: 1.8411 - accuracy: 0.3466
Epoch 80/100
13657/13657 [==============================] - 9s 672us/step - loss: 1.8404 - accuracy: 0.3464
Epoch 81/100
13657/13657 [==============================] - 9s 668us/step - loss: 1.8406 - accuracy: 0.3471
Epoch 82/100
13657/13657 [==============================] - 9s 676us/step - loss: 1.8412 - accuracy: 0.3475
Epoch 83/100
13657/13657 [==============================] - 9s 680us/step - loss: 1.8407 - accuracy: 0.3464
Epoch 84/100
13657/13657 [==============================] - 9s 693us/step - loss: 1.8394 - accuracy: 0.3469
Epoch 85/100
13657/13657 [==============================] - 9s 694us/step - loss: 1.8396 - accuracy: 0.3473
Epoch 86/100
13657/13657 [==============================] - 9s 692us/step - loss: 1.8396 - accuracy: 0.3467
Epoch 87/100
13657/13657 [==============================] - 9s 685us/step - loss: 1.8402 - accuracy: 0.3469
Epoch 88/100
13657/13657 [==============================] - 9s 667us/step - loss: 1.8391 - accuracy: 0.3472
Epoch 89/100
13657/13657 [==============================] - 9s 663us/step - loss: 1.8388 - accuracy: 0.3477
Epoch 90/100
13657/13657 [==============================] - 9s 665us/step - loss: 1.8388 - accuracy: 0.3470
Epoch 91/100
13657/13657 [==============================] - 9s 663us/step - loss: 1.8386 - accuracy: 0.3470
Epoch 92/100
13657/13657 [==============================] - 9s 684us/step - loss: 1.8390 - accuracy: 0.3478
Epoch 93/100
13657/13657 [==============================] - 10s 704us/step - loss: 1.8389 - accuracy: 0.3476
Epoch 94/100
13657/13657 [==============================] - 10s 706us/step - loss: 1.8387 - accuracy: 0.3466
Epoch 95/100
13657/13657 [==============================] - 10s 755us/step - loss: 1.8381 - accuracy: 0.3480
Epoch 96/100
13657/13657 [==============================] - 10s 761us/step - loss: 1.8385 - accuracy: 0.3482
Epoch 97/100
13657/13657 [==============================] - 12s 852us/step - loss: 1.8379 - accuracy: 0.3474
Epoch 98/100
13657/13657 [==============================] - 16s 1ms/step - loss: 1.8377 - accuracy: 0.3476
Epoch 99/100
13657/13657 [==============================] - 13s 983us/step - loss: 1.8384 - accuracy: 0.3480
Epoch 100/100
13657/13657 [==============================] - 13s 956us/step - loss: 1.8381 - accuracy: 0.3473
###Markdown
8.2.1. NN Performance
###Code
# prediction
pred_nn = model.predict( x_test )
# invert prediction
yhat_nn = ohe.inverse_transform( pred_nn )
# prediction prepare
y_test_nn = y_test.to_numpy()
yhat_nn = yhat_nn.reshape( 1, -1 )[0]
# Accuracy
acc_nn = m.accuracy_score( y_test_nn, yhat_nn )
print( 'Accuracy: {}'.format( acc_nn ) )
# Balanced Accuray
balanced_acc_nn = m.balanced_accuracy_score( y_test_nn, yhat_nn )
print( 'Balanced Accuracy:{}'.format( balanced_acc_nn ) )
# Kappa Metrics
kappa_nn = m.cohen_kappa_score( y_test_nn, yhat_nn )
print( 'Kappa Score: {}'.format( kappa_nn ) )
# Classification report
print( m.classification_report( y_test_nn, yhat_nn ) )
# Confusion Matrix
mt.plot_confusion_matrix( y_test_nn, yhat_nn, normalize=False, figsize=(12,12))
###Output
Accuracy: 0.35109284615947867
Balanced Accuracy:0.33314777646607335
Kappa Score: 0.29028000162482237
precision recall f1-score support
AU 0.32 0.37 0.34 7470
CA 0.20 0.21 0.20 8517
DE 0.21 0.14 0.17 7462
ES 0.20 0.19 0.19 10003
FR 0.15 0.07 0.10 8741
GB 0.19 0.15 0.17 10489
IT 0.16 0.08 0.10 7962
NDF 1.00 1.00 1.00 11058
NL 0.24 0.50 0.32 9675
PT 0.60 0.92 0.73 9465
US 0.26 0.30 0.28 9435
other 0.17 0.07 0.10 8979
accuracy 0.35 109256
macro avg 0.31 0.33 0.31 109256
weighted avg 0.32 0.35 0.32 109256
###Markdown
8.2.2. NN Performance - Cross-Validation
###Code
# generate k-fold
num_folds = 5
kfold = ms.StratifiedKFold( n_splits=num_folds, shuffle=True, random_state=32 )
balanced_acc_list = []
kappa_acc_list = []
i = 1
for train_ix, val_ix in kfold.split( x_train, y_train ):
print( 'Fold Number: {}/{}'.format( i, num_folds ) )
# get fold
x_train_fold = x_train.iloc[train_ix]
y_train_fold = y_train.iloc[train_ix]
x_val_fold = x_train.iloc[val_ix]
y_val_fold = y_train.iloc[val_ix]
# target hot-encoding
ohe = pp.OneHotEncoder()
y_train_fold_nn = ohe.fit_transform( y_train_fold.values.reshape( -1, 1 ) ).toarray()
# model definition
model = ml.Sequential()
model.add( l.Dense( 256, input_dim=x_train.shape[1], activation='relu' ) )
model.add( l.Dense( 12, activation='softmax') )
# compile model
model.compile( loss='categorical_crossentropy', optimizer='adam', metrics=['accuracy'] )
# training model
model.fit( x_train_fold, y_train_fold_nn, epochs=100, batch_size=32, verbose=0 )
# prediction
pred_nn = model.predict( x_val_fold )
yhat_nn = ohe.inverse_transform( pred_nn )
# prepare data
y_test_nn = y_val_fold.to_numpy()
yhat_nn = yhat_nn.reshape( 1, -1 )[0]
# metrics
## Balanced Accuracy
balanced_acc_nn = m.balanced_accuracy_score( y_test_nn, yhat_nn )
balanced_acc_list.append( balanced_acc_nn )
## Kappa Metrics
kappa_acc_nn = m.cohen_kappa_score( y_test_nn, yhat_nn )
kappa_acc_list.append( kappa_acc_nn )
i += 1
print( 'Avg Balanced Accuracy: {} +/- {}'.format( np.round( np.mean( balanced_acc_list ), 4 ), np.round( np.std( balanced_acc_list ), 4 ) ) )
print( 'Avg Kappa: {} +/- {}'.format( np.round( np.mean( kappa_acc_list ), 4 ), np.round( np.std( kappa_acc_list ), 4 ) ) )
###Output
Avg Balanced Accuracy: 0.1666 +/- 0.0001
Avg Kappa: 0.724 +/- 0.0006
###Markdown
DATA PREPARATION Download the data: https://github.com/realpython/python-data-cleaning/blob/master/Datasets/BL-Flickr-Images-Book.csv 1. Import the required libraries
###Code
import pandas as pd
import numpy as np
df = pd.read_csv('BL-Flickr-Images-Book.csv')
df.head()
yg_dihapus = ['Edition Statement','Corporate Author']  # columns to drop ('yg_dihapus' means 'to be removed')
df.drop(yg_dihapus, inplace=True, axis=1)
df.head()
df['Identifier'].is_unique
df = df.set_index('Identifier')
df.head()
df.loc[472]
###Output
_____no_output_____
###Markdown
3. Tidying up the fields
###Code
df.dtypes.value_counts()
df.loc[1905:,'Date of Publication'].head(15)
###Output
_____no_output_____
###Markdown
~ remove extra dates in square brackets~ remove date ranges~ remove unclear dates like [1897?] --> NaN~ convert NaN
###Code
regex = r'^(\d{4})'
ekstrak = df['Date of Publication'].str.extract(r'^(\d{4})',expand=False)  # keep only the leading four-digit year
df['Date of Publication'] = pd.to_numeric(ekstrak)  # write the cleaned years back; rows without a year become NaN
df.loc[667]
df.loc[4157862]
df['Place of Publication'].tail(15)
df.loc[4115138]
publikasi = df['Place of Publication']
london = publikasi.str.contains('London')
london[:5]
oxford = publikasi.str.contains ('Oxford')
df['Place of Publication'] = np.where(london,'London',
np.where(oxford, 'Oxford',
publikasi.str.replace ('-', ' ')))
df['Place of Publication'].head(15)
###Output
_____no_output_____
###Markdown
NEW DATASET 5. Cleaning the dataset with applymap
###Code
university_town = []
with open("university_towns.txt") as file:
for line in file:
if '[edit]'in line:
state = line
else:
university_town.append((state,line))
university_town[:5]
df_kota = pd.DataFrame(university_town, columns=['State','RegionName'])
df_kota.head(15)
def get_citystate(item):
if '(' in item:
return item[:item.find('(')]
elif '[' in item:
return item[:item.find('[')]
else:
return item
df_kota = df_kota.applymap(get_citystate)
df_kota.head()
###Output
_____no_output_____
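###Markdown
applymap applies the function element-wise to every cell of the DataFrame, so get_citystate strips the trailing '(...)' and '[...]' annotations from both the State and RegionName columns in a single pass.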
###Markdown
###Code
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
import seaborn as sns
%matplotlib inline
movies = pd.read_csv('https://github.com/JuanPabloMF/datasets-platzi-course/blob/master/datasets/peliculas.csv?raw=true',encoding='utf-8')
movies.head()
movies.shape
movies.columns
movies.index
columna1 = movies['movie_title']
columna1.head()
line = movies.loc[10,:]
line
movies.loc[:,'movie_title'].head()
movies.info()
movies.dtypes == float
movies.dtypes == int
movies.dtypes == object
num = (movies.dtypes == float) | (movies.dtypes == int)
num
num.index
for i in num.index:
print(i)
num_cols = [x for x in num.index if num[x]]
num_cols
movies.dtypes == object
obj = (movies.dtypes == object)
obj_cols = [c for c in obj.index if obj[c]]
obj_cols
num_cols
movies_num = movies[num_cols]
movies_num.describe()
movies_num['duration'].hist()
movies_num['imdb_score'].hist()
movies_num['budget'].hist()
mask = (movies_num['budget'] > 1e9)
movies[mask]
financial = pd.read_csv('https://github.com/JuanPabloMF/datasets-platzi-course/blob/master/datasets/thenumbers.csv?raw=true',encoding='utf-8')
financial.head(5)
financial = financial[['movie_title','production_budget','worldwide_gross']]
gross_opening = pd.read_csv('https://github.com/JuanPabloMF/datasets-platzi-course/blob/master/datasets/opening_df.csv?raw=true')
financial.shape
movies.shape
movies['movie_title']
movies_num
movies_num = movies_num.loc[:,~movies_num.columns.duplicated()]
movies_num = pd.concat([movies_num, movies['movie_title']],axis=1)
gross_opening = gross_opening.drop('Unnamed: 0',axis=1)
movies_v2 = pd.merge(financial,movies_num,on='movie_title',how='left')
movies_v2 = pd.merge(movies_v2,gross_opening,on='movie_title',how='left')
movies_v2.shape
movies_v2.notnull().apply(pd.Series.value_counts)
(movies_v2 != 0).apply(pd.Series.value_counts)
available = ((movies_v2 != 0) & (movies_v2.notnull()))
available.all(axis=1).value_counts()
mask = available['worldwide_gross']
movies_v2 = movies_v2[mask]
((movies_v2 != 0) & (movies_v2.notnull())).worldwide_gross.value_counts()
movies_v2 = movies_v2.drop('movie_title',axis=1)
movies_v2 = movies_v2.drop('duration',axis=1)
movies_v2 = movies_v2.drop('gross',axis=1)
movies_v2.head()
movies_v2 = movies_v2[available.screens]  # note: 'available' predates the row filter above; recent pandas requires aligned boolean indexes
len(movies_v2)
from sklearn.impute import SimpleImputer
imputer = SimpleImputer(missing_values=np.nan, strategy='mean')
values = imputer.fit_transform(movies_v2)
X = pd.DataFrame(values)
X.columns = movies_v2.columns
X.index = movies_v2.index
X.head()
len(X)
movies_v2.values
values
X.to_csv('/content/drive/My Drive/Colab Notebooks/db/X_opening.csv',index=False)
###Output
_____no_output_____
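###Markdown
SimpleImputer returns a bare ndarray, which is why the column names and index are restored by hand above; in recent scikit-learn versions (1.2+), calling imputer.set_output(transform="pandas") before fitting would make it return a DataFrame directly.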
###Markdown
VUmc Research Project - Reinforcement Learning for Sepsis Prevention Data PreparationAmsterdamUMCdb version 1.0.2 March 2020 Copyright © 2003-2022 Amsterdam UMC - Amsterdam Medical Data Science 1. Clustering
###Code
from sklearn.cluster import KMeans  # needed here: this cell runs before the import cell below
import matplotlib.pyplot as plt
import numpy as np
Sum_of_squared_distances = []
K = range(2,500, 5)
for k in K:
km = KMeans(n_clusters=k)
km = km.fit(space[['Kalium (bloed)', 'ABP gemiddeld', 'Kreatinine (bloed)', 'Natrium (bloed)', 'UrineCAD', 'UrineSupraPubis', 'UrineSpontaan',
'UrineUP',
'Kreatinine',
'Nefrodrain re Uit',
'Nefrodrain li Uit',
'UrineIncontinentie',
'gender_Vrouw',
'agegroup',
'AKI']])
Sum_of_squared_distances.append(km.inertia_)
plt.plot(K, Sum_of_squared_distances, 'b-')
plt.xlabel('k')
plt.ylabel('Sum_of_squared_distances')
plt.title('Elbow Method For Optimal k')
plt.show()
from sklearn.cluster import KMeans
from sklearn.metrics import silhouette_samples, silhouette_score
def k_means_over_instances(dataset, cols, k, max_iters, n_inits):
# Take the appropriate columns.
temp_dataset = dataset[cols]
# Now apply the k-means algorithm
kmeans = KMeans(n_clusters=k, max_iter=max_iters, n_init=n_inits, random_state=0).fit(temp_dataset)
# Add the labels to the dataset
dataset['cluster'] = kmeans.labels_
# Compute the silhouette and add it as well.
silhouette_avg = silhouette_score(temp_dataset, kmeans.labels_)
silhouette_per_inst = silhouette_samples(temp_dataset, kmeans.labels_)
dataset['silhouette'] = silhouette_per_inst
return dataset, silhouette_avg
# Use k=50 based on previous runs
new_d, sil = k_means_over_instances(space, ['Kalium (bloed)', 'ABP gemiddeld', 'Kreatinine (bloed)', 'Natrium (bloed)', 'UrineCAD', 'UrineSupraPubis', 'UrineSpontaan',
'UrineUP',
'Kreatinine',
'Nefrodrain re Uit',
'Nefrodrain li Uit',
'UrineIncontinentie',
'gender_Vrouw',
'agegroup',
'AKI'], 50, 20, 10)
###Output
_____no_output_____
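###Markdown
The silhouette used above compares, for each sample $i$, its mean intra-cluster distance $a(i)$ with its mean distance to the nearest other cluster $b(i)$: $s(i) = \frac{b(i) - a(i)}{\max(a(i),\ b(i))}$; values near 1 indicate well-separated clusters and values near 0 overlapping ones.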
###Markdown
2. Bin Values
###Code
# Binning Values
binsv = [-np.inf, 0, new_d['Noradrenaline (Norepinefrine)'].median(), np.inf]
binsf = [-np.inf, 250, new_d['NaCl 0,45%/Glucose 2,5%'].median(), np.inf]
labels = [0, 1, 2]
new_d['vasop'] = pd.cut(new_d['Noradrenaline (Norepinefrine)'], bins=binsv, labels=labels)
new_d['fluid'] = pd.cut(new_d['NaCl 0,45%/Glucose 2,5%'], bins=binsf, labels=labels)
# 0 = no vasop, no fluid
# 1 = no vasop, low fluid
# 2 = no vasop, high fluid
# 3 = low vasop, no fluid
# 4 = low vasop, low fluid
# 5 = low vasop, high fluid
# 6 = high vasop, no fluid
# 7 = high vasop, low fluid
# 8 = high vasop, high fluid
act = []
for v, f in zip(new_d['vasop'], new_d['fluid']):
if v == 0 and f == 0: act.append('0')
elif v == 0 and f == 1: act.append('1')
elif v == 0 and f == 2: act.append('2')
elif v == 1 and f == 0: act.append('3')
elif v == 1 and f == 1: act.append('4')
elif v == 1 and f == 2: act.append('5')
elif v == 2 and f == 0: act.append('6')
elif v == 2 and f == 1: act.append('7')
elif v == 2 and f == 2: act.append('8')
new_d['action'] = act
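# equivalent arithmetic shortcut (a sketch): the mapping above is exactly action = 3*vasop + fluid, e.g.
# new_d['action'] = (3 * new_d['vasop'].astype(int) + new_d['fluid'].astype(int)).astype(str)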
new_d['reward'] = -new_d['AKI']
new_d['next'] = new_d['cluster'].shift(-1)
final = new_d.dropna()
###Output
_____no_output_____
notebook/Workshop_Practical_Python.ipynb | ###Markdown
Introduction to Python Author: Shaowu Pan Date: 11/16/2016
###Code
import this
###Output
_____no_output_____
###Markdown
1. *Let's start with working in a single Python file* 1.1 Basic variable types int long float complex str bool
###Code
# signed integer
x = 1
type(x)
# long integer
y = 1000L
# float
z = 1.0
###Output
_____no_output_____
###Markdown
Python is case-sensitive
###Code
# string variable: single or double quotes are fine
X = 'good!'
X[0:4]
Y = True # or False
###Output
_____no_output_____
###Markdown
Useful function print - print out printable variable
###Code
print x
print y
print X
print z
print Y
###Output
1
1000
good!
1.0
True
###Markdown
raw_input- input value from keyboard
###Code
name = raw_input("My name is ")
print name
###Output
My name is Shaowu
Shaowu
###Markdown
type - check type of unknown variable
###Code
print type(x)
print type(y)
print type(X)
print type(z)
print type(Y)
###Output
<type 'int'>
<type 'long'>
<type 'str'>
<type 'float'>
<type 'bool'>
###Markdown
[Jupyter Notebook ONLY]check the methods/attributes of an object- ENTER . then just PRESS 'tab' example: find the capitalize function
###Code
X.capitalize()
###Output
_____no_output_____
###Markdown
[Jupyter Notebook ONLY]find how to use this function- X.method? then PRESS ENTER. EXTREMELY USEFUL when using a third-party library
###Code
X.capitalize?
###Output
_____no_output_____
###Markdown
range- returns a list of ordered indices- example: get an integer list [0, 1, 2], while in Matlab: 0:2
###Code
range(3)
range(0,3)
###Output
_____no_output_____
###Markdown
- demonstration 2 1.2 Compound sequence types list- designed to be flexible: a dynamic array- the most frequently used type in Python- can be accessed using an index
###Code
a_list = [1,2,'d']
###Output
_____no_output_____
###Markdown
Popular method:- Append
###Code
aa_list = []
for i in range(10):
aa_list.append(i*i)
a_list.append(3)
a_list
###Output
_____no_output_____
###Markdown
list comprehensions - task: obtain a new list containing the type of each element in a_list
###Code
a_list
[type(element) for element in a_list]
###Output
_____no_output_____
###Markdown
tuple- everything is fixed at initialization, nothing can be changed- can be accessed using an index- creating a tuple is faster than creating a list
###Code
a_tuple=(1,2,'d')
###Output
_____no_output_____
###Markdown
set- no duplicate elements- usually used when set operations are involved in your algorithm- no order, so it cannot be accessed using an index- elements can be removed or added freely- checking membership in a set is very fast
###Code
a_set = {1,2,3}
b_set = {1,2,2}  # duplicates collapse: b_set becomes {1, 2}
###Output
_____no_output_____
###Markdown
- set operation
###Code
a_set.union(b_set)
a_set.intersection(b_set)
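# other common set operations (a sketch): a_set - b_set (difference), a_set ^ b_set (symmetric difference)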
###Output
_____no_output_____
###Markdown
sets are much faster for membership tests
###Code
hugelist = range(4000000)
hugetuple = tuple(hugelist)
hugeset = set(hugelist);
%timeit (1000000-1) in hugelist
%timeit (1000000-1) in hugetuple
%timeit (1000000-1) in hugeset
###Output
The slowest run took 39.95 times longer than the fastest. This could mean that an intermediate result is being cached.
10000000 loops, best of 3: 53.7 ns per loop
###Markdown
special note for timeit: -n N, --number=N how many times to execute 'statement'; -r N, --repeat=N how many times to repeat the timer (default 3) dict- a data structure like a dictionary: key-value pairs- extreme expressiveness- one key points to one value- the value can be anything- keys must be hashable, so mutable containers such as lists cannot be keys
###Code
a_dict={1:1, 2:2, 3:'d'}
print a_dict
# in a more verbose way...
a_dict[1] = 1
a_dict[2] = 2
a_dict[3] = 'd'
a_dict
###Output
_____no_output_____
###Markdown
Popular method:- update: merge to dict
###Code
a_dict
b_dict = {11:11}
c_dict = {1:11}
a_dict.update(b_dict)
a_dict.update(c_dict)
a_dict
###Output
_____no_output_____
###Markdown
- get(): checks whether a key exists in the dict, returning None by default if it does not - **very useful for computing a histogram**, much simpler than in C/C++
###Code
# example: count the letter in this string
my_string = "I want to get the counts for each letter in this sentence"
# step 1: create an empty dictionary
counts = {}
# step 2: loop over the string
for letter in my_string:
    counts[letter] = counts.get(letter, 0) + 1
print counts
a = 5
if a == 5:
    print a
###Output
5
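###Markdown
- the same histogram can also be built with collections.defaultdict, which fills in the missing default value automatically; a minimal sketch, not part of the original notebook:
###Code
from collections import defaultdict

counts2 = defaultdict(int)  # missing keys default to 0
for letter in my_string:
    counts2[letter] += 1
print dict(counts2)
###Output
_____no_output_____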
###Markdown
- break/continue
###Code
a_list = [1,2]
for a in a_list:
    if a == 1:
        continue
    print a
a_list = [1,2]
q = 2
while q > 0:
    if a == 1:
        continue  # careful: if a were 1, this would loop forever, because q is never decremented
    print a
    q = q - 1
###Output
2
2
###Markdown
Very important Slicing: a:b -> contains a, up to b-1 - slicing operator [ ] (while Matlab uses ( ))
###Code
print X
###Output
good!
###Markdown
- Python is 0-indexed
###Code
print X[0:2] # here a = 0, b = 2
# [0:2] = [0,2)
print X[:-1]
###Output
good
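###Markdown
- a few more slicing patterns worth knowing, sketched on the same string X:
###Code
print X[1:]    # drop the first character
print X[::2]   # every second character
print X[::-1]  # reversed string
###Output
_____no_output_____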
###Markdown
Useful function zip- create a list of tuples, each tuple is the i-th element of each argument sequence
###Code
a = [1,2]
b = ['a','b']
print zip(a,b)
print zip(a,b,b)
dict(zip(a,b))
###Output
_____no_output_____
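###Markdown
- zip can also "unzip": applying zip to the unpacked list of pairs recovers the original sequences; a minimal sketch:
###Code
pairs = zip(a, b)  # [(1, 'a'), (2, 'b')]
print zip(*pairs)  # [(1, 2), ('a', 'b')]
###Output
_____no_output_____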
###Markdown
- Let's see the magic of object-oriented programming - this combination of Python built-in functions is often seen in pythonic code
###Code
dict([[1,'a'],[2,'b']])
dict(((1,'a'),(2,'b')))
dict({(1,'a'),(2,'b')})
###Output
_____no_output_____
###Markdown
Question: Will the following work?
###Code
dict(({1,'a'},{2,'b'})) # undefined behavior
###Output
_____no_output_____
###Markdown
enumerate - Python's for loop is not designed for index-based looping; use enumerate when you also need the index.
###Code
a_list
for n, item in enumerate(a_list):
    print n, item
###Output
0 1
1 2
2 d
###Markdown
1.3 User-defined Function 1.3.1 classic way to define function
###Code
def square(x):
    q = x**2
    return q
z = square(3)
print z
###Output
9
###Markdown
1.3.2 inline function
###Code
g = lambda x: x**2
print g(3)
###Output
9
###Markdown
1.3.3 map
###Code
a_list
map(lambda x: type(x),a_list)
###Output
_____no_output_____
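###Markdown
- map has two functional companions, filter and reduce (both built-ins in Python 2); a short sketch:
###Code
print filter(lambda x: x % 2 == 0, range(10))  # keep even numbers -> [0, 2, 4, 6, 8]
print reduce(lambda x, y: x + y, range(10))    # running sum -> 45
###Output
_____no_output_____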
###Markdown
1.4 File input/output - Task: convert example.txt file into numpy array
###Code
%cat example.txt
fh=open('example.txt','r')  # open for reading; 'rw' is not a standard mode
data=fh.readlines();
data
l_data=[];
for line in data:
    line=line.strip() # remove whitespace on head and tail
    print line
    print '--'
    line=line.split(',') # split the string to form a list by ','
    print line
    l_data.append(line)
fh.close()
l_data
###Output
_____no_output_____
###Markdown
- convert the list of strings to numbers of int type
###Code
l_data_int=[map(lambda x: int(x),line) for line in l_data]
l_data_int
###Output
_____no_output_____
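###Markdown
- finally, to complete the stated task, convert the nested list of ints into a numpy array; a minimal sketch (numpy is introduced properly in the next section):
###Code
import numpy as np
data_array = np.array(l_data_int)
print data_array
print data_array.shape
###Output
_____no_output_____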
###Markdown
2. *How to work with multiple python files/functions?* 2.1 Import - This is how the Python file knows where to search for another Python file - Python code in one module gains access to the code in another module by the process of importing it - module = file consisting of Python code. Examples: 1. import a whole file - import moduleName - import moduleName as mN 2. import only one function from the file - from moduleName import subFunction_1 Popular modules - numpy - matplotlib - scipy - pandas - sklearn - sys - ... Numpy - basic type: "numpy.ndarray" - for Matlab users: this subclass is more friendly: "numpy.matrixlib.defmatrix.matrix" - a multidimensional array of elements of the same type
###Code
import numpy as np
###Output
_____no_output_____
###Markdown
- usually create numpy array by transforming list to ndarray
###Code
b = np.array([1,2,3])
np.arange(15)
a = np.arange(15).reshape(3, 5)
print a
b.shape
a.shape
a[0]
a[0][0]
a.ndim
a.shape
a.dtype
type(a)
###Output
_____no_output_____
###Markdown
- conditional slicing: VERY POWERFUL
###Code
a
print a>3
print a<11
(a>3)&(a<11)
c=a[(a>3)&(a<11)] # or (a>3)*(a<11)
print c
print c.shape
###Output
[ 4 5 6 7 8 9 10]
(7,)
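###Markdown
- if you need the positions of the matching elements rather than their values, np.where returns the indices for the same boolean mask; a short sketch:
###Code
rows, cols = np.where((a > 3) & (a < 11))
print rows  # row index of each matching element
print cols  # column index of each matching element
###Output
_____no_output_____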
###Markdown
- summation reduces the matrix to a 1D array along the given axis - axis=0: sum over the rows, giving one total per column - axis=1: sum over the columns, giving one total per row
###Code
a
a.sum(axis=0)
###Output
_____no_output_____
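###Markdown
- for comparison, axis=1 reduces along the other direction and returns one sum per row; a minimal sketch:
###Code
a.sum(axis=1)
###Output
_____no_output_____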
###Markdown
- attention: as in C/C++, math functions such as sqrt and exp are not built-in but must be imported
###Code
np.exp(3)
###Output
_____no_output_____
###Markdown
- ATTENTION TO MATLAB USERS - using a space to separate elements (e.g., [1 2 3]) works in Matlab but does not work here - the operator * on ndarray is an element-wise product and produces an ndarray broadcast to the larger shape of the two - ref: https://docs.scipy.org/doc/numpy-dev/user/numpy-for-matlab-users.html
###Code
a=np.array([1,2,3])
b=np.array([4,5,6])
print a*b
print '--'
print a.T*b
print '--'
print a.reshape(1,3)
print '--'
print a.reshape(3,1)*b
# when .reshape, the dimensionality is altered
print 'number of [] means the dimensions'
print a.reshape(1,3)
print '--'
print b.reshape(3,1)
###Output
number of [] means the dimensions
[[1 2 3]]
--
[[4]
[5]
[6]]
###Markdown
in ndarray: * is element-wise product
###Code
a.reshape(3,1)*b.reshape(1,3)
# looks like two vector spans a matrix, but it is not
print a.reshape(1,3)*b.reshape(3,1)
print 'do not return dot product'
a.reshape(3,1)*b.reshape(3,1)
a.reshape(1,3)*b.reshape(1,3)
###Output
_____no_output_____
###Markdown
to ease the pain of doing matrix multiplication with ndarray, one can use the matrix subclass
###Code
## attention: change mb to ma
ma = np.matrix(a)
ma*ma.T
ma = np.mat(a)
ma
ma*ma.transpose()
###Output
_____no_output_____
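###Markdown
- alternatively, np.dot performs true matrix multiplication directly on ndarrays, with no matrix subclass needed; a short sketch using the same a and b vectors:
###Code
print np.dot(a, b)                            # inner product -> 32
print np.dot(a.reshape(3,1), b.reshape(1,3))  # outer product, a 3x3 matrix
###Output
_____no_output_____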
###Markdown
Matplotlib
###Code
from matplotlib import pyplot as plt
%matplotlib inline
### add this line!!!
a=np.arange(4).reshape(2,2)
plt.figure(figsize=(12,8))
plt.plot(a[0],a[1])
plt.title('example figure')
plt.xlabel('x')
plt.ylabel('y')
plt.savefig('./example.png')
###Output
_____no_output_____
###Markdown
3. *How to debug?* pdb - import pdb - pdb.set_trace() useful pdb commands: - n: next line - c: continue to end - l: show current line in the code - r: jump to return - s: step into - p: print - !python command - change the python code on the fly embedded detection in your code - try + exception - An exception is a Python object that represents an error. - common exceptions - https://www.tutorialspoint.com/python/python_exceptions.htm sanity check - assert try + exception - example: ZeroDivisionError situation: we suspect an element of b could be zero, and it leads to an error
###Code
a = 1.0
b = [0, 0.1 ,0.2, 0.3, 0, 32, 0, 3.4]
for b_iter in b:
    try:
        print a/b_iter
    except ZeroDivisionError:
        print '--'
        print 'Divide Zero...Find..'
        print '--'
###Output
--
Divide Zero...Find..
--
10.0
5.0
3.33333333333
--
Divide Zero...Find..
--
0.03125
--
Divide Zero...Find..
--
0.294117647059
###Markdown
assert
###Code
for b_iter in b:
    assert b_iter != 0
###Output
_____no_output_____
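###Markdown
The pdb workflow listed above can be sketched as follows: drop pdb.set_trace() where execution should pause, then step through with the n/c/l/p commands interactively. This is a minimal illustration with a hypothetical buggy_divide helper; the call is left commented out since it blocks waiting for keyboard input.
###Code
import pdb

def buggy_divide(a, b_list):
    for b_iter in b_list:
        pdb.set_trace()  # execution pauses here; type 'n', 'p b_iter', 'c', ...
        print a / b_iter

# buggy_divide(1.0, [0.5, 0])  # uncomment to step through interactively
###Output
_____no_output_____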
###Markdown
4. *How to profile the code?* - time profiling - memory profiling example code
###Code
%cat ../code/speed_profile.py
###Output
#@profile
def main():
    num = 50000000
    s=0;
    for i in range(num):
        s = s + i
    return

def sub():
    num = 50000000
    s = 0
    for i in range(num):
        s = s+i
    return
main()
sub()
###Markdown
time profiling cProfile
###Code
#python -m cProfile speed_profile.py
%run -m cProfile ../code/speed_profile.py
###Output
6 function calls in 5.494 seconds
Ordered by: standard name
ncalls tottime percall cumtime percall filename:lineno(function)
1 0.000 0.000 5.494 5.494 speed_profile.py:2(<module>)
1 2.017 2.017 2.731 2.731 speed_profile.py:2(main)
1 2.079 2.079 2.764 2.764 speed_profile.py:9(sub)
1 0.000 0.000 0.000 0.000 {method 'disable' of '_lsprof.Profiler' objects}
2 1.398 0.699 1.398 0.699 {range}
###Markdown
line profiler and kernprof
###Code
# pip install line_profiler
###Output
_____no_output_____
###Markdown
kernprof -l -v kern_speed_profile.py

Wrote profile results to kern_speed_profile.py.lprof
Timer unit: 1e-06 s

Total time: 26.9222 s
File: kern_speed_profile.py
Function: main at line 1

Line #      Hits         Time  Per Hit   % Time  Line Contents
==============================================================
     1                                           @profile
     2                                           def main():
     3         1            1      1.0      0.0      num = 50000000
     4         1            1      1.0      0.0      s=0;
     5  50000001     13561380      0.3     50.4      for i in range(num):
     6  50000000     13360858      0.3     49.6          s = s + i
     7         1            3      3.0      0.0      return

Total time: 27.0697 s
File: kern_speed_profile.py
Function: sub at line 8

Line #      Hits         Time  Per Hit   % Time  Line Contents
==============================================================
     8                                           @profile
     9                                           def sub():
    10         1            0      0.0      0.0      num = 50000000
    11         1            0      0.0      0.0      s = 0
    12  50000001     13581265      0.3     50.2      for i in range(num):
    13  50000000     13488440      0.3     49.8          s = s+i
    14         1            4      4.0      0.0      return

optimization - replace range with xrange
###Code
%cat ../code/opti_speed_profile.py
%run -m cProfile ../code/opti_speed_profile.py
###Output
4 function calls in 2.980 seconds
Ordered by: standard name
ncalls tottime percall cumtime percall filename:lineno(function)
1 0.000 0.000 2.980 2.980 opti_speed_profile.py:2(<module>)
1 1.514 1.514 1.514 1.514 opti_speed_profile.py:2(main)
1 1.466 1.466 1.466 1.466 opti_speed_profile.py:9(sub)
1 0.000 0.000 0.000 0.000 {method 'disable' of '_lsprof.Profiler' objects}
###Markdown
kernprof -l -v ../code/opti_kern_speed_profile.py
###Code
Wrote profile results to opti_kern_speed_profile.py.lprof
Timer unit: 1e-06 s
Total time: 31.2482 s
File: opti_kern_speed_profile.py
Function: main at line 1
Line # Hits Time Per Hit % Time Line Contents
#==============================================================
1 @profile
2 def main():
3 1 1 1.0 0.0 num = 50000000
4 1 1 1.0 0.0 s=0;
5 50000001 15159694 0.3 48.5 for i in xrange(num):
6 50000000 16088469 0.3 51.5 s = s + i
7 1 1 1.0 0.0 return
Total time: 32.3526 s
File: opti_kern_speed_profile.py
Function: sub at line 8
Line # Hits Time Per Hit % Time Line Contents
#==============================================================
8 @profile
9 def sub():
10 1 1 1.0 0.0 num = 50000000
11 1 1 1.0 0.0 s = 0
12 50000001 15705699 0.3 48.5 for i in xrange(num):
13 50000000 16646855 0.3 51.5 s = s+i
14 1 1 1.0 0.0 return
###Output
_____no_output_____
###Markdown
explanation - range vs xrange: range creates a list, so range(1, 10000000) creates a list in memory with 9999999 elements; xrange is a sequence object that evaluates lazily. memory profiling
###Code
# pip install memory_profiler
#python -m memory_profiler ../code/mem_profiling.py
Filename: mem_profiling.py
Line # Mem usage Increment Line Contents
================================================
1 30.965 MiB 0.000 MiB @profile
2 def main():
3 30.965 MiB 0.000 MiB num = 500000
4 30.965 MiB 0.000 MiB s=0;
5 46.367 MiB 15.402 MiB for i in range(num):
6 46.367 MiB 0.000 MiB s = s + i
7 42.734 MiB -3.633 MiB return
#python -m memory_profiler ../code/opti_mem_profiling.py
Filename: opti_mem_profiling.py
Line # Mem usage Increment Line Contents
================================================
1 30.848 MiB 0.000 MiB @profile
2 def main():
3 30.848 MiB 0.000 MiB num = 500000
4 30.848 MiB 0.000 MiB s=0;
5 30.848 MiB 0.000 MiB for i in xrange(num):
6 30.848 MiB 0.000 MiB s = s + i
7 30.848 MiB 0.000 MiB return
###Output
_____no_output_____
Model_Building/Collaborative_Filtering_ML_ALS_vs_BigDL_NCF_20m.ipynb | ###Markdown
Notebook for Collaborative Filtering with both ALS and NCF models for 20M rows In this notebook, we implement ALS and NCF models for a movie recommendation system on 20 million movie ratings. The 20M reviews dataset contains 20 million reviews made by 138,000 users on 27,000 movies.
###Code
# Intialization
import os
import time
import datetime as dt
import warnings
warnings.filterwarnings("ignore", message="numpy.dtype size changed")
# spark sql imports
from pyspark.sql import SparkSession, SQLContext, Row
from pyspark.sql.functions import UserDefinedFunction, explode, desc, rank, col, row_number
from pyspark.sql.types import *
from pyspark.sql.window import Window
# spark ml imports
from pyspark.ml.recommendation import ALS, ALSModel
from pyspark.ml.linalg import Vectors
from pyspark.ml.tuning import ParamGridBuilder, CrossValidator
from pyspark.ml.evaluation import RegressionEvaluator
# spark bigdl, analytics zoo imports
from zoo.models.recommendation import UserItemFeature
from zoo.models.recommendation import NeuralCF
from zoo.common.nncontext import init_nncontext
from bigdl.nn.criterion import *
from bigdl.optim.optimizer import *
from bigdl.util.common import *
# data science imports
import math
import numpy as np
import pandas as pd
from sklearn import metrics
from operator import itemgetter
data_path = 'hdfs:///user/andrew/'
sc = init_nncontext("NCF Example")
###Output
_____no_output_____
###Markdown
Data Preparation
###Code
# Initialize the SQLContext for reading in parquet files as Spark dataframes
sqlContext = SQLContext(sc)
# Load in the ratings data and format such that it has 3 columns - userId, movieId, rating
# The ratings data will be used for modeling and making recommendations
ratings = sqlContext.read.parquet(data_path + 'ratings_20m')
ratings = ratings.drop('timestamp')
ratings = ratings.withColumn("userId", ratings["userId"].cast("int"))
ratings = ratings.withColumn("rating", ratings["rating"] * 2) #Multiply by 2 so that values are whole numbers -> values 1 to 10
# Load in the movies data and format such that it contains 3 columns - movieId, title, genres
# The movies data will be used in the final step to understand what items have been recommended
movies = sqlContext.read.parquet(data_path + 'movies_20m')
movies = movies.drop('imdbId')
ratings.show(5)
movies.show(5)
ratings_train, ratings_val = ratings.randomSplit([0.8, 0.2], seed = 42)
print('The random split results in %s reviews in the training dataset and %s reviews in the validation dataset.'
% (ratings_train.count(), ratings_val.count()))
ratings_train.take(3)
# Format the training and validation datasets into RDDs of Sample. This is the distributed format
# used in Analytics Zoo and BigDL to speed up processing time.
def build_sample(user_id, item_id, rating):
    sample = Sample.from_ndarray(np.array([user_id, item_id]), np.array([rating]))
    return UserItemFeature(user_id, item_id, sample)
fullPairFeatureRdds = ratings.rdd.map(lambda x: build_sample(x[0], x[1], x[2]))
trainPairFeatureRdds = ratings_train.rdd.map(lambda x: build_sample(x[0], x[1], x[2]))
valPairFeatureRdds = ratings_val.rdd.map(lambda x: build_sample(x[0], x[1], x[2]))
full_rdd = fullPairFeatureRdds.map(lambda pair_feature: pair_feature.sample)
train_rdd = trainPairFeatureRdds.map(lambda pair_feature: pair_feature.sample)
val_rdd = valPairFeatureRdds.map(lambda pair_feature: pair_feature.sample)
# Visualize the first three rows of the training data to better understand what a RDD of Sample looks like.
train_rdd.take(3)
###Output
_____no_output_____
###Markdown
ALS and NCF Model Training and Validation on Training data Train ALS and NCF models and compare the Mean Absolute Error (MAE) for each on the validation set. With the parameter settings below, the ALS model has slightly lower validation error and also takes far less time to train. However, when comparing the training and validation error for each model, the ALS model is more overfit.
###Code
%%time
als = ALS(seed = 42, regParam = 0.1, maxIter = 15, rank = 12,
userCol = "userId", itemCol = "movieId", ratingCol = "rating")
evaluator = RegressionEvaluator(metricName="mae", labelCol="rating",
predictionCol="prediction")
als_model = als.fit(ratings_train)
%%time
print 'Training Error (MAE):', evaluator.evaluate(als_model.transform(ratings_train))
print 'Validation Error (MAE):', evaluator.evaluate(als_model.transform(ratings_val).fillna(0))
# Save ALS model (trained on all 20M reviews)
als_model.write().overwrite().save(path = data_path + 'ALS_Model_test.h5')
als_model_test = ALSModel.load(path = data_path + 'ALS_Model_test.h5')
print 'Training Error (MAE):', evaluator.evaluate(als_model_test.transform(ratings_train))
print 'Validation Error (MAE):', evaluator.evaluate(als_model_test.transform(ratings_val).fillna(0))
%%time
batch_size = 92160
max_user_id = ratings.agg({'userId': 'max'}).collect()[0]['max(userId)']
max_movie_id = ratings.agg({'movieId': 'max'}).collect()[0]['max(movieId)']
ncf = NeuralCF(user_count = max_user_id, item_count = max_movie_id,
class_num = 10, hidden_layers = [20, 10], include_mf = False)
optimizer = Optimizer(
model=ncf,
training_rdd=train_rdd,
criterion=ClassNLLCriterion(),
end_trigger=MaxEpoch(10),
batch_size=batch_size, # 16 executors, 16 cores each
optim_method=Adam(learningrate=0.001))
optimizer.set_validation(
batch_size=batch_size, # 16 executors, 16 cores each
val_rdd=val_rdd,
trigger=EveryEpoch(),
val_method=[MAE(), Loss(ClassNLLCriterion())]
)
optimizer.optimize()
%%time
train_res = ncf.evaluate(train_rdd, batch_size, [MAE()])
val_res = ncf.evaluate(val_rdd, batch_size, [MAE()])
print 'Training Error (MAE):', train_res[0]
print 'Validation Error (MAE):', val_res[0]
# Save NCF model (trained on all 20M reviews)
ncf.save_model(path = data_path + 'NCF_Model_test.bigdl',
weight_path = data_path + 'NCF_Model_test_weights.bin',
over_write = True)
# Load NCF model - compare loaded model results to trained model results
ncf_test = NeuralCF.load_model(path = data_path + 'NCF_Model_test.bigdl',
weight_path = data_path + 'NCF_Model_test_weights.bin')
train_res = ncf_test.evaluate(train_rdd, batch_size, [MAE()])
val_res = ncf_test.evaluate(val_rdd, batch_size, [MAE()])
print 'Training Error (MAE):', train_res[0]
print 'Validation Error (MAE):', val_res[0]
###Output
creating: createMAE
creating: createMAE
Training Error (MAE): Evaluated result: 1.23713171482, total_num: 44580, method: MAE
Validation Error (MAE): Evaluated result: 1.27953600883, total_num: 11238, method: MAE
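###Markdown
ParamGridBuilder and CrossValidator were imported earlier but never used. Below is a minimal, hedged sketch of how they could tune the ALS rank and regParam; the grid values are illustrative assumptions rather than tuned choices, and a full 3-fold search over 20M rows is expensive.
###Code
# Sketch only: grid values below are illustrative assumptions
param_grid = ParamGridBuilder() \
    .addGrid(als.rank, [8, 12]) \
    .addGrid(als.regParam, [0.05, 0.1]) \
    .build()
cv = CrossValidator(estimator=als, estimatorParamMaps=param_grid,
                    evaluator=evaluator, numFolds=3)
# cv_model = cv.fit(ratings_train)  # uncomment to run; consider coldStartStrategy='drop' on the ALS estimator
###Output
_____no_output_____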
###Markdown
ALS and NCF Model Training and Validation on the entire dataset
###Code
%%time
als = ALS(seed = 42, regParam = 0.1, maxIter = 15, rank = 12, # coldStartStrategy = 'drop', # drops userIds/movieIds from the validation set or test set so that NaNs are not returned
userCol = "userId", itemCol = "movieId", ratingCol = "rating")
evaluator = RegressionEvaluator(metricName="mae", labelCol="rating",
predictionCol="prediction")
als_model = als.fit(ratings)
print 'Model Error (MAE):', evaluator.evaluate(als_model.transform(ratings))
# Save ALS model (trained on all 20M reviews)
als_model.write().overwrite().save(path = data_path + 'ALS_Model_20m.h5')
%%time
max_user_id = ratings.agg({'userId': 'max'}).collect()[0]['max(userId)']
max_movie_id = ratings.agg({'movieId': 'max'}).collect()[0]['max(movieId)']
ncf = NeuralCF(user_count=max_user_id, item_count=max_movie_id, class_num=10, hidden_layers=[20, 10], include_mf = False)
optimizer = Optimizer(
model=ncf,
training_rdd=full_rdd,
criterion=ClassNLLCriterion(),
end_trigger=MaxEpoch(10),
batch_size=batch_size, # 16 executors, 16 cores each
optim_method=Adam(learningrate=0.001))
optimizer.optimize()
full_res = ncf.evaluate(full_rdd, batch_size, [MAE()])
print 'Model Error (MAE):', full_res[0]
# Save NCF model (trained on all 20M reviews)
ncf.save_model(path = data_path + 'NCF_Model_20m.bigdl',
weight_path = data_path + 'NCF_Model_20m_weights.bin',
over_write = True)
###Output
_____no_output_____
###Markdown
Predictions Comparison Compare the predictions of ALS and NCF for one specific user. The user id is specified in the cells below
###Code
%%time
# Create a sparse matrix of all combinations of items
ratings_df = ratings.toPandas()
ratings_matrix = ratings_df.pivot(index='userId',columns='movieId',values='rating').fillna(0)
# Melt sparse matrix to dataframe of 3 columns containing userId, movieId, and rating
ratings_matrix['userId'] = ratings_matrix.index
ratings_df_2 = pd.melt(ratings_matrix, id_vars = ['userId'], value_vars = list(ratings_matrix.columns).remove('userId'))
ratings_df_2.columns = ['userId', 'movieId', 'rating']
ratings_df_2.shape
%%time
# Predict for specified user
pred_userId = 25643
# keep only the userId, movieId pairs that do not have ratings
ratings_blanks_df = ratings_df_2.iloc[np.where((ratings_df_2.rating == 0)
& (ratings_df_2.userId == pred_userId))]
# Convert to spark dataframe
ratings_blanks = sqlContext.createDataFrame(ratings_blanks_df)
# Create RDD of Sample from the spark dataframe
blankPairFeatureRdds = ratings_blanks.rdd.map(lambda x: build_sample(x[0], x[1], x[2]))
%%time
als_pair_preds = als_model.transform(ratings_blanks)
ncf_pair_preds = ncf.recommend_for_user(blankPairFeatureRdds, 10).toDF()
als_preds = als_pair_preds.select('userId', 'movieId', 'prediction').toDF('userId', 'movieId', 'als_pred')
ncf_preds_topN = ncf_pair_preds.select('user_id', 'item_id', 'prediction').toDF('userId', 'movieId', 'ncf_pred')
del als_pair_preds, ncf_pair_preds
%%time
window = Window.partitionBy(als_preds['userId']).orderBy(als_preds['als_pred'].desc())
als_preds_topN = als_preds.select(col('*'), row_number().over(window).alias('row_number')).where(col('row_number') <= 10)
als_preds_topN_labeled = als_preds_topN.join(movies, how = 'left', on = 'movieId')
ncf_preds_topN_labeled = ncf_preds_topN.join(movies, how = 'left', on = 'movieId')
als_final = als_preds_topN_labeled.select('userId', 'movieId', 'als_pred', 'title').sort(col("userId")).toPandas()
ncf_final = ncf_preds_topN_labeled.select('userId', 'movieId', 'ncf_pred', 'title').sort(col("userId")).toPandas()
del window, als_preds, als_preds_topN, ncf_preds_topN, als_preds_topN_labeled, ncf_preds_topN_labeled
als_final
ncf_final
###Output
_____no_output_____
2016/tutorial_final/3/PyAlgoTrade-checkpoint.ipynb | ###Markdown
PyAlgoTrade Introduction PyAlgoTrade is a Python algorithmic trading library mainly used to backtest any user-devised strategy. It also provides support for paper trading and live trading. Quantitative estimation of the accuracy of new trading algorithms on historical market data is the prime use of PyAlgoTrade. In this tutorial we will learn how to use PyAlgoTrade, create a simple user strategy, understand the library's features using that strategy, and explore features to backtest and analyse the same strategy. Getting PyAlgoTrade You can install PyAlgoTrade using pip like this:
###Code
!pip install pyalgotrade
###Output
_____no_output_____
###Markdown
Main features: 1. Event driven. 2. Supports Market, Limit, Stop and StopLimit orders. These are essentially how we specify when to sell and buy. 3. Supports Yahoo! Finance, Google Finance and NinjaTrader CSV files. These are the market data feeds. 4. Bitcoin trading support through Bitstamp. 5. Technical indicators and filters like SMA, WMA, EMA, RSI, Bollinger Bands, Hurst exponent and others. 6. Performance metrics like Sharpe ratio, trade and drawdown analysis. 7. Handling Twitter events in realtime. 8. Event profiler.
PyAlgoTrade has 6 main components:
Strategies - classes which contain the business logic of the trading algorithm: buying time, selling time, etc.
Feeds - data-providing abstractions. A feed can be a CSV feed that loads bars into a strategy or a Twitter feed that allows incorporating Twitter events into trading decisions. This is the data the business logic is written against.
Brokers - the executing section which carries out the orders.
DataSeries - an abstraction used to manage time-series data.
Technicals - a set of filters used to make calculations on top of DataSeries, for example SMA (Simple Moving Average), RSI (Relative Strength Index), etc. These filters are modeled as DataSeries decorators.
Optimizer - a set of classes that allow you to distribute backtesting among different computers, or different processes running on the same computer, or a combination of both. They make horizontal scaling easy.
Getting Data The first thing that we'll need to test our strategies is some data. Let's use Microsoft's stock prices for the year 2000, which we'll download with the following command:
###Code
from pyalgotrade.tools import yahoofinance
yahoofinance.download_daily_bars('msft', 2000, 'msft-2000.csv')
# The pyalgotrade.tools.yahoofinance package downloads CSV formatted data from Yahoo! Finance.
# The msft-2000.csv file should look like this:
# Date,Open,High,Low,Close,Volume,Adj Close
# 2000-12-29,30.87,31.31,28.69,29.06,31655500,28.35
# 2000-12-28,30.56,31.12,30.37,31.06,25055600,30.30
# 2000-12-27,30.37,31.06,29.37,30.69,26441700,29.94
# .
# .
# 2000-01-04,115.50,118.62,105.00,107.69,116850000,26.26
# 2000-01-03,124.62,125.19,111.62,118.12,98122000,28.81
###Output
_____no_output_____
###Markdown
Creating first strategy Presented below is an illustration of the simple moving average algorithm. This serves the purpose of understanding the architecture and flow of the PyAlgoTrade module. PyAlgoTrade provides us with a skeleton of a base strategy in the form of the BaseStrategy class. The class has the following format: class pyalgotrade.strategy.BaseStrategy(barFeed, broker) Parameters: barFeed (pyalgotrade.barfeed.BaseBarFeed.) – The bar feed that will supply the bars. broker (pyalgotrade.broker.Broker.) – The broker that will handle orders. Methods: onBars(bars) Override (mandatory) to get notified when new bars are available. The default implementation raises an Exception. This is the method to override to enter your trading logic and enter/exit positions. Parameters: bars (pyalgotrade.bar.Bars.) – The current bars. run() Call once (and only once) to run the strategy. stop() Stops a running strategy. onStart() Override (optional) to get notified when the strategy starts executing. The default implementation is empty. onFinish(bars) Override (optional) to get notified when the strategy finished executing. The default implementation is empty. Parameters: bars (pyalgotrade.bar.Bars.) – The last bars processed. These are the basic functions in the BaseStrategy class, which also contains a bunch of other functions that we will cover as we go ahead. There is also a class called BacktestingStrategy, explicitly inherited from BaseStrategy and useful for backtesting a trading logic.
###Code
from pyalgotrade import strategy
from pyalgotrade.barfeed import yahoofeed
from pyalgotrade.technical import ma
class FirstStrategy(strategy.BacktestingStrategy):
    def __init__(self, feed, instrument):
        super(FirstStrategy, self).__init__(feed)
        self.__sma = ma.SMA(feed[instrument].getCloseDataSeries(), 10)
        self.__instrument = instrument

    def onBars(self, bars):
        bar = bars[self.__instrument]
        self.info("%s %s" % (bar.getClose(), self.__sma[-1]))
###Output
_____no_output_____
###Markdown
To test our strategy let us load the data for Microsoft and check the output.
###Code
feed = yahoofeed.Feed()
feed.addBarsFromCSV("msft", "msft-2000.csv")
myStrategy = FirstStrategy(feed, "msft")
myStrategy.run()
###Output
_____no_output_____
###Markdown
In the above code we declare a new strategy. We need to override the onBars callback, which gets fired whenever new bars are available from the feed. We load the feed from a CSV file and then run it with our strategy, which for now just prints the closing prices and simple moving averages. To get a brief insight into the short and long positions of trading and the different types of orders which can be placed, here is a good start; this will help us understand the APIs better. http://www.investopedia.com/ask/answers/100314/whats-difference-between-long-and-short-position-market.asp http://www.investopedia.com/university/intro-to-order-types/limit-orders.asp Let's move on with a simple strategy, this time simulating actual trading. The idea is very simple: If the adjusted close price is above the SMA() for the given period, we enter a long position (we place a buy market order). If a long position is in place and the adjusted close price drops below the SMA(), we exit the long position (we place a sell market order). We are defining the rules of trading, our buying and selling strategies based on SMA values, in this part of the tutorial. This is just an illustration; of course there are trading algorithms which make use of much fancier parameters and do a bunch of computations before deciding.
###Code
from pyalgotrade import strategy
from pyalgotrade.barfeed import yahoofeed
from pyalgotrade.technical import ma
class FirstStrategy(strategy.BacktestingStrategy):
    def __init__(self, feed, instrument, smaPeriod):
        super(FirstStrategy, self).__init__(feed, 1000)
        self.__position = None
        self.__instrument = instrument
        self.setUseAdjustedValues(True)
        self.__sma = ma.SMA(feed[instrument].getPriceDataSeries(), smaPeriod)

    def getSMA(self):
        return self.__sma

    def onEnterOk(self, position):
        execInfo = position.getEntryOrder().getExecutionInfo()
        self.info("BUY at $%.2f" % (execInfo.getPrice()))

    def onEnterCanceled(self, position):
        self.__position = None

    def onExitOk(self, position):
        execInfo = position.getExitOrder().getExecutionInfo()
        self.info("SELL at $%.2f" % (execInfo.getPrice()))
        self.__position = None

    def onExitCanceled(self, position):
        # The exit order was canceled; re-submit it.
        self.__position.exitMarket()

    def onBars(self, bars):
        if self.__sma[-1] is None:
            # Wait until there are enough bars to compute the SMA.
            return
        bar = bars[self.__instrument]
        if self.__position is None:
            if bar.getPrice() > self.__sma[-1]:
                self.__position = self.enterLong(self.__instrument, 10, True)
        elif bar.getPrice() < self.__sma[-1] and not self.__position.exitActive():
            self.__position.exitMarket()


def run_strategy(feed, instrument, smaPeriod):
    first_strategy = FirstStrategy(feed, instrument, smaPeriod)
    first_strategy.run()
    print "Final value: $%.2f" % first_strategy.getBroker().getEquity()
    return first_strategy

feed = yahoofeed.Feed()
feed.addBarsFromCSV("msft", "msft-2000.csv")
first_strategy = run_strategy(feed, "msft", 10)
###Output
2000-01-18 00:00:00 strategy [INFO] BUY at $38.22
2000-01-20 00:00:00 strategy [INFO] SELL at $36.59
2000-02-02 00:00:00 strategy [INFO] BUY at $35.01
2000-02-03 00:00:00 strategy [INFO] SELL at $34.88
2000-02-04 00:00:00 strategy [INFO] BUY at $35.67
2000-02-14 00:00:00 strategy [INFO] SELL at $34.60
2000-03-06 00:00:00 strategy [INFO] BUY at $32.81
2000-03-07 00:00:00 strategy [INFO] SELL at $32.86
2000-03-08 00:00:00 strategy [INFO] BUY at $32.06
2000-03-15 00:00:00 strategy [INFO] SELL at $32.32
2000-03-20 00:00:00 strategy [INFO] BUY at $33.75
2000-03-31 00:00:00 strategy [INFO] SELL at $36.23
2000-04-03 00:00:00 strategy [INFO] BUY at $32.28
2000-04-04 00:00:00 strategy [INFO] SELL at $31.30
2000-05-02 00:00:00 strategy [INFO] BUY at $24.89
2000-05-03 00:00:00 strategy [INFO] SELL at $24.05
2000-05-08 00:00:00 strategy [INFO] BUY at $24.25
2000-05-09 00:00:00 strategy [INFO] SELL at $23.99
2000-05-16 00:00:00 strategy [INFO] BUY at $23.78
2000-05-18 00:00:00 strategy [INFO] SELL at $23.26
2000-06-02 00:00:00 strategy [INFO] BUY at $22.56
2000-06-30 00:00:00 strategy [INFO] SELL at $26.34
2000-07-03 00:00:00 strategy [INFO] BUY at $27.24
2000-07-06 00:00:00 strategy [INFO] SELL at $26.96
2000-07-07 00:00:00 strategy [INFO] BUY at $27.78
2000-07-11 00:00:00 strategy [INFO] SELL at $26.94
2000-07-13 00:00:00 strategy [INFO] BUY at $26.94
2000-07-17 00:00:00 strategy [INFO] SELL at $26.75
2000-08-04 00:00:00 strategy [INFO] BUY at $23.73
2000-08-07 00:00:00 strategy [INFO] SELL at $23.99
2000-08-08 00:00:00 strategy [INFO] BUY at $23.95
2000-08-17 00:00:00 strategy [INFO] SELL at $24.31
2000-08-29 00:00:00 strategy [INFO] BUY at $24.33
2000-08-30 00:00:00 strategy [INFO] SELL at $24.16
2000-10-20 00:00:00 strategy [INFO] BUY at $20.96
2000-11-13 00:00:00 strategy [INFO] SELL at $22.79
2000-11-16 00:00:00 strategy [INFO] BUY at $23.73
2000-11-17 00:00:00 strategy [INFO] SELL at $23.73
2000-11-27 00:00:00 strategy [INFO] BUY at $24.42
2000-11-29 00:00:00 strategy [INFO] SELL at $22.84
2000-12-13 00:00:00 strategy [INFO] BUY at $20.68
2000-12-15 00:00:00 strategy [INFO] SELL at $17.45
###Markdown
Technicals: Technicals will return None when the value can't be calculated at a given time. Technicals can be cascaded; that is because they're modeled as DataSeries as well. The example below combines RSI and SMA filters. These are parameters which will be used in decision making.
###Code
# Fragment for illustration: assumes a strategy class context and
# the `from pyalgotrade.technical import ma, rsi` imports.
def __init__(self, feed, instrument):
    super(MyStrategy, self).__init__(feed)
    self.__rsi = rsi.RSI(feed[instrument].getCloseDataSeries(), 14)
    self.__sma = ma.SMA(self.__rsi, 15)
    self.__instrument = instrument
###Output
_____no_output_____
###Markdown
Optimization Trading algorithms are computationally intensive considering the volume of data they work on. More importantly, they need to be very fast at processing the data and producing results. Also, depending on the strategy we choose and the parameters we use for it, there can be an enormous number of possibilities (of the order of 10^6), and we would want to test them all on the entire dataset. This is when we think of parallel execution. Fortunately, PyAlgoTrade has an option to parallelize our algorithm by setting up a server, which manages the intense computation by distributing it across multiple workers. The server is configured to test a strategy for different sets of parameter combinations and waits for worker processes to subscribe for some load. One or more workers (other machines) can subscribe to the server, which assigns each a part of the computation (say, some subset of the parameter range). Once the workers have completed their computation, they share their results with the server, which aggregates them and filters out the best combination for the chosen strategy.
###Code
from pyalgotrade.tools import yahoofinance
yahoofinance.download_daily_bars('msft', 2009, 'msft-2009.csv')
yahoofinance.download_daily_bars('msft', 2010, 'msft-2010.csv')
yahoofinance.download_daily_bars('msft', 2011, 'msft-2011.csv')
import itertools
from pyalgotrade.technical import ma
from pyalgotrade.optimizer import server
from pyalgotrade.technical import rsi
def parameters_generator():
    instrument = ["msft"]  # must be a list; itertools.product over a bare string would iterate its characters
    rsiPeriod = range(2, 11)
    entrySMA = range(150, 251)
    exitSMA = range(5, 16)
    return itertools.product(instrument, entrySMA, exitSMA, rsiPeriod)
feed = yahoofeed.Feed()
feed.addBarsFromCSV("msft", "msft-2009.csv")
feed.addBarsFromCSV("msft", "msft-2010.csv")
feed.addBarsFromCSV("msft", "msft-2011.csv")
server.serve(feed, parameters_generator(), 'localhost', 5000)
###Output
_____no_output_____
###Markdown
The above sections of code download three years' worth of data and configure a server that generates 101x11x9 = 9999 possible parameter combinations to be tested on that data. The server waits on port 5000 for active workers, which request a subset of these parameters for the strategy they are testing.
###Code
from pyalgotrade.optimizer import worker
worker.run(FooStrategy, 'localhost', 5000) #FooStrategy is just a place holder
###Output
_____no_output_____
###Markdown
The above code registers the worker with the above created server for some strategy called 'FooStrategy'. Analyzing a strategyStrategy analyzers provide an extensible way to attach different calculations to strategy executions. It surfaces routines to extract profit/loss statements, commissions, evaluate returns using which we could converge to optimal levels.Different investors use moving averages for different reasons. Some use them as their primary analytical tool, while others simply use them as a confidence builder to back up their investment decisions. A crossover is the most basic type of signal and is favored among many traders because it removes all emotion. The most basic type of crossover is when the price of an asset moves from one side of a moving average and closes on the other. Price crossovers are used by traders to identify shifts in momentum and can be used as a basic entry or exit strategy.
###Code
from pyalgotrade import strategy
from pyalgotrade.technical import ma
from pyalgotrade.technical import cross
class SMACrossOver(strategy.BacktestingStrategy):
    def __init__(self, feed, instrument, smaPeriod):
        super(SMACrossOver, self).__init__(feed)
        self.__instrument = instrument
        self.__position = None
        self.setUseAdjustedValues(True)
        self.__prices = feed[instrument].getPriceDataSeries()
        self.__sma = ma.SMA(self.__prices, smaPeriod)

    def getSMA(self):
        return self.__sma

    def onEnterCanceled(self, position):
        self.__position = None

    def onExitOk(self, position):
        self.__position = None

    def onExitCanceled(self, position):
        self.__position.exitMarket()

    def onBars(self, bars):
        if self.__position is None:
            if cross.cross_above(self.__prices, self.__sma) > 0:
                # compute the number of shares with which to enter a long position (75% of available cash)
                shares = int(self.getBroker().getCash() * 0.75 / bars[self.__instrument].getPrice())
                self.__position = self.enterLong(self.__instrument, shares, True)
        elif not self.__position.exitActive() and cross.cross_below(self.__prices, self.__sma) > 0:
            self.__position.exitMarket()
from pyalgotrade.barfeed import yahoofeed
from pyalgotrade.stratanalyzer import returns
from pyalgotrade.stratanalyzer import sharpe
from pyalgotrade.stratanalyzer import drawdown
from pyalgotrade.stratanalyzer import trades
feed = yahoofeed.Feed()
feed.addBarsFromCSV("msft", "msft-2000.csv")
myStrategy = SMACrossOver(feed, "msft", 20)
retAnalyzer = returns.Returns()
myStrategy.attachAnalyzer(retAnalyzer)
drawDownAnalyzer = drawdown.DrawDown()
myStrategy.attachAnalyzer(drawDownAnalyzer)
tradesAnalyzer = trades.Trades()
myStrategy.attachAnalyzer(tradesAnalyzer)
myStrategy.run()
print "Final portfolio value: " + str(myStrategy.getResult())
print "Cumulative returns: " + str(retAnalyzer.getCumulativeReturns()[-1] * 100)
print "Longest drawdown duration: " + str((drawDownAnalyzer.getLongestDrawDownDuration()))
###Output
_____no_output_____
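###Markdown
The sharpe module was imported earlier but never attached. Below is a minimal sketch of adding a Sharpe ratio analyzer; analyzers must be attached before run(), so a fresh feed and strategy are created, and the 5% risk-free rate is an illustrative assumption.
###Code
feed2 = yahoofeed.Feed()
feed2.addBarsFromCSV("msft", "msft-2000.csv")
sharpeStrategy = SMACrossOver(feed2, "msft", 20)
sharpeAnalyzer = sharpe.SharpeRatio()
sharpeStrategy.attachAnalyzer(sharpeAnalyzer)  # attach before run()
sharpeStrategy.run()
print "Sharpe ratio:", sharpeAnalyzer.getSharpeRatio(0.05)  # 0.05 = assumed risk-free rate
###Output
_____no_output_____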
###Markdown
Visualization PyAlgoTrade also provides a plotter to capture the changes in any kind of metric. When the code below is run, a plotter window pops up with the requested graphs. It also has options to zoom in, copy, etc. for a thorough analysis.
###Code
from pyalgotrade import plotter
from pyalgotrade.stratanalyzer import returns
returnsAnalyzer = returns.Returns()
first_strategy.run()
first_strategy.info("Final portfolio value: $%.2f" % first_strategy.getResult())
first_strategy.attachAnalyzer(returnsAnalyzer)
plt = plotter.StrategyPlotter(first_strategy)
plt.getInstrumentSubplot("msft").addDataSeries("SMA", first_strategy.getSMA())
plt.getOrCreateSubplot("returns").addDataSeries("Simple returns", returnsAnalyzer.getReturns())
plt.plot()
###Output
_____no_output_____
###Markdown
Please note that the visualizer is still in a raw form. Since this library is new and undergoing changes, there might be cases where the visualizer does not pick up data. The workaround code for that is very straightforward: we can just take the SMA values from first_strategy.getSMA() and use matplotlib to generate our own visualizations. The graph might appear as a pop-up window, so please check for that.
###Code
import matplotlib.pyplot as plt
smas = []
for sma in first_strategy.getSMA():
    if sma is not None:
        smas.append(sma)
plt.hist(smas, 50, normed=1, facecolor='green', alpha=0.75)
plt.xlabel('sma values')
plt.show()
###Output
_____no_output_____
resources/useful_repos/ORIGINAL_intuitive-deep-learning-master/Part 2: Image Recognition CIFAR-10/Coding Companion to Intuitive Deep Learning Part 2 (Annotated).ipynb | ###Markdown
Coding Companion for Intuitive Deep Learning Part 2 (Annotated) The medium post for this notebook is [here](https://medium.com/@josephleeweien/build-your-first-convolutional-neural-network-to-recognize-images-84b9c78fe0ce).In this notebook, we'll go through the code for the coding companion for [Intuitive Deep Learning Part 2](https://medium.com/intuitive-deep-learning/intuitive-deep-learning-part-2-cnns-for-computer-vision-24992d050a27) to create your very first Convolutional neural network to predict what is contained within the image (airplane, automobile, bird, cat, deer, dog, frog, horse, ship, and truck). We will go through the following in this notebook:- Exploring and Processing the Data- Building and Training our Convolutional Neural Network- Testing out with your own imagesNote that the results you get might differ slightly from the blogpost as there is a degree of randomness in the way we split our dataset as well as the initialization of our neural network. Exploring and Processing the Data We will first have to download our dataset, CIFAR-10. The details of the dataset are as follows:- Images to be recognized: Tiny images of 32 * 32 pixels- Labels: 10 possible labels (airplane, automobile, bird, cat, deer, dog, frog, horse, ship, and truck)- Dataset size: 60000 images, split into 50000 for training and 10000 for testing
###Code
from keras.datasets import cifar10
(x_train, y_train), (x_test, y_test) = cifar10.load_data()
print('x_train shape:', x_train.shape)
print('y_train shape:', y_train.shape)
###Output
y_train shape: (50000, 1)
###Markdown
We will now take a look at an individual image. If we print out the first image of our training dataset (x_train[0]):
###Code
print(x_train[0])
###Output
[[[ 59 62 63]
[ 43 46 45]
[ 50 48 43]
...
[158 132 108]
[152 125 102]
[148 124 103]]
[[ 16 20 20]
[ 0 0 0]
[ 18 8 0]
...
[123 88 55]
[119 83 50]
[122 87 57]]
[[ 25 24 21]
[ 16 7 0]
[ 49 27 8]
...
[118 84 50]
[120 84 50]
[109 73 42]]
...
[[208 170 96]
[201 153 34]
[198 161 26]
...
[160 133 70]
[ 56 31 7]
[ 53 34 20]]
[[180 139 96]
[173 123 42]
[186 144 30]
...
[184 148 94]
[ 97 62 34]
[ 83 53 34]]
[[177 144 116]
[168 129 94]
[179 142 87]
...
[216 184 140]
[151 118 84]
[123 92 72]]]
###Markdown
In order to see the image as an image rather than a series of pixel value numbers, we will use a function from matplotlib:
###Code
import matplotlib.pyplot as plt
%matplotlib inline
img = plt.imshow(x_train[0])
print('The label is:', y_train[0])
###Output
The label is: [6]
###Markdown
Let's explore one more image, the second image (with index 1 instead of 0) in our training dataset:
###Code
img = plt.imshow(x_train[1])
print('The label is:', y_train[1])
###Output
The label is: [9]
###Markdown
What we really want is the probability of each of the 10 different classes. For that, we need 10 output neurons in our neural network. Since we have 10 output neurons, our labels must match this as well. To do this, we convert the label into a set of 10 numbers where each number represents if the image belongs to that class or not. So if an image belongs to the first class, the first number of this set will be a 1 and all other numbers in this set will be a 0. To convert our labels to our one-hot encoding, we use a function in Keras:
###Code
import keras
y_train_one_hot = keras.utils.to_categorical(y_train, 10)
y_test_one_hot = keras.utils.to_categorical(y_test, 10)
print('The one hot label is:', y_train_one_hot[1])
###Output
The one hot label is: [0. 0. 0. 0. 0. 0. 0. 0. 0. 1.]
###Markdown
A common preprocessing step is to scale the values to be between 0 and 1, which aids the training of our neural network. Since our pixel values already lie between 0 and 255, we simply need to divide by 255.
###Code
x_train = x_train.astype('float32')
x_test = x_test.astype('float32')
x_train = x_train / 255
x_test = x_test / 255
x_train[0]
###Output
_____no_output_____
###Markdown
Building and Training our Convolutional Neural Network Similar to our first notebook, we need to define the architecture (template) first before fitting the best numbers into this architecture by learning from the data. In summary, the architecture we will build in this post is this:
- Conv Layer (Filter size 3x3, Depth 32)
- Conv Layer (Filter size 3x3, Depth 32)
- Max Pool Layer (Filter size 2x2)
- Dropout Layer (Prob of dropout 0.25)
- Conv Layer (Filter size 3x3, Depth 64)
- Conv Layer (Filter size 3x3, Depth 64)
- Max Pool Layer (Filter size 2x2)
- Dropout Layer (Prob of dropout 0.25)
- FC Layer (512 neurons)
- Dropout Layer (Prob of dropout 0.5)
- FC Layer, Softmax (10 neurons)
For an intuition behind these layers, please refer to Intuitive Deep Learning [Part 2](https://medium.com/intuitive-deep-learning/intuitive-deep-learning-part-2-cnns-for-computer-vision-24992d050a27). We will be using Keras to build our architecture. Let's import the code from Keras that we will need to use:
###Code
from keras.models import Sequential
from keras.layers import Dense, Dropout, Flatten, Conv2D, MaxPooling2D
###Output
_____no_output_____
###Markdown
We then call an empty Sequential model and 'add' to this model layer by layer:
###Code
model = Sequential()
###Output
_____no_output_____
###Markdown
The first layer is a conv layer with filter size 3x3, stride size 1 (in both dimensions), and depth 32. The padding is the 'same' and the activation is 'relu' (these two settings will apply to all layers in our CNN). We add this layer to our empty sequential model using the function model.add().The first number 32 refers to the depth. The next pair of numbers (3,3) refer to the filter width and size. Then, we specify activation which is 'relu' and padding which is 'same'. Notice that we did not specify stride. This is because stride=1 is a default setting, and unless we want to change this setting, we need not specify it.If you recall, we also need to specify an input size for our first layer; subsequent layers does not have this specification since they can infer the input size from the output size of the previous layer.All that being said, our first layer in code looks like this:
###Code
model.add(Conv2D(32, (3, 3), activation='relu', padding='same', input_shape=(32,32,3)))
###Output
_____no_output_____
###Markdown
Our second layer looks like this in code (we don't need to specify the input size):
###Code
model.add(Conv2D(32, (3, 3), activation='relu', padding='same'))
###Output
_____no_output_____
###Markdown
The next layer is a max pooling layer with pool size 2 x 2 and stride 2 (in both dimensions). The default for a max pooling layer stride is the pool size, so we don't have to specify the stride:
###Code
model.add(MaxPooling2D(pool_size=(2, 2)))
###Output
_____no_output_____
###Markdown
Lastly, we add a dropout layer with probability 0.25 of dropout so as to prevent overfitting:
###Code
model.add(Dropout(0.25))
###Output
_____no_output_____
###Markdown
And there we have it, our first four layers in code. The next four layers look really similar (except the depth of the conv layer is 64 instead of 32):
###Code
model.add(Conv2D(64, (3, 3), activation='relu', padding='same'))
model.add(Conv2D(64, (3, 3), activation='relu', padding='same'))
model.add(MaxPooling2D(pool_size=(2, 2)))
model.add(Dropout(0.25))
###Output
_____no_output_____
###Markdown
Lastly, we have to code in our fully connected layer, which is similar to what we've done in our previous post, [Build your first Neural Network](https://medium.com/intuitive-deep-learning/build-your-first-neural-network-to-predict-house-prices-with-keras-eb5db60232c). However, at this point, our neurons are spatially arranged in a cube-like format rather than in just one row. To make this cube-like format of neurons into one row, we have to first flatten it. We do so by adding a Flatten layer:
###Code
model.add(Flatten())
###Output
_____no_output_____
###Markdown
Now, we have a dense (FC) layer of 512 neurons with relu activation:
###Code
model.add(Dense(512, activation='relu'))
###Output
_____no_output_____
###Markdown
We add another dropout of probability 0.5:
###Code
model.add(Dropout(0.5))
###Output
_____no_output_____
###Markdown
And lastly, we have a dense (FC) layer with 10 neurons and softmax activation:
###Code
model.add(Dense(10, activation='softmax'))
###Output
_____no_output_____
###Markdown
And we're done with specifying our architecture! To see a summary of the full architecture, we run the code:
###Code
model.summary()
###Output
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
conv2d_1 (Conv2D) (None, 32, 32, 32) 896
_________________________________________________________________
conv2d_2 (Conv2D) (None, 32, 32, 32) 9248
_________________________________________________________________
max_pooling2d_1 (MaxPooling2 (None, 16, 16, 32) 0
_________________________________________________________________
dropout_1 (Dropout) (None, 16, 16, 32) 0
_________________________________________________________________
conv2d_3 (Conv2D) (None, 16, 16, 64) 18496
_________________________________________________________________
conv2d_4 (Conv2D) (None, 16, 16, 64) 36928
_________________________________________________________________
max_pooling2d_2 (MaxPooling2 (None, 8, 8, 64) 0
_________________________________________________________________
dropout_2 (Dropout) (None, 8, 8, 64) 0
_________________________________________________________________
flatten_1 (Flatten) (None, 4096) 0
_________________________________________________________________
dense_1 (Dense) (None, 512) 2097664
_________________________________________________________________
dropout_3 (Dropout) (None, 512) 0
_________________________________________________________________
dense_2 (Dense) (None, 10) 5130
=================================================================
Total params: 2,168,362
Trainable params: 2,168,362
Non-trainable params: 0
_________________________________________________________________
###Markdown
We now fill in the best numbers after we've specified our architecture. We'll compile the model with our settings below.The loss function we use is called categorical cross entropy, which is applicable for a classification problem of many classes. The optimizer we use here is Adam. We haven't gone through the intuition of Adam yet, but know that Adam is simply a type of stochastic gradient descent (with a few modifications) so that it trains better. Lastly, we want to track the accuracy of our model.
###Code
model.compile(loss='categorical_crossentropy',
optimizer='adam',
metrics=['accuracy'])
###Output
_____no_output_____
###Markdown
And now, it's time to run our training.We train our model with batch size 32 and 20 epochs. We use the setting validation_split=0.2 instead of validation_data. With this shortcut, we did not need to split our dataset into a train and validation set at the start! Instead, we simply specify how much of our dataset will be used as a validation set. In this case, 20% of our dataset is used as a validation set. This will take a while on a CPU, so you might want to start training and get some coffee before coming back.
###Code
hist = model.fit(x_train, y_train_one_hot,
batch_size=32, epochs=20,
validation_split=0.2)
###Output
Train on 40000 samples, validate on 10000 samples
Epoch 1/20
40000/40000 [==============================] - 256s 6ms/step - loss: 1.5844 - acc: 0.4176 - val_loss: 1.1586 - val_acc: 0.5848
Epoch 2/20
40000/40000 [==============================] - 263s 7ms/step - loss: 1.1519 - acc: 0.5897 - val_loss: 0.9885 - val_acc: 0.6494
Epoch 3/20
40000/40000 [==============================] - 259s 6ms/step - loss: 0.9921 - acc: 0.6502 - val_loss: 0.8804 - val_acc: 0.6901
Epoch 4/20
40000/40000 [==============================] - 250s 6ms/step - loss: 0.8872 - acc: 0.6847 - val_loss: 0.8371 - val_acc: 0.6995
Epoch 5/20
40000/40000 [==============================] - 251s 6ms/step - loss: 0.8172 - acc: 0.7109 - val_loss: 0.7716 - val_acc: 0.7261
Epoch 6/20
40000/40000 [==============================] - 251s 6ms/step - loss: 0.7544 - acc: 0.7335 - val_loss: 0.7429 - val_acc: 0.7422
Epoch 7/20
40000/40000 [==============================] - 251s 6ms/step - loss: 0.7086 - acc: 0.7504 - val_loss: 0.7441 - val_acc: 0.7477
Epoch 8/20
40000/40000 [==============================] - 251s 6ms/step - loss: 0.6676 - acc: 0.7639 - val_loss: 0.7214 - val_acc: 0.7492
Epoch 9/20
40000/40000 [==============================] - 250s 6ms/step - loss: 0.6327 - acc: 0.7776 - val_loss: 0.7185 - val_acc: 0.7555
Epoch 10/20
40000/40000 [==============================] - 248s 6ms/step - loss: 0.6016 - acc: 0.7888 - val_loss: 0.6891 - val_acc: 0.7656
Epoch 11/20
40000/40000 [==============================] - 249s 6ms/step - loss: 0.5660 - acc: 0.7996 - val_loss: 0.6867 - val_acc: 0.7626
Epoch 12/20
40000/40000 [==============================] - 248s 6ms/step - loss: 0.5476 - acc: 0.8064 - val_loss: 0.6849 - val_acc: 0.7698
Epoch 13/20
40000/40000 [==============================] - 248s 6ms/step - loss: 0.5316 - acc: 0.8115 - val_loss: 0.6887 - val_acc: 0.7678
Epoch 14/20
40000/40000 [==============================] - 248s 6ms/step - loss: 0.5002 - acc: 0.8246 - val_loss: 0.6931 - val_acc: 0.7731
Epoch 15/20
40000/40000 [==============================] - 245s 6ms/step - loss: 0.4917 - acc: 0.8246 - val_loss: 0.7365 - val_acc: 0.7660
Epoch 16/20
40000/40000 [==============================] - 248s 6ms/step - loss: 0.4690 - acc: 0.8374 - val_loss: 0.7153 - val_acc: 0.7693
Epoch 17/20
40000/40000 [==============================] - 245s 6ms/step - loss: 0.4592 - acc: 0.8377 - val_loss: 0.6857 - val_acc: 0.7755
Epoch 18/20
40000/40000 [==============================] - 248s 6ms/step - loss: 0.4519 - acc: 0.8416 - val_loss: 0.6918 - val_acc: 0.7741
Epoch 19/20
40000/40000 [==============================] - 246s 6ms/step - loss: 0.4330 - acc: 0.8461 - val_loss: 0.6926 - val_acc: 0.7739
Epoch 20/20
40000/40000 [==============================] - 246s 6ms/step - loss: 0.4242 - acc: 0.8493 - val_loss: 0.7026 - val_acc: 0.7785
###Markdown
After you've done training, we can visualize the model training and validation loss as well as training / validation accuracy over the number of epochs using the below code:
###Code
plt.plot(hist.history['loss'])
plt.plot(hist.history['val_loss'])
plt.title('Model loss')
plt.ylabel('Loss')
plt.xlabel('Epoch')
plt.legend(['Train', 'Val'], loc='upper right')
plt.show()
plt.plot(hist.history['acc'])
plt.plot(hist.history['val_acc'])
plt.title('Model accuracy')
plt.ylabel('Accuracy')
plt.xlabel('Epoch')
plt.legend(['Train', 'Val'], loc='lower right')
plt.show()
###Output
_____no_output_____
###Markdown
Once we are done with tweaking our hyperparameters, we can run it on our test dataset below:
###Code
model.evaluate(x_test, y_test_one_hot)[1]
###Output
10000/10000 [==============================] - 14s 1ms/step
###Markdown
At this point, you might want to save your trained model (since you've spent so long waiting for it to train). The model will be saved in a file format called HDF5 (with the extension .h5). We save our model with this line of code:
###Code
model.save('my_cifar10_model.h5')
###Output
_____no_output_____
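###Markdown
Later (or in a new session), the saved model can be restored with Keras' load_model; a short sketch:
###Code
from keras.models import load_model
model = load_model('my_cifar10_model.h5')  # restores architecture, weights and optimizer state
###Output
_____no_output_____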
###Markdown
Testing out with your own images Now that we have a model, let's try it on our own images. To do so, place your image in the same directory as your notebook. For the purposes of this post, I'm going to use an image of a cat (which you can download here(link)). Now, we read in our JPEG file as an array of pixel values:
###Code
my_image = plt.imread("cat.jpg")
###Output
_____no_output_____
###Markdown
The first thing we have to do is to resize the image of our cat so that we can fit it into our model (input size of 32 * 32 * 3).
###Code
from skimage.transform import resize
my_image_resized = resize(my_image, (32,32,3))
img = plt.imshow(my_image_resized)
###Output
_____no_output_____
###Markdown
And now, we see what our trained model will output when given an image of our cat, using this code:
###Code
import numpy as np
probabilities = model.predict(np.array( [my_image_resized,] ))
probabilities
number_to_class = ['airplane', 'automobile', 'bird', 'cat', 'deer', 'dog', 'frog', 'horse', 'ship', 'truck']
index = np.argsort(probabilities[0,:])
print("Most likely class:", number_to_class[index[9]], "-- Probability:", probabilities[0,index[9]])
print("Second most likely class:", number_to_class[index[8]], "-- Probability:", probabilities[0,index[8]])
print("Third most likely class:", number_to_class[index[7]], "-- Probability:", probabilities[0,index[7]])
print("Fourth most likely class:", number_to_class[index[6]], "-- Probability:", probabilities[0,index[6]])
print("Fifth most likely class:", number_to_class[index[5]], "-- Probability:", probabilities[0,index[5]])
###Output
Most likely class: cat -- Probability: 0.31140402
Second most likely class: horse -- Probability: 0.296455
Third most likely class: dog -- Probability: 0.1401798
Fourth most likely class: truck -- Probability: 0.12088975
Fifth most likely class: frog -- Probability: 0.078746535
jupyter_notebooks/Testing_dataset_unet.ipynb | ###Markdown
Importing and Installing libraries
###Code
# Install required libs
### please update Albumentations to version>=0.3.0 for `Lambda` transform support
!pip install -U albumentations>=0.3.0 --user
!pip install -U --pre segmentation-models --user
# This installation command resolves issues with the 'efficientnet' dependency not being found in the initial build of the segmentation_models (sm) package
!pip install -U git+https://github.com/qubvel/segmentation_models
# Upgrade scikit-image
!pip install --upgrade scikit-image
#RESTART THE KERNEL POST INSTALLATION
#This is to resolve the dependency issues with skimage.
!pip install numpy==1.17
!pip install ipdb
!pip install pandas_ml
!pip install nibabel pydicom medpy
!pip install seaborn -U
#RESTART THE KERNEL POST INSTALLATION of cell above
import os
#Confirmation that GPU is in working order.
os.environ['CUDA_VISIBLE_DEVICES'] = '0'
from medpy.io import load as load_segcaps
import tensorflow as tf
from sklearn.model_selection import train_test_split
import glob
import cv2
import keras
import numpy as np
import matplotlib.pyplot as plt
import imageio
import albumentations as A
import random
import segmentation_models as sm
import datetime
import itertools
from sklearn.utils import class_weight
import imageio
import numpy as np
import pickle
import ipdb
#os.environ['CUDA_VISIBLE_DEVICES'] = '0'
from keras.models import load_model
from scipy.spatial import distance
from collections import OrderedDict
from keras import backend as K_b
import shutil
import time
from PIL import Image
from pandas_ml import ConfusionMatrix
from keras.preprocessing.image import ImageDataGenerator
import pandas as pd
import seaborn as sns
import pathlib
from medpy.io import load
from sklearn.metrics import precision_recall_fscore_support
print(tf.test.gpu_device_name())
with tf.device('/gpu:0'):
a = tf.constant([1.0, 2.0, 3.0, 4.0, 5.0, 6.0], shape=[2, 3], name='a')
b = tf.constant([1.0, 2.0, 3.0, 4.0, 5.0, 6.0], shape=[3, 2], name='b')
c = tf.matmul(a, b)
with tf.Session() as sess:
print (sess.run(c))
###Output
/device:GPU:0
[[22. 28.]
[49. 64.]]
###Markdown
Loading data for analysis
###Code
test_data_dir='/home/ec2-user/SageMaker/data/50_imgs/test/NIFTI_MR_256x256_png_256grey_lvl/t1dual_inphase'
train_data_dir='/home/ec2-user/SageMaker/data/50_imgs/train/NIFTI_MR_256x256_png_256grey_lvl/t1dual_inphase'
valid_data_dir='/home/ec2-user/SageMaker/data/50_imgs/valid/NIFTI_MR_256x256_png_256grey_lvl/t1dual_inphase'
#Destination directory
dst_dir='/home/ec2-user/SageMaker/data/250_imgs/merge/NIFTI_MR_256x256_png_256grey_lvl/t1dual_inphase'
x_dst_dir = os.path.join(dst_dir, 'images')
y_dst_dir = os.path.join(dst_dir, 'masks')
paths_merge=list(zip(glob.glob(x_dst_dir+'/*.png'),
glob.glob(y_dst_dir+'/*.png')))
x_test_dir = os.path.join(test_data_dir, 'images')
y_test_dir = os.path.join(test_data_dir, 'masks')
x_train_dir = os.path.join(train_data_dir, 'images')
y_train_dir = os.path.join(train_data_dir, 'masks')
x_valid_dir = os.path.join(valid_data_dir, 'images')
y_valid_dir = os.path.join(valid_data_dir, 'masks')
###Output
_____no_output_____
###Markdown
Merging of test/train and validation data for running per image prediction using keras prediction generator
###Code
x_merge=[x_train_dir,x_test_dir,x_valid_dir]
y_merge=[y_train_dir,y_test_dir,y_valid_dir]
for x_dir,y_dir in list(zip(x_merge,y_merge)):
x_file_list=glob.glob(x_dir+'/*.png')
y_file_list=glob.glob(os.path.join(y_dir,'*.png'))
[shutil.copy(x_tmp,os.path.join(x_dst_dir,os.path.basename(x_tmp))) for x_tmp in x_file_list]
[shutil.copy(y_tmp,os.path.join(y_dst_dir,os.path.basename(y_tmp))) for y_tmp in y_file_list]
###Output
_____no_output_____
###Markdown
Resizing all images to match the 256x256 resolution
###Code
for vals in paths_merge:
trl_img=imageio.imread(vals[0])
trl_mask=imageio.imread(vals[1])
#trl_imgs_set={vals[0]:resize_img_PIL(trl_img),vals[1]:resize_img_PIL(trl_mask)}
#ipdb.set_trace()
#[imageio.imwrite(k,v) for k,v in trl_imgs_set.items() if type(v) is not str]
if trl_mask.shape!=(256,256):
print(vals[0])
###Output
_____no_output_____
###Markdown
Troubleshooting area for the test dataset and model loads, to ensure the generators are working correctly
###Code
test_dataset=gen_test_dataset(dst_dir,model_gnrl_params,preprocess_input,ret_img_path_var=False) #keyword renamed to match the gen_test_dataset signature defined below
CLASSES = ['l_kidney','liver','r_kidney','spleen']
n_classes = 1 if len(CLASSES) == 1 else (len(CLASSES) + 1)
test_dataset = Dataset(
x_dst_dir,
y_dst_dir,
classes=CLASSES,
preprocessing=get_preprocessing(preprocess_input),
augmentation=get_validation_augmentation(),ret_img_path=True)
#Local parameters for analysis
lcl_wghts_dir='/home/ec2-user/SageMaker/Masters-Thesis-UNet-repository/jupyter_notebooks/weights_history_full/cat_focal_loss/btch_sz_7/lr_0.0003/weights/t1dual_inphase_all_orgs_grey_lvl_256_optm_Adam_loss_cat_focal_loss_trn_samp_sz_250_btch_sz_7_lr_0.0003_time_2019-11-18_00000096.h5'
lcl_wghts_dir_2='/home/ec2-user/SageMaker/Masters-Thesis-UNet-repository/jupyter_notebooks/weights_history_full/cat_focal_loss/btch_sz_7/lr_0.0003/weights/t1dual_inphase_all_orgs_grey_lvl_256_optm_Adam_loss_cat_focal_loss_trn_samp_sz_250_btch_sz_7_lr_0.0003_time_2019-11-18_00000093.h5'
optimiser_tmp=keras.optimizers.Adam(0.0003)
total_loss_tmp=sm.losses.CategoricalFocalLoss()
start_time=time.time()
model_tmp=gen_test_model(model_gnrl_params,optimiser_tmp,total_loss_tmp,lcl_wghts_dir_2)
end_time=time.time()-start_time
print('processing time:',end_time)
import time
start_time=time.time()
output=model_tmp.predict_generator(test_dataset,steps=len(test_dataset))
end_time=time.time()-start_time
print('processing time:',end_time)
trl_path=os.path.join(x_dst_dir,
'pat_id_38_t1dual_inphase_slice_no_21_256grey_lvl_256x256.png')
trl_img=imageio.imread(trl_path)
trl_img.shape
img,mask,img_nm=test_dataset[1]
test_dataloader = CustomDataloder(test_dataset, batch_size=1, shuffle=False) #the loader class defined below is named CustomDataloder
###Output
_____no_output_____
###Markdown
Loading function calls for analysis Data loader and dataset functions
###Code
def resize_img_PIL(img:np.ndarray,shp_sp=(256,256)):
if img.shape!=shp_sp:
PIL_img=Image.fromarray(img)
np_img_reshp=np.array(PIL_img.resize(shp_sp))
return np_img_reshp
else:
return img
# helper function for data visualization
def visualize(fig_nm=None,figdim=(33,3.1),**images):
"""PLot images in one row."""
n = len(images)
print(fig_nm)
plt.figure(figsize=figdim)
for i, (name, image) in enumerate(images.items()):
plt.subplot(1, n, i + 1)
plt.xticks([])
plt.yticks([])
plt.title(' '.join(name.split('_')).title())
plt.imshow(image)
if fig_nm is not None:
plt.savefig(fig_nm,dpi=150)
plt.clf()
else:
plt.show()
# helper function for data visualization
def denormalize(x):
"""Scale image to range 0..1 for correct plot"""
x_max = np.percentile(x, 98)
x_min = np.percentile(x, 2)
x = (x - x_min) / (x_max - x_min)
x = x.clip(0, 1)
return x
# classes for data loading and preprocessing
class Dataset:
"""CamVid Dataset. Read images, apply augmentation and preprocessing transformations.
Args:
images_dir (str): path to images folder
masks_dir (str): path to segmentation masks folder
class_values (list): values of classes to extract from segmentation mask
augmentation (albumentations.Compose): data transformation pipeline
(e.g. flip, scale, etc.)
preprocessing (albumentations.Compose): data preprocessing
(e.g. normalization, shape manipulation, etc.)
"""
CLASSES = {'background':0,'liver':63,'r_kidney':126,'l_kidney':189,'spleen':252}
def __init__(
self,
images_dir,
masks_dir,
classes=None,
augmentation=None,
preprocessing=None,
ret_img_path=False
):
self.ids = os.listdir(images_dir)
self.images_fps = [os.path.join(images_dir, image_id) for image_id in self.ids]
self.masks_fps = [os.path.join(masks_dir, image_id) for image_id in self.ids]
# convert str names to class values on masks
self.class_values = [self.CLASSES[cls.lower()] for cls in classes]
self.ret_img_path=ret_img_path
self.augmentation = augmentation
self.preprocessing = preprocessing
def __getitem__(self, i):
# read data
image = imageio.imread(self.images_fps[i])#cv2.imread(self.images_fps[i])
image_nm=os.path.basename(self.images_fps[i])
#image = cv2.cvtColor(image, cv2.COLOR_BGR2RGB)
image=np.expand_dims(image,axis=2)
mask = cv2.imread(self.masks_fps[i], 0)
# extract certain classes from mask (e.g. cars)
masks = [(mask == v) for v in self.class_values]
mask = np.stack(masks, axis=-1).astype('float')
# add background if mask is not binary
if mask.shape[-1] != 1:
background = 1 - mask.sum(axis=-1, keepdims=True)
mask = np.concatenate((mask, background), axis=-1)
# apply augmentations
if self.augmentation:
sample = self.augmentation(image=image, mask=mask)
image, mask = sample['image'], sample['mask']
# apply preprocessing
if self.preprocessing:
sample = self.preprocessing(image=image, mask=mask)
image, mask = sample['image'], sample['mask']
if self.ret_img_path==True:
return image, mask,image_nm
else:
return image, mask
def __len__(self):
return len(self.ids)
class CustomDataloder(keras.utils.Sequence):
"""Load data from dataset and form batches
Args:
dataset: instance of Dataset class for image loading and preprocessing.
batch_size: Integer number of images in batch.
shuffle: Boolean, if `True` shuffle image indexes each epoch.
"""
def __init__(self, dataset, batch_size=1, shuffle=False):
self.dataset = dataset
self.batch_size = batch_size
self.shuffle = shuffle
self.indexes = np.arange(len(dataset))
self.on_epoch_end()
def __getitem__(self, i):
# collect batch data
start = i * self.batch_size
stop = (i + 1) * self.batch_size
data = []
for j in range(start, stop):
data.append(self.dataset[j])
# transpose list of lists
batch = [np.stack(samples, axis=0) for samples in zip(*data)]
return batch
def __len__(self):
"""Denotes the number of batches per epoch"""
return len(self.indexes) // self.batch_size
def on_epoch_end(self):
"""Callback function to shuffle indexes each epoch"""
if self.shuffle:
self.indexes = np.random.permutation(self.indexes)
def round_clip_0_1(x, **kwargs):
return x.round().clip(0, 1)
# define heavy augmentations
def get_training_augmentation(dim_sp=256):
rand_int_alpha=random.uniform(0,3)
if rand_int_alpha<=0.5:
rand_int_sigma=random.uniform(0.1,rand_int_alpha)
elif rand_int_alpha>=2:
rand_int_sigma=random.uniform(rand_int_alpha/1.8,rand_int_alpha)
else:
rand_int_sigma=random.uniform(rand_int_alpha/1.8,rand_int_alpha)
train_transform = [
#A.RandomGridShuffle(p=0.4,grid=(8, 8)),
A.ElasticTransform(p=0.9,alpha=rand_int_alpha,sigma=rand_int_sigma,border_mode=cv2.BORDER_REPLICATE), #,alpha_affine=20
A.HorizontalFlip(p=0.5),
A.VerticalFlip(p=0.5),
#A.RandomSizedCrop(p=0.5),
A.ShiftScaleRotate(scale_limit=0.5, rotate_limit=90, shift_limit=0.1, p=0.5, border_mode=cv2.BORDER_REPLICATE),
#A.PadIfNeeded(min_height=dim_sp, min_width=dim_sp, always_apply=True, border_mode=cv2.BORDER_REPLICATE),
#A.RandomCrop(height=dim_sp, width=dim_sp, always_apply=True),
A.OneOf(
[
A.IAASharpen(p=0.5),
A.Blur(blur_limit=3, p=0.5)
],
p=0.2,
),
A.Lambda(mask=round_clip_0_1)
]
return A.Compose(train_transform)
def get_validation_augmentation():
"""Add paddings to make image shape divisible by 32"""
test_transform = [
A.PadIfNeeded(256, 256)
]
return A.Compose(test_transform)
def get_preprocessing(preprocessing_fn):
"""Construct preprocessing transform
Args:
preprocessing_fn (callable): data normalization function
(can be specific for each pretrained neural network)
Return:
transform: albumentations.Compose
"""
_transform = [
A.Lambda(image=preprocessing_fn),
]
return A.Compose(_transform)
def keras_flow_from_dir(dst_dir,preprocess_input,
target_size_var=(256, 256),batch_size_var=7):
"""Creation of template based keras image generator for batch scale prediction for efficient processing."""
gen_test_2 =ImageDataGenerator(preprocessing_function = preprocess_input)
dst_dir=dst_dir+'_keras_dataloader' if dst_dir.find('_keras_dataloader')==-1 else dst_dir
dataloader=gen_test_2.flow_from_directory(dst_dir,target_size=target_size_var,
batch_size=batch_size_var,
class_mode=None,color_mode='grayscale',shuffle=False)
return dataloader
#tmp_v=test_dataset.next()
def gen_subdir_file_lst(dir_nm:str,file_sub_str:str):
"""The purpose of this method is to generate a file list of all h5 weights sorted for completing batch prediction"""
#ipdb.set_trace()
final_list=[]
for root,subdir,files in os.walk(dir_nm):
if len(files)>0:
file_list=glob.glob(root+file_sub_str)
final_list=final_list+file_list
#Sorted to ensure history part 2,3 etc are synced together
return sorted(final_list)
def get_file_info(file,add_info):
"""The purpose of this method is to pull file information from the file name presnet in the string"""
#ipdb.set_trace()
split_vals=file[:-14].split('_')
split_vals.sort()
file_dict={}
#Iterate through additional information of set of tuples on file strings for analysis
for param_k,param_v in add_info:
file_dict[param_k]=[x for x in param_v if x in split_vals][0]
file_dict['epoch_no']=99 #NOTE: hard-coded to 99; parsing it from the file name (int(split_vals[-1][:-3])) is disabled
return file_dict,split_vals
def gen_test_dataset(test_data_dir,model_gnrl_params,preprocess_input,ret_img_path_var):
x_test_dir=os.path.join(test_data_dir,'images')
y_test_dir=os.path.join(test_data_dir,'masks')
return Dataset(x_test_dir, y_test_dir,
classes=model_gnrl_params['classes'],
augmentation=get_validation_augmentation(),
preprocessing=get_preprocessing(preprocess_input),
ret_img_path=ret_img_path_var)
###Output
_____no_output_____
###Markdown
Generate model and test model functions
###Code
def gen_test_model(model_gnrl_param:dict,lrn_rate,total_loss,wghts_dir:str,cls_wghts_perc=None):
"""The purpose of this method is to generate a test model from the directory for analysis"""
loss_func={'cat':sm.losses.CategoricalCELoss(class_weights=cls_wghts_perc),
'wce':sm.losses.CategoricalCELoss(class_weights=cls_wghts_perc),
'focal':sm.losses.CategoricalFocalLoss(),
'dice':sm.losses.DiceLoss(class_weights=cls_wghts_perc)}
optim=keras.optimizers.Adam(lrn_rate)
reload_model = sm.Unet(model_gnrl_param['backbone'], classes=model_gnrl_param['n_classes'],
activation=model_gnrl_param['activation_type'],
encoder_weights=None,
input_shape=(None, None,model_gnrl_param['input_shape_N']))
reload_model.compile(optim,loss_func[total_loss],model_gnrl_param['metrics'])
reload_model.load_weights(wghts_dir)
return reload_model
###Output
_____no_output_____
###Markdown
Metrics for analysis
###Code
def gen_test_scores(model,test_dataloader,metrics)->dict:
"""The purpose of this method is to generate a summary dictionary
of a model test set metrics for analysis"""
metric_dict={}
scores = model.evaluate_generator(test_dataloader)
metric_dict["loss"]=scores[0]
for metric, value in zip(metrics, scores[1:]):
metric_dict[metric.__name__]=value
return metric_dict
def gen_per_class_dice_loss(y_true:np.ndarray,y_pred:np.ndarray,
channel='channel_last',
dice_dict=OrderedDict(left_kidney=0,liver=0,right_kidney=0,spleen=0,background=0))->dict:
"""The purpose of this method is to generate a dice loss for each organ within the logits present"""
assert y_true.shape==y_pred.shape,'Error predicted and ground tensors incorrect shape'
#ipdb.set_trace()
if channel=='channel_last':
channel_idx=2
assert y_true.shape[2]==len(dice_dict.keys()),'dictionary and prediction labels do not match'
tmp_dict={tup_st[0]:dice_score(y_true[:,:,i],y_pred[:,:,i]) for i,tup_st in enumerate(dice_dict.items())}
else:
channel_idx=0
assert y_true.shape[0]==len(dice_dict.keys()),'dictionary and prediction labels do not match'
tmp_dict={tup_st[0]:dice_score(y_true[i,:,:],y_pred[i,:,:]) for i,tup_st in enumerate(dice_dict.items())}
return tmp_dict
def dice_score(y_true_arr,y_pred_arr):
"""Return single f1 score for mask image for specific class. """
from sklearn.metrics import f1_score
zero_sum_chk=np.count_nonzero(y_true_arr)+np.count_nonzero(y_pred_arr)
if len(np.unique(y_true_arr))>2 or len(np.unique(y_pred_arr))>2:
print('non binary masks')
print(np.unique(y_true_arr))
print(np.unique(y_pred_arr))
if zero_sum_chk==0:
return 'NaN no classes in image'
else:
#ipdb.set_trace()
return f1_score(y_true_arr.astype(np.int64).flatten(),
y_pred_arr.astype(np.int64).flatten(),average='binary')
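# Worked example (illustrative): y_true = [1,1,0,0], y_pred = [1,0,0,1]
# gives TP=1, FP=1, FN=1, so Dice = 2*TP / (2*TP + FP + FN) = 2/4 = 0.5,
# which is exactly sklearn's binary f1_score on the flattened masks.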
def cond_gen_dir(dst_dir_val):
if os.path.isdir(dst_dir_val)==True:
pass
else:
try:
os.mkdir(dst_dir_val)
except FileNotFoundError as e:
print('creating a nested directory',dst_dir_val)
os.makedirs(dst_dir_val)
def gen_test_images(tmp_model,test_dataset,file_dict:dict,model_dir,mode,test_dataloder=None):
"""Generate testing images for analysis"""
loss_func=file_dict['loss_type']
lrn_rate=file_dict['learn_rate']
epoch_no=str(file_dict['epoch_no'])
btch_sz=str(file_dict['btch_sz'])
dst_dir=os.path.join(model_dir,'predict_imgs',loss_func+'_btch_sz_'+btch_sz+'_lr_'+lrn_rate+'_epoch_no_'+epoch_no)
#Generating new paths based on conditional path function for nested paths
#ipdb.set_trace()
dst_dir_logits=os.path.join(dst_dir,'prob_logits')
dst_dir_imgs=os.path.join(dst_dir,'images')
[cond_gen_dir(x) for x in [dst_dir_logits,dst_dir_imgs]]
#Making a directory based on initial analysis
if mode=='per_img_visualise':
dice_score_lst=per_img_prediction_keras(test_dataset,tmp_model,dst_dir_imgs,dst_dir_logits,file_dict)
else:
dice_score_lst=per_batch_img_prediction_keras(test_dataset,test_dataloder,
tmp_model,dst_dir_imgs,dst_dir_logits,file_dict)
#Summary path and dataframe
summary_path=os.path.join(dst_dir,'summary.csv')
tmp_df=pd.DataFrame(dice_score_lst)
tmp_df.to_csv(summary_path)
return dice_score_lst
def per_batch_img_prediction_keras(test_dataset,test_dataloder,tmp_model,dst_dir_imgs,dst_dir_logits,file_dict):
dice_score_lst=[]
#Squeezing predicted images down to pass testing.
num_files=len(test_dataset)
no_of_batches=len(test_dataloder)
#Resetting test dataloader for each prediction to ensure consistent indexing
test_dataloder.reset()
img_nms_lst=test_dataloder.filenames
pr_masks = tmp_model.predict_generator(test_dataloder,steps=no_of_batches)
no_imgs=pr_masks.shape[0]
#Creating logit binary mask for prediction writing logit file as well to file
test_dataset_ids=test_dataset.ids
for i in range(0,no_imgs):
#Generating image dataset and mask
#ipdb.set_trace()
img_nm=os.path.basename(img_nms_lst[i])
dst_img_path=os.path.join(dst_dir_imgs,'predict_binary_'+img_nm)
#Getting image mask from test dataset based on mask
gt_mask_idx=[i for i in range(0,num_files) if test_dataset_ids[i]==img_nm][0]
image,gt_mask,_=test_dataset[gt_mask_idx]
#Converting softmax logit to binary logit
pr_mask_sqz=write_logit_to_file(pr_masks[i,:,:,:],dst_dir_logits,img_nm)
#Writing final output to file
write_prediction_output(pr_masks[i,:,:,:],dst_dir_logits,
img_nm,gt_mask,dst_img_path,image,file_dict)
dice_loss_per_class=gen_full_dice_row(gt_mask,pr_mask_sqz,file_dict,img_nm)
dice_score_lst.append(dice_loss_per_class)
return dice_score_lst
def per_img_prediction_keras(test_dataset,tmp_model,dst_dir_imgs,dst_dir_logits,file_dict):
"""Per image prediction script to write all images to file prediciting on a per image basis"""
num_files=len(test_dataset)
dice_score_lst=[]
for i in range(0,num_files):
#Generating image dataset and mask
image,gt_mask,img_nm=test_dataset[i]
dst_img_path=os.path.join(dst_dir_imgs,'predict_binary_'+img_nm)
#Getting images setup for testing
image = np.expand_dims(image, axis=0)
#Squeezing predicted images down to pass testing.
pr_mask = tmp_model.predict(image)
pr_mask_sqz_logit=pr_mask.squeeze()
#Creating logit binary mask for prediction writing logit file as well to file
write_prediction_output(pr_mask_sqz_logit,dst_dir_logits,
img_nm,gt_mask,dst_img_path,image,file_dict)
pr_mask_sqz=logit_binarize(pr_mask_sqz_logit) #binarise the logits before scoring; pr_mask_sqz was previously undefined here
dice_loss_per_class=gen_full_dice_row(gt_mask,pr_mask_sqz,file_dict,img_nm)
dice_score_lst.append(dice_loss_per_class)
return dice_score_lst
def write_prediction_output(pr_mask_sqz_logit,dst_dir_logits,img_nm,
gt_mask,dst_img_path,image,file_dict):
pr_mask_sqz=write_logit_to_file(pr_mask_sqz_logit,dst_dir_logits,img_nm)
if file_dict['epoch_no']>96:
#Writing line of predicted images to file for analysis and verification.
visualize(dst_img_path,
image=denormalize(image.squeeze()),
gt_mask_l_kidney=gt_mask[:,:,0],
pr_mask_l_kidney=pr_mask_sqz[:,:,0],
gt_mask_liver=gt_mask[:,:,1],
pr_mask_liver=pr_mask_sqz[:,:,1],
gt_mask_r_kidney=gt_mask[:,:,2],
pr_mask_r_kidney=pr_mask_sqz[:,:,2],
gt_mask_spleen=gt_mask[:,:,3],
pr_mask_spleen=pr_mask_sqz[:,:,3],
gt_mask_background=gt_mask[:,:,4],
pr_mask_background=pr_mask_sqz[:,:,4],
)
def gen_full_dice_row(gt_mask:np.ndarray,pr_mask_sqz:np.ndarray,file_dict,img_nm)->dict:
#Generate dice loss per image
dice_loss_per_class=gen_per_class_dice_loss(gt_mask,pr_mask_sqz)
dice_loss_per_class['file_nm']=img_nm
dice_loss_per_class['loss_func']=file_dict['loss_type']
dice_loss_per_class['btch_sz']=float(file_dict['btch_sz'])
dice_loss_per_class['learn_rate']=float(file_dict['learn_rate'])
dice_loss_per_class['epoch_no']=float(file_dict['epoch_no'])
return dice_loss_per_class
def logit_binarize(logit_arr):
"""The purpose of this method is to perform softmax binarisation of logit array """
return np.where(logit_arr.max(axis=2,keepdims=True) == logit_arr,1,0).astype(np.float64)
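# e.g. a pixel whose channel probabilities are [0.2, 0.5, 0.3] becomes [0, 1, 0]:
# the arg-max channel is set to 1 and every other channel to 0 (exact ties would
# mark more than one channel, but softmax outputs rarely tie exactly).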
def write_logit_to_file(pr_mask_sqz_logit,dst_dir_logits,img_nm):
pr_mask_sqz=logit_binarize(pr_mask_sqz_logit)
dst_logit_path=os.path.join(dst_dir_logits,'predict_logit_'+os.path.splitext(img_nm)[0])
np.save(dst_logit_path,pr_mask_sqz_logit)
return pr_mask_sqz
def get_test_set_df(model_dir,test_data_dir,
model_gnrl_param_dict,add_info,mode='Test',
cls_wghts_perc_var=np.array([0.03987201, 0.36867433, 0.35872208, 0.2314718 , 0.00125978])):
"""The purpose of this method is to generate a """
#Generating assertion
assert mode.lower() in ['test','batch_visualise','per_img_visualise'],'incorrect mode selection'
#ipdb.set_trace()
#Generating list of paths to models weights for analysis
model_weights_dir=gen_subdir_file_lst(model_dir,'/*.h5')
#Model weights directory
#model_weights_dir=[x for x in model_weights_dir if x.find('focal')==-1]
model_weights_dir=[x for x in model_weights_dir if x.find('wce')==-1]
model_weights_dir=[x for x in model_weights_dir if x.find('cat_ce')==-1]
tmp_dice_lst_2=['dice','0.1','0.0003']
model_weights_dir=[x for x in model_weights_dir if len([y for y in tmp_dice_lst_2 if x.find(y)!=-1])<2]
#ipdb.set_trace()
#Generating dataset for analysis
preprocess_input = sm.get_preprocessing(model_gnrl_param_dict['backbone'])
test_dataset=gen_test_dataset(test_data_dir,model_gnrl_params,preprocess_input,ret_img_path_var=True)
if mode.lower()=='batch_visualise':
test_dataloader = keras_flow_from_dir(test_data_dir,preprocess_input)
elif mode.lower()=='test':
test_dataset=gen_test_dataset(test_data_dir,model_gnrl_params,preprocess_input,ret_img_path_var=False)
test_dataloader = CustomDataloder(test_dataset, batch_size=1, shuffle=False)
#ipdb.set_trace()
#Final json list to return for analysis
final_lst=[]
for fl_path in model_weights_dir:
print(fl_path)
#ipdb.set_trace()
#File path dictionary information for analysis
file_dict,split_vals=get_file_info(fl_path,add_info)
#ipdb.set_trace()
#Generating specific parameters for loading the model based on the file name
start_model=time.time()
#Load temporary model for analysis
tmp_model=gen_test_model(model_gnrl_param_dict,
float(file_dict['learn_rate']),
file_dict['loss_type'],fl_path,cls_wghts_perc_var)
finish_model=time.time()
#Test model based on dataset and analysis
if mode.lower()=='test':
tmp_score_dict=gen_test_scores(tmp_model,test_dataloader,model_gnrl_params['metrics'])
file_dict.update(tmp_score_dict)
final_lst.append(file_dict)
else:
#ipdb.set_trace()
start=time.time()
tmp_lst=gen_test_images(tmp_model,test_dataset,file_dict,model_dir,mode,test_dataloader)
finish=time.time()
print('total time loading model:',finish_model-start_model)
print('total time predicting images:',finish-start)
#Merging list of dictionaries for analysis
final_lst=final_lst+tmp_lst
#print(pd.DataFrame.from_dict(tmp_lst[:10]))
K_b.clear_session()
return final_lst
###Output
_____no_output_____
###Markdown
Load initial parameters for analysis
###Code
#Loading arguments for analysis
#'efficientnetb3'densenet121
#BATCH_SIZE = 3
CLASSES = ['l_kidney','liver','r_kidney','spleen']
activation = 'sigmoid' if len(CLASSES) == 1 else 'softmax'
metrics = [sm.metrics.IOUScore(threshold=0.5), sm.metrics.FScore(threshold=0.5)]
cls_wghts_perc=np.array([0.03987201, 0.36867433, 0.35872208, 0.2314718 , 0.00125978])
#Getting keys for different analysis types
add_info=[('learn_rate',['0.0003','0.001','0.01','0.1']),
('samp_sz',['500','250','50']),('btch_sz',['3','7']),
('loss_type',['dice','focal','wce'])]
model_gnrl_params={'backbone':'resnet101','n_classes':len(CLASSES)+1,
'metrics':metrics,'input_shape_N':1,
'activation_type':activation,'classes':['l_kidney','liver','r_kidney','spleen']}
#Generating weights directory for iterating for analysis
#Directory lists
model_dir='/home/ec2-user/SageMaker/data/unet_data_aug_modified_results/data_aug_all_param_reducd_50perc/cat_focal_loss/btch_sz_3/lr_0.001/final_epch_wghts'
test_data_dir='/home/ec2-user/SageMaker/data/250_imgs/merge/NIFTI_MR_256x256_png_256grey_lvl/t1dual_inphase'
preprocess_input = sm.get_preprocessing(model_gnrl_params['backbone'])
trl_ls=[]
model_weights_dir=gen_subdir_file_lst(model_dir,'/*.h5') #build the weights file list first; this variable was previously undefined in this cell
for vals in model_weights_dir:
tmp_dict={}
val_split=vals.split('/')
tmp_dict['lrn_rate']=val_split[-3]
tmp_dict['btch_sz']=val_split[-4]
tmp_dict['loss']=val_split[-5]
trl_ls.append(tmp_dict)
trl_df=pd.DataFrame(trl_ls)
#trl_df.drop_duplicates(inplace=True)
trl_df.shape
model_results=get_test_set_df(model_dir,test_data_dir,
model_gnrl_params,add_info)
model_df=pd.DataFrame.from_dict(model_results)
model_df.to_csv('unet_model_test_data_summary_results_05_11_2019.csv')
model_df.sort_values('f1-score',ascending=False)
###Output
_____no_output_____
###Markdown
Visualisation of results Testing on single set of parameters
###Code
fl_path='/home/ec2-user/SageMaker/Masters-Thesis-UNet-repository/jupyter_notebooks/weights_history_full/wce_loss/btch_sz_3/lr_0.001/weights/t1dual_inphase_all_orgs_grey_lvl_256_optm_Adam_loss_wce_loss_trn_samp_sz_250_btch_sz_3_lr_0.001_time_2019-11-17_00000006.h5'
loss='focal'
#gen_test_model(model_gnrl_param:dict,lrn_rate,total_loss,wghts_dir:str,cls_wghts_perc=None)
tmp_model=gen_test_model(model_gnrl_params,
0.003,
loss,fl_path)
#test_dataset_visualise=gen_test_dataset(test_data_dir,model_gnrl_params,preprocess_input,)
tmp_model.count_params()
###Output
_____no_output_____
###Markdown
Testing across all parameters; please check get_test_set_df for the list filters first!
###Code
model_dir='/home/ec2-user/SageMaker/data/unet_data_aug_modified_results/data_aug_all_param_reducd_25perc/cat_focal_loss/btch_sz_3/final_epch_wghts'
tmp_subdir_lst=['50','250','500']
for file_sz in tmp_subdir_lst:
model_dir_gnrl=os.path.join(model_dir,file_sz+'_imgs')
model_results=get_test_set_df(model_dir_gnrl,test_data_dir,
model_gnrl_params,add_info,mode='batch_visualise')
###Output
_____no_output_____
###Markdown
Comparing plots of predicted images to masks to determine whether the left and right kidneys are confused with each other Generating initial dataset for analysis
###Code
src_logit_dir='/home/ec2-user/SageMaker/data/unet_predict_logits/500_imgs/500_img/predict_imgs/focal_btch_sz_3_lr_0.0003_epoch_no_99/prob_logits'
src_mask_dir='/home/ec2-user/SageMaker/data/500_imgs/'
#Getting the logit files for analysis
logit_dir_fl=list(pathlib.Path(src_logit_dir).rglob('*.npy'))
#Substring to filter for masks
sub_str_chk=['/masks/','/t1dual_inphase/']
lr_rates=['lr_0.001','lr_0.0003','lr_0.01','lr_0.1']
loss_type=['focal','dice']
from sklearn.metrics import precision_recall_fscore_support
#Class dictionaries for analysis
org_idx={'l_kidney':0,'liver':1,'r_kidney':2,'spleen':3,'background':-1}
cls_dict = {'background':0,'liver':63,'r_kidney':126,'l_kidney':189,'spleen':252}
cls_int_inv_dict={org_idx[k]:v for k,v in cls_dict.items()}
classes=['l_kidney','liver','r_kidney','spleen']
class_values = [cls_dict[cls.lower()] for cls in classes]
#Getting mask files for analysis
mask_raw_fl=list(pathlib.Path(src_mask_dir).rglob('*.png'))
#Creating basename dictionary for file list
logit_dir_dict={os.path.splitext(os.path.basename(x))[0]:x for x in logit_dir_fl if str(x).find(loss_type[0])!=-1}
#Finding only t1dual images with masks for analysis
mask_dir_fl=[x for x in mask_raw_fl if all(str(x).find(y)!=-1 for y in sub_str_chk)]
#Creating basename file name dictionary for string matching.
bs_nm_msk_dict_pth={os.path.splitext(os.path.basename(x))[0]:x for x in mask_dir_fl}
cd /home/ec2-user/SageMaker/Masters-Thesis-UNet-repository/jupyter_notebooks
final_src_dst_dict={}
for k,v in logit_dir_dict.items():
k_mask_str=k.replace('predict_logit_','')
#Getting final mask directory and logit directory together to run analysis against one another
try:
final_src_dst_dict[v]=bs_nm_msk_dict_pth[k_mask_str]
except KeyError as e:
print('key not found for:',k_mask_str)
final_src_dst_dict
###Output
_____no_output_____
###Markdown
Generating F1 score for analysis
###Code
final_lst=[]
for logit_pth,mask_file_pth in final_src_dst_dict.items():
#Get arrays loaded up
logit_arr=np.load(logit_pth)
logit_arr=logit_binarize(logit_arr)
# read data
mask = imageio.imread(mask_file_pth)
mask=resize_img_PIL(mask)
mask=gen_binary_mask(mask,class_values)
for org_nm,idx in org_idx.items():
if np.sum((mask[:,:,idx],logit_arr[:,:,idx]))!=0:
#print('mask_value',np.sum(mask[:,:,idx]))
#print('logit_value',np.sum(logit_arr[:,:,idx]))
tmp_prec,tmp_recall,tmp_f1,tmp_support=precision_recall_fscore_support(mask[:,:,idx].flatten(),
logit_arr[:,:,idx].flatten(),
average='binary')
tmp_tp,tmp_fp,tmp_tn,tmp_fn=gen_tp_fp_fp_fn(mask[:,:,idx].flatten(),logit_arr[:,:,idx].flatten())
else:
tmp_prec,tmp_recall,tmp_f1,tmp_support=('NaN','NaN','NaN','NaN')
tmp_tp,tmp_fp,tmp_tn,tmp_fn=('NaN','NaN','NaN','NaN')
#Get temporary statical dictionary
tmp_stat_dict=gen_pred_row(mask_file_pth,logit_pth,org_nm,
tmp_prec,tmp_recall,tmp_f1,tmp_support,
tmp_tp,tmp_fp,tmp_tn,tmp_fn,loss_type='focal',lr_rate=0.001,samp_sz=500)
final_lst.append(tmp_stat_dict)
final_df=pd.DataFrame(final_lst)
final_df.to_excel('unet_focal_loss_f1_score_per_class_500_imgs_data.xlsx')
import pickle
with open('/home/ec2-user/SageMaker/data/per_pat_gnrl_info/per_pat_slc_no.pickle','rb') as fb:
per_pat_per_slc_dict=pickle.load(fb)
per_pat_per_slc_dict={int(k):int(v) for k,v in per_pat_per_slc_dict.items()}
def gen_perc_slc_grad(pat_no,slc_no,slc_dict):
try:
total_no_slcs=slc_dict[pat_no]
except KeyError as e:
ipdb.set_trace()
return slc_no/total_no_slcs
final_df['slice_no']=final_df.patient.str.split('_',expand=True)[7]
final_df['pat_id']=final_df.patient.str.extract('(\d+)')[0]
cols_num_conv=['pat_id','false_positive','false_negative','true_positive','true_negative','pat_id','slice_no',
'precision', 'recall','F1_score']
final_df[cols_num_conv] = final_df[cols_num_conv].apply(pd.to_numeric, errors='coerce')
final_df['perc_slice_no'] = final_df.apply(lambda x: gen_perc_slc_grad(x.pat_id,
x.slice_no,
per_pat_per_slc_dict), axis=1)
#slice_test df results
final_df['perc_slice_no'] = final_df['perc_slice_no'].apply(pd.to_numeric, errors='coerce')
final_df_test=final_df[final_df.pat_id.isin([2,3,8,32,39])]
final_df[final_df.F1_score==0].groupby(['organ_type'])['false_negative'].sum()
final_df_test.organ_type.unique()
pwd
final_df[final_df.F1_score==0].groupby(['organ_type'])['false_negative'].sum()
#fig,axs=plt.subplots(figsize=(20,20))
#sns.set(font_scale=1.1)
org_str='spleen'
g=sns.jointplot(data=final_df_test[(final_df_test.organ_type==org_str)],
x='perc_slice_no',y='F1_score',kind='kde',ylim=(0,1),xlim=(0,1))
g.savefig('u_net_focal_loss_lr_0.001_samp_sz_50_'+org_str+'_f1score_wrt_per_slice_no_t1dual.jpeg')
###Output
_____no_output_____
###Markdown
Kidney specific scripting for dice scores
###Code
kidney_mask_concat=merge_arrs(mask[:,:,0],mask[:,:,2])
kidney_logit_concat=merge_arrs(logit_arr[:,:,0],logit_arr[:,:,2])
tmp_prec,tmp_recall,tmp_f1,tmp_support=precision_recall_fscore_support(kidney_mask_concat.flatten(),
kidney_logit_concat.flatten(),
average='binary')
tmp_stat_dict=gen_pred_row(mask_file_pth,logit_pth,'both_kidneys',
tmp_prec,tmp_recall,tmp_f1,tmp_support)
final_lst.append(tmp_stat_dict)
#Right kidney predicting left kidney
tmp_prec,tmp_recall,tmp_f1,tmp_support=precision_recall_fscore_support(mask[:,:,0].flatten(),
logit_arr[:,:,2].flatten(),
average='binary')
tmp_stat_dict=gen_pred_row(mask_file_pth,logit_pth,'right_kidney_pred_left',
tmp_prec,tmp_recall,tmp_f1,tmp_support)
final_lst.append(tmp_stat_dict)
#Left kidney predicting right kidney
tmp_prec,tmp_recall,tmp_f1,tmp_support=precision_recall_fscore_support(mask[:,:,2].flatten(),
logit_arr[:,:,0].flatten(),
average='binary')
final_df.to_csv('unet_dice_lr_0.0003_epch_69_per_organ_prec_recall_f1score.csv')
final_df[['F1_score','precision','recall']]=final_df[['F1_score','precision','recall']].apply(pd.to_numeric,
errors='coerce')
final_df_agg=final_df.groupby('organ_type')[['F1_score','precision','recall']].mean()
final_df_agg.columns=['Dice_score','Precision','Recall']
final_df_agg.to_csv('aggregate_unet_dice_lr_0.0003_epch_69__dice_prec_recall_table.csv')
final_df_agg
###Output
_____no_output_____
###Markdown
Generating logit to actual image predictions side by side.
###Code
#'SegCaps_multilabels_2019-11-28_11-24-37
file_name='SegCaps_multilabels_2019-11-09_01-25-53'
src_logit_dir=os.path.join('/home/ec2-user/SageMaker/data/seg_caps_predict_logits',file_name)
src_mask_dir='/home/ec2-user/SageMaker/data/500_imgs/'
dst_path=os.path.join('/home/ec2-user/SageMaker/results_segcaps_predict_imgs',file_name)
#Getting the logit files for analysis
logit_dir_fl=list(pathlib.Path(src_logit_dir).rglob('*.mha'))
#Substring to filter for masks
sub_str_chk_mask=['/masks/','/t1dual_inphase/']
#Substring to filter for images
sub_str_chk_img=['/images/','/t1dual_inphase/']
#learning rate and model names to filter logits
lr_rates=['lr_0.001','lr_0.0003','lr_0.01','lr_0.1']
segcap_model=['SegCaps_multilabels_2019-11-09_01-25-53','SegCaps_multilabels_2019-11-27_20-02-58',
'SegCaps_multilabels_2019-11-28_11-24-37']
#Class dictionaries for analysis
org_idx_unet={'l_kidney':0,'liver':1,'r_kidney':2,'spleen':3,'background':4}
org_idx_segcaps={'l_kidney':2,'liver':1,'r_kidney':3,'spleen':4,'background':0}
cls_dict = {'background':0,'liver':63,'r_kidney':126,'l_kidney':189,'spleen':252}
#class-int dicts defined for finding maximum arrays
cls_int_inv_dict_unet={org_idx_unet[k]:v for k,v in cls_dict.items()}
cls_int_inv_dict_segcaps={org_idx_segcaps[k]:v for k,v in cls_dict.items()}
classes=['l_kidney','liver','r_kidney','spleen']
class_values = [cls_dict[cls.lower()] for cls in classes]
#Getting mask files for analysis
mask_raw_fl=list(pathlib.Path(src_mask_dir).rglob('*.png'))
#Creating basename dictionary for file list
logit_dir_dict={os.path.splitext(os.path.basename(x))[0]:x for x in logit_dir_fl if str(x).find(file_name)!=-1}
#Finding only t1dual images with masks for analysis
mask_dir_fl=[x for x in mask_raw_fl if all(str(x).find(y)!=-1 for y in sub_str_chk_mask)]
#Finding image substring match
img_dir_fl=[x for x in mask_raw_fl if all(str(x).find(y)!=-1 for y in sub_str_chk_img)]
#Creating basename file name dictionary for string matching.
bs_nm_msk_dict_pth={os.path.splitext(os.path.basename(x))[0]:x for x in mask_dir_fl}
bs_nm_img_dict_pth={os.path.basename(x):x for x in img_dir_fl}
final_src_dst_dict={}
for k,v in logit_dir_dict.items():
k_mask_str=k.replace('_prediction','')
#Getting final mask directory and logit directory together to run analysis against one another
try:
final_src_dst_dict[v]=bs_nm_msk_dict_pth[k_mask_str]
except KeyError as e:
print('key not found for:',k_mask_str)
plt.imshow(logit_arr[:,:,0])
plt.hist(logit_arr[:,:,0].flatten())
plt.xlabel('probability map value')
plt.ylabel('occurrences')
concat_arr_logit_background=None
for logit_pth,mask_file_pth in final_src_dst_dict.items():
#Get arrays loaded up
logit_arr,_=load(str(logit_pth))
back_arr=logit_arr[:,:,0].flatten()
if concat_arr_logit_background is None:
concat_arr_logit_background=back_arr
else:
concat_arr_logit_background=np.concatenate((concat_arr_logit_background,back_arr))
plt.hist(concat_arr_logit_background,density=True,bins=20)
plt.xticks(np.arange(0, 1, step=0.05),rotation=45)
plt.xlabel('probability map value')
plt.ylabel('occurrences')
final_src_dst_dict
#Index slicing dictionary for visualisation: first index in tuple is logit, second index is mask.
mask_logit_idx_slc_segcaps={'background':(0,4),'l_kidney':(3,0),'r_kidney':(2,2),'liver':(1,1),'spleen':(4,3)}
mask_logit_idx_slc_unet={'background':(4,4),'l_kidney':(0,0),'r_kidney':(2,2),'liver':(1,1),'spleen':(3,3)}
final_lst=[]
for logit_pth,mask_file_pth in final_src_dst_dict.items():
#Get arrays loaded up
logit_arr,_=load(str(logit_pth))
break
logit_arr=logit_binarize(logit_arr)
#logit_arr=reset_logit_int(logit_arr,cls_int_inv_dict_segcaps)
pr_mask=np.rot90(logit_arr,3)
#pr_mask_arr=logit_binarize(np.array(rot_img))
#tmp_img_path
img_nm=os.path.basename(mask_file_pth)
tmp_img=imageio.imread(bs_nm_img_dict_pth[img_nm])
dst_img_path=os.path.join(dst_path,'binary_predict_'+img_nm)
# read data
mask = imageio.imread(mask_file_pth)
mask=resize_img_PIL(mask)
gt_mask=gen_binary_mask(mask,class_values)
visualize(dst_img_path,
image=tmp_img,
gt_mask_l_kidney=gt_mask[:,:,mask_logit_idx_slc_segcaps['l_kidney'][1]],
pr_mask_l_kidney=pr_mask[:,:,mask_logit_idx_slc_segcaps['l_kidney'][0]],
gt_mask_liver=gt_mask[:,:,mask_logit_idx_slc_segcaps['liver'][1]],
pr_mask_liver=pr_mask[:,:,mask_logit_idx_slc_segcaps['liver'][0]],
gt_mask_r_kidney=gt_mask[:,:,mask_logit_idx_slc_segcaps['r_kidney'][1]],
pr_mask_r_kidney=pr_mask[:,:,mask_logit_idx_slc_segcaps['r_kidney'][0]],
gt_mask_spleen=gt_mask[:,:,mask_logit_idx_slc_segcaps['spleen'][1]],
pr_mask_spleen=pr_mask[:,:,mask_logit_idx_slc_segcaps['spleen'][0]],
gt_mask_background=gt_mask[:,:,mask_logit_idx_slc_segcaps['background'][1]],
pr_mask_background=pr_mask[:,:,mask_logit_idx_slc_segcaps['background'][0]],
)
for org_nm,idx in org_idx_segcaps.items():
if np.sum((gt_mask[:,:,idx],pr_mask[:,:,idx]))!=0:
#print('mask_value',np.sum(mask[:,:,idx]))
#print('logit_value',np.sum(logit_arr[:,:,idx]))
tmp_prec,tmp_recall,tmp_f1,tmp_support=precision_recall_fscore_support(gt_mask[:,:,idx].flatten(),
pr_mask[:,:,idx].flatten(),
average='binary')
tmp_tp,tmp_fp,tmp_tn,tmp_fn=gen_tp_fp_fp_fn(gt_mask[:,:,idx].flatten(),pr_mask[:,:,idx].flatten())
else:
tmp_prec,tmp_recall,tmp_f1,tmp_support=('NaN','NaN','NaN','NaN')
tmp_tp,tmp_fp,tmp_tn,tmp_fn=('NaN','NaN','NaN','NaN')
#Get temporary statical dictionary
tmp_stat_dict=gen_pred_row(mask_file_pth,logit_pth,org_nm,
tmp_prec,tmp_recall,tmp_f1,tmp_support,
tmp_tp,tmp_fp,tmp_tn,tmp_fn,loss_type='WCE',lr_rate=0.1,samp_sz=250)
final_lst.append(tmp_stat_dict)
final_df_segcaps=pd.DataFrame(final_lst)
final_df_segcaps.to_csv(os.path.join(dst_path,file_name+'df_f1score_re_prec_df.csv'))
###Output
_____no_output_____
###Markdown
Confusion matrix generation
###Code
from sklearn.metrics import precision_recall_fscore_support
y_true_arr=None
y_pred_arr=None
for logit_pth,mask_file_pth in final_src_dst_dict.items():
#Get arrays loaded up
logit_arr=np.load(logit_pth)
logit_arr=logit_binarize(logit_arr)
logit_arr=reset_logit_int(logit_arr,cls_int_inv_dict)
logit_arr=comp_logit(logit_arr)
y_pred_arr=concat_flat_arr(logit_arr.flatten(),y_pred_arr)
# read data
mask = imageio.imread(mask_file_pth)
mask=resize_img_PIL(mask)
y_true_arr=concat_flat_arr(mask.flatten(),y_true_arr)
#Generating temporary confusion matrix
tmp_conf_mat=ConfusionMatrix(y_true_arr,y_pred_arr)
tmp_conf_mat_df=tmp_conf_mat.to_dataframe()
#rename analysis
tmp_conf_mat_df=tmp_conf_mat.to_dataframe()
tmp_conf_mat_df.rename({0.0:'Background',
63:'liver',
126:'r_kidney',
189:'l_kidney',
252:'spleen'},axis=0,inplace=True)
tmp_conf_mat_df.rename({0.0:'Background',
63:'liver',
126:'r_kidney',
189:'l_kidney',
252:'spleen'},axis=1,inplace=True)
tmp_conf_mat_df.to_csv('unet_focal_loss_lr_0.001_epch_no99_samp_sz_500_conf_mat.csv')
tmp_conf_mat_df
pwd
def concat_flat_arr(arr:np.ndarray,concat_arr)->np.ndarray:
"""The purpose of this method is to concat an array together with an arra yor none type
primarily this function is used as an aggregatoin function at the end of a for loop. """
if concat_arr is None:
return arr
else:
#ipdb.set_trace()
concat_arr=np.concatenate((concat_arr,arr))
return concat_arr
def reset_logit_int(logit:np.ndarray,cls_lbl_dict)->np.ndarray:
"""The purpose of this method is to compress a logit down into a single layer array for analysis """
for k,v in cls_lbl_dict.items():
logit[:,:,k]=np.where(logit[:,:,k]==1,v,0)
return logit
def comp_logit(logit:np.ndarray)->np.ndarray:
return np.amax(logit,axis=2)
def gen_pred_row(mask_file_pth:str,logit_pth:str,
org_nm:str,tmp_prec,tmp_recall,tmp_f1,
tmp_support,tmp_tp,tmp_fp,tmp_tn,tmp_fn,
loss_type=None,lr_rate=None,samp_sz=None)->dict:
final_dict={'patient':os.path.splitext(os.path.basename(mask_file_pth))[0],'samp_sz':samp_sz,
'loss':loss_type,
'learn_rate':lr_rate,
'organ_type':org_nm,
'precision':tmp_prec,'recall':tmp_recall,'F1_score':tmp_f1,
'support_no':tmp_support,'true_positive':tmp_tp,'false_positive':tmp_fp,
'true_negative':tmp_tn,'false_negative':tmp_fn}
if loss_type is None:
final_dict['loss']=[x for x in ['focal','dice','wce','cat'] if str(logit_pth).find(x)!=-1][0] #the loss_type parameter is None in this branch and shadows the module-level list, so the known loss names are inlined here
if lr_rate is None:
final_dict['learn_rate']=[x for x in lr_rates if str(logit_pth).find(x)!=-1][0]
return final_dict
def gen_tp_fp_fp_fn(y_true,y_pred):
#Y true y predict true positive and negatives
pos_y_true=(y_true==1)
pos_y_pred=(y_pred==1)
neg_y_true=(y_true==0)
neg_y_pred=(y_pred==0)
#Generating false positive values
true_pos=len(np.where(pos_y_true&pos_y_pred)[0])
false_pos=len(np.where(pos_y_pred&neg_y_true)[0])
#Generating true negatives and false negatives
true_neg=len(np.where(neg_y_true&neg_y_pred)[0])
false_neg=len(np.where(neg_y_pred&pos_y_true)[0])
return true_pos,false_pos,true_neg,false_neg
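# Quick check (illustrative): y_true=[1,0,1,0], y_pred=[1,1,0,0]
# -> true_pos=1, false_pos=1, true_neg=1, false_neg=1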
def gen_binary_mask(mask:np.ndarray,class_values:list,reord_stack=None)->np.ndarray:
# extract certain classes from mask (e.g. cars)
masks = [(mask == v) for v in class_values]
mask = np.stack(masks, axis=-1).astype('float')
# add background if mask is not binary
if mask.shape[-1] != 1:
background = 1 - mask.sum(axis=-1, keepdims=True)
mask = np.concatenate((mask, background), axis=-1)
if reord_stack is None:
return mask
else:
return np.transpose(mask,reord_stack) #fix: the transposed mask was previously computed but never returned
def merge_arrs(arr1,arr2):
return np.where(arr2>0,1,arr1)
###Output
_____no_output_____ |
Lectures/Lecture10_AdvancedTopics/notebook.ipynb | ###Markdown
Introduction to CNN (Convolutional Neural Network) May 1 2021 Hosted by and maintained by the [Student Association for Applied Statistics (SAAS)](https://saas.berkeley.edu). Created by Chinmay Gharpure, Zoe Liu, Ritvik Iyer, Jessica Wang, Harry Dong, Derek Cai, Matt Moon Table of Contents 1. [What is a Convolutional Neural Net?](cnn_intro) 1. [Why use CNNs over MLPs?](cnn_mlp) 2. [CNN Layers](layers) 1. [Convolutional layers](cnn_layers) 2. [Pooling layers](pooling_layers) 3. [Key terms](key_terms) 4. [Convolution Demo](demo) 5. [Example of CNN Architecture](cnn_arch) 6. [Classifying MNIST with CNN](mnist) 7. [Summary](summary) What is a Convolutional Neural Net? One important class of neural net architectures is convolutional neural networks. Convolutional neural networks, or CNNs, are a type of neural network that can take in an input image, recognize particular patterns in the image like edge locations, and use the differences in patterns to differentiate images from each other. This architecture is very popular in problems involving images, such as object recognition and image classification, although there are many other applications too. We will primarily use image inputs as the running example. Why use CNNs over MLPs? If we consider each pixel to be a feature, we can consider an image to be a matrix of numeric values. In that case, **why can't we flatten the matrix and pass it into a multi-layer perceptron to perform tasks involving images?** The answer is that **regular neural networks like MLPs don't scale well with image tasks**. Let's consider the example of a 32x32x3 image (32 pixels wide, 32 pixels high, and 3 color channels). In this case, every neuron in the first fully connected layer would need 32 x 32 x 3 = 3072 weights. However, if we would like to use high-quality images, the number of weights we would need to find quickly balloons in value. This makes normal neural networks extremely weighty and computationally expensive. In addition, patterns in images are usually not recognizable at a pixel level of granularity. When you look at a picture of a dog, you likely recognize it as a dog not by looking at each individual hair, but by looking at the placement and shape of its eyes, nose, and mouth. Similarly, CNNs help us capture more general patterns in images like edges, textures, and visual patterns. CNN Layers Just like multi-layer perceptrons, convolutional neural networks are made up of layers that do specialized processing on the input. Today, we'll focus on two of the most important layer types found in CNNs: **convolutional layers** and **pooling layers**. Channels An important concept we need to understand before we talk about the different layers is channels. Channels refer to the color channels present in the image. In a normal color image, there are three color channels present: red, green, and blue. However, images don't have to be constrained to RGB. For example, CMYK images have four color channels: cyan, magenta, yellow, and black. A color image can be represented as a matrix of dimensions $width \times height \times channels$, and we convolve over each channel separately. As we go deeper into the network and the number of channels increases, the channels become more abstract and it becomes hard to attach a color to each one. As such, it may be helpful to think of images as 3-D matrices, also known as tensors, and as we go through each layer of the network, we are operating on this tensor to create new tensors of possibly different shapes and sizes. Basically, try not to think about colors when dealing with CNNs. 
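As a quick sanity check on the numbers above, here is a tiny sketch of the tensor view of an image (random values, purely illustrative):
###Code
import numpy as np

# A color image as a height x width x channels tensor
image = np.random.rand(32, 32, 3)
print(image.shape)             # (32, 32, 3)
print(image.reshape(-1).shape) # (3072,): the 3072 features an MLP would need weights for
###Output
_____no_output_____
###Markdown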
Convolutions, Filters, and Convolutional Layers A **convolution** operation is an element-wise multiply-and-sum operation, where one of the matrices is the image, and the other is the **filter** (also called kernel or feature detector) that turns the image into something else. A filter is a learnable matrix that we "slide", or convolve, across the image. As we do so, we take the dot product between the image and the filter. The output of this is the final convolved image.+ input (\[input height] x \[input width] x \[input channels])+ filters (\[filter height] x \[filter width] x \[input channels] x \[output channels])+ output (\[output height] x \[output width] x \[output channels]) In the example below, the filter is the yellow sliding window and its value is $$\begin{bmatrix} 1 & 0 & 1 \\ 0 & 1 & 0 \\ 1 & 0 & 1 \end{bmatrix}$$. **Convolutional layers**, which consist of convolution operations performed on the entire image, make up one of the most important parts of a CNN architecture. ![Picture title](Convolution_schematic.gif) Pooling Layers For fully connected networks discussed in the previous lecture, we had nonlinearities between each fully connected layer. Similarly, convolutional layers are linear operations, so we should also have nonlinearities between them. These nonlinearity layers for CNNs are known as pooling layers. A pooling layer operates similarly to a convolutional layer in that we slide a window like in the previous example and apply a function to that window to output a scalar. A couple of popular choices are max pooling (where we output the max value in that window) and average pooling (where we output the average of all values in the window). We do this for each channel, so the number of channels is preserved.+ input (\[input height] x \[input width] x \[input channels])+ pooling window (\[window height] x \[window width])+ output (\[output height] x \[output width] x \[input channels]) Here is an example of max pooling: ![Picture title](image-20210427-191403.png) More Key Terms for CNN In order to comprehend convolutional neural nets, we also need to understand some common terms. Padding You may want to add borders to your inputs, typically some constant such as 0. The purpose of this is to prevent our outputs from shrinking too much after each layer. For instance you could 0-pad a 5x5x3 image by width 2, which would result in the same image but bordered by 2 layers of 0's (a 9x9x3 image). Another example is shown below. ![Picture title](image-20210425-132850.png) Stride Instead of sliding the window in convolutional and pooling layers one pixel at a time, we can choose to apply the filter or pooling function after each stride of length $k$. This applies to both columns and rows. With strides, you can greatly reduce the amount of computation at this layer, but taking too big of a stride will irrecoverably lose a lot of crucial information. As an example of strides, applying a 3x3x1x1 filter to a 5x5x1 image would typically get us a 3x3x1 output, but with a stride of 2, the output would be a 2x2x1 since we take two steps each time we apply a convolution/pooling. Another example is shown below. ![Picture title](image-20210425-132523.png)
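Before moving on to dilation, here is a minimal NumPy sketch tying the pieces above together: a 2D convolution (no padding, stride 1) using the 3x3 filter from the animation, followed by a 2x2 max pool with stride 2. The arrays are purely illustrative.
###Code
import numpy as np

# 5x5 single-channel 'image' and the 3x3 filter from the convolution animation above
image = np.array([[1, 1, 1, 0, 0],
                  [0, 1, 1, 1, 0],
                  [0, 0, 1, 1, 1],
                  [0, 0, 1, 1, 0],
                  [0, 1, 1, 0, 0]])
kernel = np.array([[1, 0, 1],
                   [0, 1, 0],
                   [1, 0, 1]])

# Convolution with no padding and stride 1: take the element-wise
# product-and-sum at every position the 3x3 window can occupy
conv = np.zeros((3, 3), dtype=int)
for i in range(3):
    for j in range(3):
        conv[i, j] = np.sum(image[i:i+3, j:j+3] * kernel)
print(conv)

# Max pooling with a 2x2 window and stride 2 halves each spatial dimension
feature_map = np.arange(16).reshape(4, 4)
pooled = feature_map.reshape(2, 2, 2, 2).max(axis=(1, 3))
print(pooled)
###Output
_____no_output_____
###Markdown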
Dilation We can add spaces between each element in the filter. This allows the filter to have a more global view of the picture. ![Picture title](image-20210425-132416.png) CNN Convolution Demo Let's take a closer look at how convolutions are performed on a simple 7x7x3 image with two filters and padding: https://cs231n.github.io/convolutional-networks/ Example of CNN Architecture ![Picture title](image-20210328-165804.png) Here, we start with a 36x36 RGB image. Next, we apply filters of dimensions 11x11x3x9 (recall \[filter height] x \[filter width] x \[input channels] x \[output channels]). The resulting output is 26x26x9 (recall \[output height] x \[output width] x \[output channels]). Notice how the height and width of the output are smaller than the input. Can you reason why? **Hint**: Look at the simple single-channel example from before and see how we go from a 5x5 to a 3x3. From a 26x26x9, we get a 12x12x9 after taking a max pool of stride 2. Can you find the stride lengths of convolutional layer 2 and max pooling layer 2, assuming no padding and dilations? **Hint**: It may be helpful to draw it out. Once we have small enough dimensions (in this case it's a 2x2x3), we can flatten it into a vector (in this case a 12x1 vector) which we can feed into a fully connected network. Example of a More Complex CNN Below, we have the architecture of the VGG model. It takes in a 224x224 RGB input image and returns the predicted image class among 1000 classes. VGG and similarly complex models are exceptionally expensive to train, so do not try to build this and run it unprepared--your computer will be very sad. See if you can understand what this model does. ![Picture title](image-20210427-192650.png) Interactive Visualization Let's look at an interactive visualization of a CNN classifying handwritten digits. Visualization: https://www.cs.ryerson.ca/~aharley/vis/conv/ MNIST example in PyTorch Here, we will implement a CNN to classify handwritten digits from the [MNIST dataset](https://keras.io/api/datasets/mnist/) of grayscale images.
###Code
!wget www.di.ens.fr/~lelarge/MNIST.tar.gz
!tar -zxvf MNIST.tar.gz
import torch
import torchvision
from torchvision import datasets, transforms
from torchvision.datasets import MNIST
# Set some parameters
n_epochs = 3
batch_size_train = 64
batch_size_test = 1000
learning_rate = 0.01
momentum = 0.5
log_interval = 10
random_seed = 1
torch.backends.cudnn.enabled = False
torch.manual_seed(random_seed)
# Define some preprocessing steps
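# (0.1307, 0.3081) below are the commonly quoted global mean and std of the MNIST training images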
transform = transforms.Compose([transforms.ToTensor(), transforms.Normalize((0.1307,), (0.3081,))])
train_loader = torch.utils.data.DataLoader(MNIST(root = './', train=True, download=True, transform=transform),batch_size=batch_size_train)
test_loader = torch.utils.data.DataLoader(MNIST(root = './', train=False, download=True, transform=transform),batch_size=batch_size_test)
###Output
--2021-04-28 02:17:43-- http://www.di.ens.fr/~lelarge/MNIST.tar.gz
Resolving www.di.ens.fr (www.di.ens.fr)... 129.199.99.14
Connecting to www.di.ens.fr (www.di.ens.fr)|129.199.99.14|:80... connected.
HTTP request sent, awaiting response... 302 Found
Location: https://www.di.ens.fr/~lelarge/MNIST.tar.gz [following]
--2021-04-28 02:17:43-- https://www.di.ens.fr/~lelarge/MNIST.tar.gz
Connecting to www.di.ens.fr (www.di.ens.fr)|129.199.99.14|:443... connected.
HTTP request sent, awaiting response... 200 OK
Length: unspecified [application/x-gzip]
Saving to: ‘MNIST.tar.gz.8’
MNIST.tar.gz.8 [ <=> ] 33.20M 13.3MB/s in 2.5s
2021-04-28 02:17:46 (13.3 MB/s) - ‘MNIST.tar.gz.8’ saved [34813078]
MNIST/
MNIST/raw/
MNIST/raw/train-labels-idx1-ubyte
MNIST/raw/t10k-labels-idx1-ubyte.gz
MNIST/raw/t10k-labels-idx1-ubyte
MNIST/raw/t10k-images-idx3-ubyte.gz
MNIST/raw/train-images-idx3-ubyte
MNIST/raw/train-labels-idx1-ubyte.gz
MNIST/raw/t10k-images-idx3-ubyte
MNIST/raw/train-images-idx3-ubyte.gz
MNIST/processed/
MNIST/processed/training.pt
MNIST/processed/test.pt
###Markdown
Let's take a look at the MNIST dataset
###Code
examples = enumerate(test_loader)
batch_idx, (example_data, example_targets) = next(examples)
example_data.shape
###Output
_____no_output_____
###Markdown
**Question: Looking at the cell above, can you interpret those tensor dimensions?****Answer: **
###Code
import matplotlib.pyplot as plt
fig = plt.figure()
for i in range(6):
plt.subplot(2,3,i+1)
plt.tight_layout()
plt.imshow(example_data[i][0], cmap='gray', interpolation='none')
plt.title("Ground Truth: {}".format(example_targets[i]))
plt.xticks([])
plt.yticks([])
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
# Set the network architecture
class Net(nn.Module):
def __init__(self):
super(Net, self).__init__()
self.conv1 = nn.Conv2d(1, 10, kernel_size=5)
self.conv2 = nn.Conv2d(10, 20, kernel_size=5)
self.conv2_drop = nn.Dropout2d()
self.fc1 = nn.Linear(320, 50)
self.fc2 = nn.Linear(50, 10)
def forward(self, x):
x = F.relu(F.max_pool2d(self.conv1(x), 2))
x = F.relu(F.max_pool2d(self.conv2_drop(self.conv2(x)), 2))
x = x.view(-1, 320)
x = F.relu(self.fc1(x))
x = F.dropout(x, training=self.training)
x = self.fc2(x)
return F.log_softmax(x, dim=1)  # specify dim explicitly to avoid the implicit-dim deprecation warning
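# Shape trace for a 28x28 MNIST input (assumed): conv1 (5x5) takes 28 -> 24, max pool /2 -> 12;
# conv2 (5x5) takes 12 -> 8, max pool /2 -> 4. Flattened size = 20 channels * 4 * 4 = 320,
# which is exactly fc1's input dimension above.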
network = Net()
optimizer = optim.SGD(network.parameters(), lr=learning_rate,
momentum=momentum)
train_losses = []
train_counter = []
test_losses = []
test_counter = [i*len(train_loader.dataset) for i in range(n_epochs + 1)]
def train(epoch):
network.train()
for batch_idx, (data, target) in enumerate(train_loader):
optimizer.zero_grad()
output = network(data)
loss = F.nll_loss(output, target)
loss.backward()
optimizer.step()
if batch_idx % log_interval == 0:
print('Train Epoch: {} [{}/{} ({:.0f}%)]\tLoss: {:.6f}'.format(
epoch, batch_idx * len(data), len(train_loader.dataset),
100. * batch_idx / len(train_loader), loss.item()))
train_losses.append(loss.item())
train_counter.append(
(batch_idx*64) + ((epoch-1)*len(train_loader.dataset)))
def test():
network.eval()
test_loss = 0
correct = 0
with torch.no_grad():
for data, target in test_loader:
output = network(data)
test_loss += F.nll_loss(output, target, reduction='sum').item()  # size_average is deprecated; summing matches the division by the dataset length below
pred = output.data.max(1, keepdim=True)[1]
correct += pred.eq(target.data.view_as(pred)).sum()
test_loss /= len(test_loader.dataset)
test_losses.append(test_loss)
print('\nTest set: Avg. loss: {:.4f}, Accuracy: {}/{} ({:.0f}%)\n'.format(
test_loss, correct, len(test_loader.dataset),
100. * correct / len(test_loader.dataset)))
test()
for epoch in range(1, n_epochs + 1):
train(epoch)
test()
fig = plt.figure()
plt.plot(train_counter, train_losses, color='blue')
plt.scatter(test_counter, test_losses, color='red')
plt.legend(['Train Loss', 'Test Loss'], loc='upper right')
plt.xlabel('Number of Training Examples')
plt.ylabel('Negative Log Likelihood')
with torch.no_grad():
output = network(example_data)
fig = plt.figure()
for i in range(6):
plt.subplot(2,3,i+1)
plt.tight_layout()
plt.imshow(example_data[i][0], cmap='gray', interpolation='none')
plt.title("Prediction: {}".format(
output.data.max(1, keepdim=True)[1][i].item()))
plt.xticks([])
plt.yticks([])
###Output
_____no_output_____ |
10. Old Project/Projects/Market Regimes/Функция ChangeFinder.ipynb | ###Markdown
Method 2
###Code
import numpy as np
import matplotlib.pyplot as plt
import seaborn
def generate_normal_time_series(num, minl=50, maxl=1000):
data = np.array([], dtype=np.float64)
partition = np.random.randint(minl, maxl, num)
for p in partition:
mean = np.random.randn()*10
var = np.random.randn()*1
if var < 0:
var = var * -1
tdata = np.random.normal(mean, var, p)
data = np.concatenate((data, tdata))
return data
data = generate_normal_time_series(7, 50, 200)
fig, ax = plt.subplots(figsize=[16, 12])
ax.plot(data)
import cProfile
import bayesian_changepoint_detection.online_changepoint_detection as oncd
from functools import partial
R, maxes = oncd.online_changepoint_detection(df_data['close'][0:100].values, partial(oncd.constant_hazard, 250), oncd.StudentT(0.1, .01, 1, 0))
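# partial(oncd.constant_hazard, 250) encodes a constant hazard rate, i.e. a prior
# belief that change points arrive on average every 250 observations; the
# StudentT(0.1, .01, 1, 0) arguments are the Student-t predictive model's priors
# (alpha, beta, kappa, mu). These particular values follow the library's example
# usage and should be tuned for real data.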
import matplotlib.cm as cm
fig = plt.figure(figsize=[18, 16])  # use a bare figure; the axes are created below with add_subplot
ax = fig.add_subplot(3, 1, 1)
ax.plot(df_data['close'][0:100].values)
ax = fig.add_subplot(3, 1, 2, sharex=ax)
sparsity = 10 # only plot every fifth data for faster display
ax.pcolor(np.array(range(0, len(R[:,0]), sparsity)),
np.array(range(0, len(R[:,0]), sparsity)),
-np.log(R[0:-1:sparsity, 0:-1:sparsity]),
cmap=cm.Greys, vmin=0, vmax=30)
ax = fig.add_subplot(3, 1, 3, sharex=ax)
Nw=10;
ax.plot(R[Nw,Nw:-1])
###Output
/home/brainiac/anaconda3/lib/python3.7/site-packages/ipykernel_launcher.py:10: RuntimeWarning: divide by zero encountered in log
# Remove the CWD from sys.path while we load stuff.
|
LAB/Feature selection/7_Discriminative Feature Selection.ipynb | ###Markdown
**Discriminative Feature Selection** FEATURE SELECTIONFeature selection is the process of automatically or manually selecting the features that contribute most to the prediction variable or output you are interested in. Having irrelevant features in your data can decrease model accuracy and cause the model to learn from noise rather than signal.We are going to work through this with a practical example. The steps are as follows :>1) Import important libraries>2) Importing data>3) Data Preprocessing>>i) Price>>ii) Size>>iii) Installs>4) Discriminative Feature Check>>i) Reviews>>ii) Price **1. Import Important Libraries**
###Code
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
#from google.colab import drive
#drive.mount('/content/drive')
###Output
_____no_output_____
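###Markdown
As a brief aside (not part of the workflow below), feature selection can also be automated. The following is a minimal, hypothetical sketch using scikit-learn's SelectKBest on the toy iris dataset - shown only for contrast with the manual, probability-based check we build later in this notebook.
###Code
# Hypothetical illustration only; this notebook uses a manual crosstab-based check instead
from sklearn.datasets import load_iris
from sklearn.feature_selection import SelectKBest, chi2

X, y = load_iris(return_X_y=True)
X_new = SelectKBest(chi2, k=2).fit_transform(X, y)
print(X_new.shape)  # (150, 2): the two highest-scoring features are kept
###Output
_____no_output_____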
###Markdown
**2. Importing Data**Today we will be working with a Play Store apps dataset with ratings. Link to the dataset --> https://www.kaggle.com/lava18/google-play-store-apps/data
###Code
df = pd.read_csv('googleplaystore.csv',encoding='unicode_escape')
df.head()
###Output
_____no_output_____
###Markdown
**3. Data Preprocessing**Let us have a look at all the datatypes first:
###Code
df.dtypes
###Output
_____no_output_____
###Markdown
We see that all the columns except 'Rating' are of object datatype. We want those columns as numeric as well, since they don't make sense in object form.Let us start with the 'Price' column.**i) Price** When we looked at the head of the dataset, we only saw 0 values in the 'Price' column. Let us have a look at the rows with non-zero data. As the 'Price' column is of object type, we compare the column with '0' instead of 0.
###Code
df[df['Price']!='0'].head()
###Output
_____no_output_____
###Markdown
We see that the 'Price' column has a dollar sign at the beginning for the apps which are not free. Hence we cannot directly convert it to numeric type. We will first have to remove the $ sign so that all values are uniform and can be converted.We use the replace function here to replace the dollar sign with an empty string. Notice that we had to access the column as string type, since the replace function is only applicable to strings.
###Code
df['Price'] = df['Price'].str.replace('$','')
df[df['Price']!='0'].head()
###Output
<ipython-input-6-a2a650a36113>:1: FutureWarning: The default value of regex will change from True to False in a future version. In addition, single character regular expressions will*not* be treated as literal strings when regex=True.
df['Price'] = df['Price'].str.replace('$','')
###Markdown
**ii) Size**Looking at the 'Size' column, we see that the values end with the letter 'M' for mega. We want to convert the size to a numeric value to use in the dataset, hence we will need to remove the letter 'M'.For this, we access the column as string, drop the last character, and save the result back into the 'Size' column.Notice from the head we saw earlier that the 'Size' for row 427 is given as 'Varies with device'. We obviously cannot convert such data to numeric; we will see how to deal with it later.
###Code
df['Size'] = df['Size'].str[:-1]
df.head()
###Output
_____no_output_____
###Markdown
**iii) Installs**Looking at the 'Installs' column, there are 2 major changes we need to make before converting it to numeric: remove the '+' sign from the end of the data and remove the commas.To remove the last character, we apply the same procedure as for the 'Size' column:
###Code
df['Installs'] = df['Installs'].str[:-1]
df.head()
###Output
_____no_output_____
###Markdown
For the removal of commas, we will use the replace function to replace commas with an empty string.The replace function only works on strings, hence we access the values of the series as strings before applying it:
###Code
df['Installs'] = df['Installs'].str.replace(',','')
df.head()
###Output
_____no_output_____
###Markdown
Now, we will finally convert all the data to numeric type using the to_numeric function. Notice that we have used the errors='coerce' parameter. This parameter converts all the data which cannot be converted to numeric into NaN. For example the 'Size' in row 427 cannot be converted to int. Hence it will be converted to NaN. After that we take a look at the datatypes of the columns again.
###Code
df['Reviews'] = pd.to_numeric(df['Reviews'],errors='coerce')
df['Size'] = pd.to_numeric(df['Size'],errors='coerce')
df['Installs'] = pd.to_numeric(df['Installs'],errors='coerce')
df['Price'] = pd.to_numeric(df['Price'],errors='coerce')
df.dtypes
###Output
_____no_output_____
###Markdown
Now we will deal with the NaN values. Let us first have a look at how many NaN values each column of the dataset contains:
###Code
df.isna().sum()
###Output
_____no_output_____
###Markdown
As rating is the output of our dataset, we cannot allow it to be NaN. Hence we will remove all the rows where 'Rating' is NaN:
###Code
df = df[df['Rating'].isna()==False]
df.isna().sum()
###Output
_____no_output_____
###Markdown
This is the final preprocessed dataset that we obtained:
###Code
df.head()
###Output
_____no_output_____
###Markdown
**4. Discriminative Feature Check**Now we will move on to checking how discriminative each feature is, to see which features are good and which are not. We will start with the 'Reviews' column. For our case, we will take rating > 4.3 as a good rating; we take that value because, as the following stats show, the ratings are divided 50:50 at that point.Before we do that, let us have a look at the statistics of the whole table:
###Code
df.describe()
###Output
_____no_output_____
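###Markdown
As a quick sanity check (an added, illustrative cell), we can verify that 4.3 indeed splits the ratings roughly in half:
###Code
# Median rating and share of apps rated above 4.3 - the share should be close to 0.5
print(df['Rating'].median(), (df['Rating'] > 4.3).mean())
###Output
_____no_output_____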
###Markdown
**i) Reviews**We will have to check multiple threshold values to see which of them distinguishes the ratings best. We will start by comparing against the mean of the 'Reviews' column, which is 514098.We will use a new function here known as crosstab. Crosstab gives us a frequency count across 2 columns or conditions.We could also normalize the column results to obtain the conditional probability P(Rating = HIGH | condition).We have also turned on the margins to see the total frequency under each condition.
###Code
pd.crosstab(df['Rating']>4.3,df['Reviews']>514098,rownames=['Ratings>4.3'],colnames=['Reviews>514098'],margins= True)
###Output
_____no_output_____
###Markdown
We see that the number of apps with Reviews > 514098 is very small (close to 10%).Hence it is preferable to take the 50th percentile point rather than the mean as the pivot point. The 50th percentile is 5930 reviews in this case, so let us take a look at that:
###Code
pd.crosstab(df['Rating']>4.3,df['Reviews']>5930,rownames=['Ratings>4.3'],colnames=['Reviews>5930'],margins= True)
###Output
_____no_output_____
###Markdown
Now we see that the number of apps is equal for both high and low reviews. So we will start from the 50th percentile point from now on. Let us now look at the conditional probability:
###Code
pd.crosstab(df['Rating']>4.3,df['Reviews']>5930,rownames=['Ratings>4.3'],colnames=['Reviews>5930'],margins= True,normalize='columns')
###Output
_____no_output_____
###Markdown
There is not much difference between P(Ratings=HIGH | Reviews<=5930) and P(Ratings=HIGH | Reviews>5930), so this is a bad split.Let us increase the value of the pivot for reviews to 80000 and check again. We don't need to check whether the percentage is too low, as we are almost at the 75th percentile mark.
###Code
pd.crosstab(df['Rating']>4.3,df['Reviews']>80000,rownames=['Ratings>4.3'],colnames=['Reviews>80000'],margins= True,normalize='columns')
###Output
_____no_output_____
###Markdown
Now we see that there is a good difference in the probabilities, and hence Reviews>80000 is a good feature. **ii) Price**We will do the same for the 'Price' column to find the most distinctive split. We see that in this case even the 75th percentile mark is still 0. Hence we will classify the data as free or not:
###Code
pd.crosstab(df['Rating']>4.3,df['Price']==0,rownames=['Ratings>4.3'],colnames=['Price=$0'],margins= True)
###Output
_____no_output_____
###Markdown
This shows us that it is very difficult to use Price as a feature; it is a doubtful one. If we still want to force it in as a feature, let us look at the conditional probability:
###Code
pd.crosstab(df['Rating']>4.3,df['Price']==0,rownames=['Ratings>4.3'],colnames=['Price=$0'],margins= True,normalize='columns')
###Output
_____no_output_____ |
notebooks/180807 - Oahu Qualification Residual Analysis.ipynb | ###Markdown
Load Data
###Code
persistence_ssa_results = pd.read_csv(results_path + "rolling_cv_oahu_residual_persistence.csv")
sarima_ssa_results = pd.read_csv(results_path + "rolling_cv_oahu_residual_sarima.csv")  # required by the SARIMA filtering and univariate table below
var_ssa_results = pd.read_csv(results_path + "rolling_cv_oahu_residual_var.csv")
hofts_ssa_results = pd.read_csv(results_path + "rolling_cv_oahu_residual_hofts.csv")
cvfts_ssa_results = pd.read_csv(results_path + "rolling_cv_oahu_residual_cvfts.csv")
cmvfts_ssa_results = pd.read_csv(results_path + "rolling_cv_oahu_residual_cmvfts.csv")
lstm_multi_ssa_results = pd.read_csv(results_path + "rolling_cv_oahu_residual_lstm_multi.csv")
lstm_uni_ssa_results = pd.read_csv(results_path + "rolling_cv_oahu_residual_lstm_uni.csv")
mlp_multi_ssa_results = pd.read_csv(results_path + "rolling_cv_oahu_residual_mlp_multi.csv")
mlp_uni_ssa_results = pd.read_csv(results_path + "rolling_cv_oahu_residual_mlp_uni.csv")
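# The CVFTS metrics were saved as complex-number strings; the next two blocks keep only the real part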
RMSE_real = []
for i in cvfts_ssa_results.RMSE:
comp = complex(i)
RMSE_real.append(comp.real)
cvfts_ssa_results['RMSE'] = RMSE_real
U_real = []
for i in cvfts_ssa_results.U:
comp = complex(i)
U_real.append(comp.real)
cvfts_ssa_results['U'] = U_real
## TODO: confirm why 5 splits give larger errors in SARIMA and CMVFTS
sarima_ssa_results = sarima_ssa_results[sarima_ssa_results.RMSE < 500]
cmvfts_ssa_results = cmvfts_ssa_results[cmvfts_ssa_results.RMSE < 500]
def createBoxplot(filename, data, xticklabels, ylabel):
# Create a figure instance
fig = plt.figure(1, figsize=(9, 6))
# Create an axes instance
ax = fig.add_subplot(111)
# Create the boxplot
bp = ax.boxplot(data, patch_artist=True)
## change outline color, fill color and linewidth of the boxes
for box in bp['boxes']:
# change outline color
box.set( color='#7570b3', linewidth=2)
# change fill color
box.set( facecolor = '#1b9e77' )
## change color and linewidth of the whiskers
for whisker in bp['whiskers']:
whisker.set(color='#7570b3', linewidth=2)
## change color and linewidth of the caps
for cap in bp['caps']:
cap.set(color='#7570b3', linewidth=2)
## change color and linewidth of the medians
for median in bp['medians']:
median.set(color='#b2df8a', linewidth=2)
## change the style of fliers and their fill
for flier in bp['fliers']:
flier.set(marker='o', color='#e7298a', alpha=0.5)
## Custom x-axis labels
ax.set_xticklabels(xticklabels)
ax.set_ylabel(ylabel)
plt.show()
fig.savefig(filename, bbox_inches='tight')
###Output
_____no_output_____
###Markdown
Boxplot OAHU Residual Multivariate
###Code
metric = 'RMSE'
multi_data = [persistence_ssa_results[metric], var_ssa_results[metric], cmvfts_ssa_results[metric], lstm_multi_ssa_results[metric], mlp_multi_ssa_results[metric]]
xticks = ['Persistence','VAR','CMVFTS','LSTM_MULTI','MLP_MULTI']
ylab = 'RMSE'
createBoxplot("boxplot_rmse_oahu_residual_multi", multi_data, xticks, ylab)
metric = 'SMAPE'
multi_data = [persistence_ssa_results[metric], var_ssa_results[metric], cmvfts_ssa_results[metric], lstm_multi_ssa_results[metric], mlp_multi_ssa_results[metric]]
xticks = ['Persistence','VAR','CMVFTS','LSTM_MULTI','MLP_MULTI']
ylab = 'SMAPE'
createBoxplot("boxplot_smape_oahu_residual_multi", multi_data, xticks, ylab)
metric = 'U'
multi_data = [persistence_ssa_results[metric], var_ssa_results[metric], cmvfts_ssa_results[metric], lstm_multi_ssa_results[metric], mlp_multi_ssa_results[metric]]
xticks = ['Persistence','VAR','CMVFTS','LSTM_MULTI','MLP_MULTI']
ylab = 'U Statistic'
createBoxplot("boxplot_u_oahu_residual_multi", multi_data, xticks, ylab)
###Output
_____no_output_____
###Markdown
Improvement table Multivariate
###Code
def improvement(metric_model, metric_persistence):
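    # relative reduction in mean error with respect to the persistence baseline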
return (1 - (np.mean(metric_model) / np.mean(metric_persistence)))
index = ['Persistence','VAR','CMVFTS','LSTM_MULTI','MLP_MULTI']
columns = ['imp(RMSE)', 'imp(SMAPE)', 'imp(U)']
metrics = ['RMSE', 'SMAPE', 'U']
imp_df = pd.DataFrame(columns=columns, index=index)
for metric in metrics:
    imp_prst = improvement(persistence_ssa_results[metric], persistence_ssa_results[metric])
    imp_var = improvement(var_ssa_results[metric], persistence_ssa_results[metric])
    imp_cmvfts = improvement(cmvfts_ssa_results[metric], persistence_ssa_results[metric])
    imp_lstm_multi = improvement(lstm_multi_ssa_results[metric], persistence_ssa_results[metric])
    imp_mlp_multi = improvement(mlp_multi_ssa_results[metric], persistence_ssa_results[metric])
    imp_df['imp('+metric+')'] = [imp_prst, imp_var, imp_cmvfts, imp_lstm_multi, imp_mlp_multi]
print(imp_df.to_latex())
###Output
_____no_output_____
###Markdown
Boxplot OAHU Residual Univariate
###Code
metric = 'RMSE'
#uni_data = [persistence_ssa_results[metric], sarima_ssa_results[metric], hofts_ssa_results[metric], cvfts_ssa_results[metric], lstm_uni_ssa_results[metric], mlp_uni_ssa_results[metric]]
#xticks = ['Persistence', 'SARIMA', 'HOFTS','CVFTS','LSTM_UNI','MLP_UNI']
uni_data = [persistence_ssa_results[metric], hofts_ssa_results[metric], cvfts_ssa_results[metric], lstm_uni_ssa_results[metric], mlp_uni_ssa_results[metric]]
xticks = ['Persistence', 'HOFTS','CVFTS','LSTM_UNI','MLP_UNI']
ylab = 'RMSE'
createBoxplot("boxplot_rmse_oahu_residual_uni", uni_data, xticks, ylab)
metric = 'SMAPE'
#uni_data = [persistence_ssa_results[metric], sarima_ssa_results[metric], hofts_ssa_results[metric], cvfts_ssa_results[metric], lstm_uni_ssa_results[metric], mlp_uni_ssa_results[metric]]
#xticks = ['Persistence', 'SARIMA', 'HOFTS','CVFTS','LSTM_UNI','MLP_UNI']
uni_data = [persistence_ssa_results[metric], hofts_ssa_results[metric], cvfts_ssa_results[metric], lstm_uni_ssa_results[metric], mlp_uni_ssa_results[metric]]
xticks = ['Persistence', 'HOFTS','CVFTS','LSTM_UNI','MLP_UNI']
ylab = 'SMAPE'
createBoxplot("boxplot_smape_oahu_residual_uni", uni_data, xticks, ylab)
metric = 'U'
#uni_data = [persistence_ssa_results[metric], sarima_ssa_results[metric], hofts_ssa_results[metric], cvfts_ssa_results[metric], lstm_uni_ssa_results[metric], mlp_uni_ssa_results[metric]]
#xticks = ['Persistence', 'SARIMA', 'HOFTS','CVFTS','LSTM_UNI','MLP_UNI']
uni_data = [persistence_ssa_results[metric], hofts_ssa_results[metric], cvfts_ssa_results[metric], lstm_uni_ssa_results[metric], mlp_uni_ssa_results[metric]]
xticks = ['Persistence', 'HOFTS','CVFTS','LSTM_UNI','MLP_UNI']
ylab = 'U Statistic'
createBoxplot("boxplot_u_oahu_residual_uni", uni_data, xticks, ylab)
###Output
_____no_output_____
###Markdown
Improvement Table Univariate
###Code
index = ['Persistence', 'SARIMA', 'HOFTS','CVFTS','LSTM_UNI','MLP_UNI']
columns = ['imp(RMSE)', 'imp(SMAPE)', 'imp(U)']
metrics = ['RMSE', 'SMAPE', 'U']
imp_df = pd.DataFrame(columns=columns, index=index)
for metric in metrics:
imp_prst = improvement(persistence_ssa_results[metric], persistence_ssa_results[metric])
imp_sarima = improvement(sarima_ssa_results[metric], persistence_ssa_results[metric])
imp_hofts = improvement(hofts_ssa_results[metric], persistence_ssa_results[metric])
imp_cvfts = improvement(cvfts_ssa_results[metric], persistence_ssa_results[metric])
imp_lstm_uni = improvement(lstm_uni_ssa_results[metric], persistence_ssa_results[metric])
imp_mlp_uni = improvement(mlp_uni_ssa_results[metric], persistence_ssa_results[metric])
imp_df['imp('+metric+')'] = [imp_prst, imp_sarima, imp_hofts, imp_cvfts, imp_lstm_uni, imp_mlp_uni]
print(imp_df.to_latex())
###Output
_____no_output_____ |
docs/beta/notebooks/Grammars2.ipynb | ###Markdown
Fuzzing with GrammarsIn the chapter on ["Mutation-Based Fuzzing"](MutationFuzzer.ipynb), we have seen how to use extra hints – such as sample input files – to speed up test generation. In this chapter, we take this idea one step further, by providing a _specification_ of the legal inputs to a program. These _grammars_ allow for very effective and efficient testing, as we will see in this chapter. **Prerequisites*** You should know how basic fuzzing works, e.g. from the [Chapter introducing fuzzing](Fuzzer.ipynb).* Knowledge on [mutation-based fuzzing](MutationFuzzer.ipynb) and [coverage](Coverage.ipynb) is _not_ required yet, but still recommended. Input LanguagesAll possible behaviors of a program can be triggered by its input. "Input" here can be a wide range of possible sources: We are talking about data read from files, from the environment, or over the network, data input by the user, or data acquired from interaction with other resources. The set of all these inputs determines how the program will behave – including its failures. When testing, it is thus very helpful to think about possible input sources, how to get them under control, and _how to systematically test them_.For the sake of simplicity, we will assume for now that the program has only one source of inputs; this is the same assumption we have been using in the previous chapters, too. The set of valid inputs to a program is called a _language_. Languages range from the simple to the complex: the CSV language denotes the set of valid comma-separated inputs, whereas the Python language denotes the set of valid Python programs. We commonly separate data languages and programming languages, although any program can also be treated as input data (say, to a compiler). The [Wikipedia page on file formats](https://en.wikipedia.org/wiki/List_of_file_formats) lists more than 1,000 different file formats, each of which is its own language. Grammars Rules and ExpansionsTo formally specify input languages, _grammars_ are among the most popular (and best understood) formalisms. A grammar consists of a _start symbol_ and a set of _rules_ which indicate how the start symbol (and other symbols) can be expanded. As an example, consider the following grammar, denoting a sequence of two digits:```<start> ::= <digit><digit> <digit> ::= 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9```To read such a grammar, start with the starting symbol (`<start>`). A rule `<A> ::= <B>` means that the symbol on the left side (`<A>`) can be replaced by the string on the right side (`<B>`). In the above grammar, `<start>` would be replaced by `<digit><digit>`.In this string again, `<digit>` would be replaced by the string on the right side of the `<digit>` rule. The special operator `|` denotes _alternatives_, meaning that any of the digits can be chosen for an expansion. Each `<digit>` thus would be expanded into one of the given digits, eventually yielding a string between `00` and `99`. There are no further expansions for `0` to `9`, so we are all set.The interesting thing about grammars is that they can be _recursive_. That is, expansions can make use of symbols expanded earlier – which would then be expanded again. As an example, consider a grammar that describes integers:```<start> ::= <integer> <integer> ::= <digit> | <digit><integer> <digit> ::= 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9```Here, an `<integer>` is either a single digit, or a digit followed by another integer.
The number `1234` thus would be represented as a single digit `1`, followed by the integer `234`, which in turn is a digit `2`, followed by the integer `34`.If we wanted to express that an integer can be preceded by a sign (`+` or `-`), we would write the grammar as```<start> ::= <number> <number> ::= <integer> | +<integer> | -<integer> <integer> ::= <digit> | <digit><integer> <digit> ::= 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9```These rules formally define the language: Anything that can be derived from the start symbol is part of the language; anything that cannot is not. Arithmetic ExpressionsLet us expand our grammar to cover full _arithmetic expressions_ – a poster child example for a grammar. We see that an expression (`<expr>`) is either a sum, or a difference, or a term; a term is either a product or a division, or a factor; and a factor is either a number or a parenthesized expression. Almost all rules can have recursion, and thus allow arbitrarily complex expressions such as `(1 + 2) * (3.4 / 5.6 - 789)`.```<start> ::= <expr> <expr> ::= <term> + <expr> | <term> - <expr> | <term> <term> ::= <factor> * <term> | <factor> / <term> | <factor> <factor> ::= +<factor> | -<factor> | (<expr>) | <integer> | <integer>.<integer> <integer> ::= <digit><integer> | <digit> <digit> ::= 0 | 1 | 2 | 3 | 4 | 5 | 6 | 7 | 8 | 9```In such a grammar, if we start with `<start>` and then expand one symbol after another, randomly choosing alternatives, we can quickly produce one valid arithmetic expression after another. Such _grammar fuzzing_ is highly effective when it comes to producing complex inputs, and this is what we will implement in this chapter. Representing Grammars in PythonOur first step in building a grammar fuzzer is to find an appropriate format for grammars. To make the writing of grammars as simple as possible, we use a format that is mostly based on strings. Our grammars in Python take the form of a _mapping_ between symbol names and expansions, where expansions are _lists_ of alternatives. A one-rule grammar for digits thus takes the form
###Code
import fuzzingbook_utils
DIGIT_GRAMMAR = {
"<start>":
["0", "1", "2", "3", "4", "5", "6", "7", "8", "9"]
}
###Output
_____no_output_____
###Markdown
whereas the full grammar for arithmetic expressions looks like this:
###Code
EXPR_GRAMMAR = {
"<start>":
["<expr>"],
"<expr>":
["<term> + <expr>", "<term> - <expr>", "<term>"],
"<term>":
["<factor> * <term>", "<factor> / <term>", "<factor>"],
"<factor>":
["+<factor>",
"-<factor>",
"(<expr>)",
"<integer>",
"<integer>.<integer>"],
"<integer>":
["<digit><integer>", "<digit>"],
"<digit>":
["0", "1", "2", "3", "4", "5", "6", "7", "8", "9"]
}
###Output
_____no_output_____
###Markdown
In the grammar, we can access any rule by its symbol...
###Code
EXPR_GRAMMAR["<digit>"]
###Output
_____no_output_____
###Markdown
...and we can check whether a symbol is in the grammar:
###Code
"<identifier>" in EXPR_GRAMMAR
###Output
_____no_output_____
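###Markdown
As a small added illustration, we can derive a string from this grammar by hand, replacing one nonterminal at a time - exactly the process the fuzzer below automates:
###Code
# Manual derivation: <start> -> <expr> -> <term> -> <factor> -> <integer> -> <digit> -> 7
term = "<start>"
for symbol, expansion in [("<start>", "<expr>"), ("<expr>", "<term>"),
                          ("<term>", "<factor>"), ("<factor>", "<integer>"),
                          ("<integer>", "<digit>"), ("<digit>", "7")]:
    term = term.replace(symbol, expansion, 1)
term
###Output
_____no_output_____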
###Markdown
Some Definitions We assume that the canonical start symbol is `<start>`:
###Code
START_SYMBOL = "<start>"
###Output
_____no_output_____
###Markdown
The handy `nonterminals()` function extracts the list of nonterminal symbols (i.e., anything between `<` and `>`) from an expansion.
###Code
import re
# As a symbol, we can have anything between <...> except spaces.
RE_NONTERMINAL = re.compile(r'(<[^<> ]*>)')
def nonterminals(expansion):
# In later chapters, we allow expansions to be tuples,
# with the expansion being the first element
if isinstance(expansion, tuple):
expansion = expansion[0]
return re.findall(RE_NONTERMINAL, expansion)
assert nonterminals("<term> * <factor>") == ["<term>", "<factor>"]
assert nonterminals("<digit><integer>") == ["<digit>", "<integer>"]
assert nonterminals("1 < 3 > 2") == []
assert nonterminals("1 <3> 2") == ["<3>"]
assert nonterminals("1 + 2") == []
assert nonterminals(("<1>", {'option': 'value'})) == ["<1>"]
###Output
_____no_output_____
###Markdown
Likewise, `is_nonterminal()` checks whether some symbol is a nonterminal:
###Code
def is_nonterminal(s):
return re.match(RE_NONTERMINAL, s)
assert is_nonterminal("<abc>")
assert not is_nonterminal("+")
###Output
_____no_output_____
###Markdown
A Simple Grammar FuzzerLet us now put the above grammars to use. We will build a very simple grammar fuzzer that starts with a start symbol (`"<start>"`) and then keeps on expanding it. To avoid expansion to infinite inputs, we place a limit (`max_nonterminals`) on the number of nonterminal symbols. Furthermore, to avoid being stuck in a situation where we cannot reduce the number of symbols any further, we also limit the total number of expansion trials.
###Code
import random
class ExpansionError(Exception):
pass
def simple_grammar_fuzzer(grammar, start_symbol=START_SYMBOL,
max_nonterminals=10, max_expansion_trials=100, log=False):
term = start_symbol
expansion_trials = 0
while len(nonterminals(term)) > 0:
symbol_to_expand = random.choice(nonterminals(term))
expansion = random.choice(grammar[symbol_to_expand])
new_term = term.replace(symbol_to_expand, expansion, 1)
if len(nonterminals(new_term)) < max_nonterminals:
term = new_term
if log:
print("%-40s" % (symbol_to_expand + " -> " + expansion), term)
expansion_trials = 0
else:
expansion_trials += 1
if expansion_trials >= max_expansion_trials:
raise ExpansionError("Cannot expand " + repr(term))
return term
###Output
_____no_output_____
###Markdown
Let us see how this simple grammar fuzzer obtains an arithmetic expression from the start symbol:
###Code
simple_grammar_fuzzer(grammar=EXPR_GRAMMAR, max_nonterminals=3, log=True)
for i in range(10):
print(simple_grammar_fuzzer(grammar=EXPR_GRAMMAR, max_nonterminals=5))
###Output
+9 / 7 / -(+4) - (+(8)) + -++((-+1 - 5) / 1) / +2 * (+9 + (+(+(+1 + +3420 / -33) * +-6 / 2)))
(8) * (+3 * (19) - ++--+-+-(4 * +93) + 69.28024) - 1.0 - 8
74.5 * (((-+(-0 * +(----+-(+((+-930 / 1))))))) / (+-(-+++-(+(-1 + 4)))) * 8 * -(--5.52) * 5)
0 / (+0) / 3 - --(+((+(((0)))))) - +--5 / (--+-+(2) / +7) + +((5.6 + (+8.7 + 7.6 / ((((-(+-0))) + +-9) * ------((-++7.5)) * +--+-((47 / 34 * ((1) / 4 + +-+(-+9) + 9.1))) - +67))) * 8)
(+9) + +-6 / +(--3 * +(20)) + ++---(+((+--+(8) - (-6) + ((((9) + 35 - -+5.7) * (-(961.4 - -4 * +5.994))))) - ((-+8 * +8)))) / -7
+++--((-((++0)) + (0 * 9 + (63) / 9))) * +(-6) / 9 - -(+-81.7)
9 * --70 - 1 - 56
2 / 5 * +8 / --+--+--+--+-((-+((-2 / (-+4)) / 4 * 9692.2) / -9 + 5 * 62)) - +9 - -8 - 7
8.7 - 25.3 + -20.8
+-+(+(8)) * (1.4) + --(+---1.4 * -+5.901 * -+(++7 + -+5 - --7.0 - (6 + 6 * +24) * -3)) - +8
###Markdown
\todo{Discuss.} Note that this fuzzer is rather inefficient due to the large number of search and replace operations. On the other hand, the implementation is straightforward and does the job. For this chapter, we'll stick to it; in the [next chapter](GrammarFuzzer.ipynb), we'll show how to build a more efficient one. Some Grammars With grammars, we can easily specify the format for several of the examples we discussed earlier. The above arithmetic expressions, for instance, can be directly sent into `bc` (or any other program that takes arithmetic expressions). Let us create some more grammars. Here's one for `cgi_decode()`:
###Code
CGI_GRAMMAR = {
"<start>":
["<string>"],
"<string>":
["<letter>", "<letter><string>"],
"<letter>":
["<plus>", "<percent>", "<other>"],
"<plus>":
["+"],
"<percent>":
["%<hexdigit><hexdigit>"],
"<hexdigit>":
["0", "1", "2", "3", "4", "5", "6", "7",
"8", "9", "a", "b", "c", "d", "e", "f"],
"<other>": # Actually, could be _all_ letters
["0", "1", "2", "3", "4", "5", "a", "b", "c", "d", "e", "-", "_"],
}
for i in range(10):
print(simple_grammar_fuzzer(grammar=CGI_GRAMMAR, max_nonterminals=10))
###Output
%0e_++++ae
+
%bf
2
5
1
+
+%98
++
+%5b%51
###Markdown
Or a URL grammar:
###Code
URL_GRAMMAR = {
"<start>":
["<call>"],
"<call>":
["<url>"],
"<url>":
["<scheme>://<authority><path><query>"],
"<scheme>":
["http", "https", "ftp", "ftps"],
"<authority>":
["<host>", "<host>:<port>", "<userinfo>@<host>", "<userinfo>@<host>:<port>"],
"<host>": # Just a few
["cispa.saarland", "www.google.com", "fuzzingbook.com"],
"<port>":
["80", "8080", "<nat>"],
"<nat>":
["<digit>", "<digit><digit>"],
"<digit>":
["0", "1", "2", "3", "4", "5", "6", "7", "8", "9"],
"<userinfo>": # Just one
["user:password"],
"<path>": # Just a few
["", "/", "/<id>"],
"<id>": # Just a few
["abc", "def", "x<digit><digit>"],
"<query>":
["", "?<params>"],
"<params>":
["<param>", "<param>&<params>"],
"<param>": # Just a few
["<id>=<id>", "<id>=<nat>"],
}
for i in range(10):
print(simple_grammar_fuzzer(grammar=URL_GRAMMAR, max_nonterminals=10))
###Output
ftp://www.google.com/?def=39
ftps://user:[email protected]:8080/abc
ftp://fuzzingbook.com:8080/abc?x25=abc
https://user:[email protected]/abc
https://fuzzingbook.com:78/
https://user:[email protected]/?abc=x45&x91=37&def=92&x78=def&abc=16&x43=x47
http://user:[email protected]:80
ftps://fuzzingbook.com:8?abc=50
ftps://cispa.saarland/abc
https://user:[email protected]:80/
###Markdown
Hatching GrammarsSince grammars are represented as strings, it is fairly easy to introduce errors. So let us introduce a helper function that checks a grammar for consistency.The helper function `is_valid_grammar()` iterates over a grammar to check whether all used symbols are defined, and vice versa, which is very useful for debugging. You don't have to delve into details here, but as always, it is important to get the input data straight before we make use of it.
###Code
import sys
def is_valid_grammar(grammar, start_symbol=START_SYMBOL):
used_nonterminals = set([start_symbol])
defined_nonterminals = set()
for defined_nonterminal in grammar:
defined_nonterminals.add(defined_nonterminal)
expansions = grammar[defined_nonterminal]
if not isinstance(expansions, list):
print(repr(defined_nonterminal) + ": expansion is not a list",
file=sys.stderr)
return False
if len(expansions) == 0:
print(repr(defined_nonterminal) + ": expansion list empty",
file=sys.stderr)
return False
for expansion in expansions:
if isinstance(expansion, tuple):
expansion = expansion[0]
if not isinstance(expansion, str):
print(repr(defined_nonterminal) + ": "
+ repr(expansion) + ": not a string",
file=sys.stderr)
return False
for used_nonterminal in nonterminals(expansion):
used_nonterminals.add(used_nonterminal)
for unused_nonterminal in defined_nonterminals - used_nonterminals:
print(repr(unused_nonterminal) + ": defined, but not used",
file=sys.stderr)
for undefined_nonterminal in used_nonterminals - defined_nonterminals:
print(repr(undefined_nonterminal) + ": used, but not defined",
file=sys.stderr)
return used_nonterminals == defined_nonterminals
###Output
_____no_output_____
###Markdown
Our grammars defined above pass the test:
###Code
assert is_valid_grammar(EXPR_GRAMMAR)
assert is_valid_grammar(CGI_GRAMMAR)
assert is_valid_grammar(URL_GRAMMAR)
###Output
_____no_output_____
###Markdown
But these ones don't:
###Code
assert not is_valid_grammar({"<start>": ["<x>"], "<y>": ["1"]})
assert not is_valid_grammar({"<start>": "123"})
assert not is_valid_grammar({"<start>": []})
assert not is_valid_grammar({"<start>": [1, 2, 3]})
###Output
'<start>': 1: not a string
|
notebooks/xgboost/train-iris.ipynb | ###Markdown
Train with xgboostdescription: train xgboost model on iris data
###Code
from azureml.core import Workspace
ws = Workspace.from_config()
ws
import git
from pathlib import Path
# get root of git repo
prefix = Path(git.Repo(".", search_parent_directories=True).working_tree_dir)
# training script
script_dir = prefix.joinpath("code", "models", "xgboost", "iris")
script_name = "train.py"
# environment file
environment_file = prefix.joinpath("environments", "xgboost-example.txt")
# azure ml settings
environment_name = "xgboost-iris-example"
experiment_name = "xgboost-iris-example"
compute_target = "cpu-cluster"
print(open(script_dir.joinpath(script_name)).read())
from azureml.core import ScriptRunConfig, Experiment, Environment
env = Environment.from_pip_requirements(environment_name, environment_file)
src = ScriptRunConfig(
source_directory=script_dir,
script=script_name,
environment=env,
compute_target=compute_target,
)
run = Experiment(ws, experiment_name).submit(src)
run
from azureml.widgets import RunDetails
RunDetails(run).show()
run.wait_for_completion(show_output=True)
###Output
_____no_output_____ |
docs/python/seaborn/KDEplot.ipynb | ###Markdown
---title: "KDEplot"author: "Aavinash"date: 2020-09-04description: "-"type: technical_notedraft: false---
###Code
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns

rng = np.random.RandomState(0)
x = np.linspace(0, 10, 500)
y = np.cumsum(rng.randn(500, 6), 0)
plt.plot(x, y)
plt.legend('ABCDEF', ncol=2, loc='upper left');

# 'data' was undefined in the original cell; a small assumed sample is
# created here so that the KDE loop below is runnable
data = {col: rng.randn(1000) + i for i, col in enumerate('xy')}
for col in 'xy':
    sns.kdeplot(data[col], shade=True)
###Output
_____no_output_____ |
Archieve/4.DBScan Clusters with Doc2Word_v2.0.ipynb | ###Markdown
1.1 Word Embedding
###Code
## Word Embeddings Functions
## Generate the tagged documents (tagging based on the category column)
def create_tagged_document(list_of_list_of_words):
for i, list_of_words in enumerate(list_of_list_of_words):
yield gensim.models.doc2vec.TaggedDocument(list_of_words, [i])
## Generate the tagged documents (each record in single tag )
def create_tagged_document_based_on_tags(list_of_list_of_words, tags):
for i in range(len(list_of_list_of_words)):
yield gensim.models.doc2vec.TaggedDocument(list_of_list_of_words[i], [tags[i]])
## Generate output using the word embedding model prediction - takes long time to regenerate
def vec_for_learning(model, tagged_docs):
sents = tagged_docs#.values
targets, regressors = zip(*[(doc.tags[0], model.infer_vector(doc.words, steps=20)) for doc in sents])
return targets, regressors
## creating a tagged document
DescDict=[[x for x in str(i).split()] for i in df.PreProcessedDescription]
tagged_value_tags = list(create_tagged_document_based_on_tags(DescDict, df.Category.tolist()))
tagged_value = list(create_tagged_document(DescDict))
print(str(datetime.datetime.now()),'Started')
# Init the Doc2Vec model
model = gensim.models.doc2vec.Doc2Vec(vector_size=50, min_count=5, epochs=40, alpha = 0.02, dm=1, workers=multiprocessing.cpu_count())
#### Hyper parameter ####
## vector_size – Dimensionality of the feature vectors.
## If dm=1, ‘distributed memory’ (PV-DM) (CBOW - similar to continuous bag-of-words)
## alpha - The initial learning rate.
## min_count – Ignores all words with total frequency lower than this.
# Build the Vocabulary
model.build_vocab(tagged_value)
model.train(tagged_value, total_examples=len(tagged_value), epochs=40)
print(str(datetime.datetime.now()),'Completed')
## Validating the model response for random words
modelchecked=model
target_word='environment'
print('target_word: %r model: %s similar words:' % (target_word, modelchecked))
for i, (word, sim) in enumerate(modelchecked.wv.most_similar(target_word, topn=20), 1):
print(' %d. %.2f %r' % (i, sim, word))
###Output
target_word: 'environment' model: Doc2Vec(dm/m,d50,n5,w5,mc5,s0.001,t4) similar words:
1. 0.65 'situation'
2. 0.65 'system'
3. 0.63 'constantly'
4. 0.62 'kind'
5. 0.61 'way'
6. 0.60 'habitat'
7. 0.59 'reservoir'
8. 0.58 'contexts'
9. 0.58 'resource'
10. 0.58 'climatically'
11. 0.57 'scenario'
12. 0.57 'continuously'
13. 0.57 'setting'
14. 0.57 'potentially'
15. 0.55 'environments'
16. 0.55 'object'
17. 0.55 'community'
18. 0.55 'circumstance'
19. 0.55 'obviously'
20. 0.55 'area'
###Markdown
1.2. PCA
###Code
## PCA - reducing the dimension
ps=10
pcamodel = PCA(n_components=ps)
pca=pcamodel.fit_transform(model.docvecs.vectors_docs)
print('PCA components:', ps, 'variance coverage:', np.max(pcamodel.explained_variance_ratio_.cumsum())*100)
dummies=pd.get_dummies(df['Category'])
merged_data=pd.concat([df,dummies], axis=1,ignore_index=False)
merged_data=pd.concat([merged_data,pd.DataFrame(pca)], axis=1,ignore_index=False)
merged_data=merged_data[pd.isnull(merged_data["Category"])==False]
merged_data['DBScanCluster']=0
###Output
_____no_output_____
###Markdown
2. DBScan
###Code
### DBSCAN - Density-Based Spatial Clustering of Applications with Noise.
# Finds core samples of high density and expands clusters from them.
FeatureCols=list(range(ps))
for cat in merged_data.Category.unique():
print(str(datetime.datetime.now()),'Started')
CategoricalDS= merged_data[FeatureCols][merged_data.Category==cat]
clusterer = DBSCAN(eps=2.6, min_samples=5, n_jobs=4)
#### Hyper parameter ####
# eps - The maximum distance between two samples for one to be considered as in the neighborhood of the other.
# min_samples -The number of samples (or total weight) in a neighborhood for a point to be considered as a core point
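    # DBSCAN labels noise points as -1; later cells treat cluster 0 as the "core" data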
preds = clusterer.fit_predict(CategoricalDS)
merged_data.loc[merged_data.Category==cat,'DBScanCluster']=preds
print('******'+cat+'******')
print(pd.Series(preds).value_counts())
score = silhouette_score(CategoricalDS, preds, metric='euclidean')
print('silhouette score:',score)
print(str(datetime.datetime.now()),'Completed')
print('')
merged_data['DBScanCluster'].value_counts()
## Reseting the index, converting category to int for supervised learning
def CattoID(input_cat):
if(input_cat=='Engineering Sciences'):
return 0
elif(input_cat=='Humanities and Social Sciences'):
return 1
elif(input_cat=='Natural Sciences'):
return 2
elif(input_cat=='Life Sciences'):
return 3
else :
return -1
merged_data=merged_data.reset_index()[merged_data.columns[0:]]
merged_data['CategoryConv']=merged_data.Category.apply(CattoID)
merged_data['CategoryConv']=merged_data['CategoryConv'].astype('int')
###Output
_____no_output_____
###Markdown
3. Supervised learning
###Code
Features=merged_data.columns[16:len(merged_data.columns)-2] #list(range(500))
merged_data[Features]=MinMaxScaler().fit_transform(merged_data[Features])
OP_Feature='CategoryConv'
## Training & test data are split based on the DBScanCluster result; outlier points are treated as test data for re-evaluation.
X_Training_DS=merged_data[Features][merged_data.DBScanCluster==0]
y_Training_DS=merged_data[OP_Feature][merged_data.DBScanCluster==0]
X_Test_DS=merged_data[Features][merged_data.DBScanCluster!=0]
y_Test_DS=merged_data[OP_Feature][merged_data.DBScanCluster!=0]
X_train, X_test, y_train, y_test = train_test_split(X_Training_DS,y_Training_DS, test_size=0.2, random_state=0)
###Output
_____no_output_____
###Markdown
3.1 NaiveBayes
###Code
modelNB = MultinomialNB(alpha=1)
#### Hyper parameter ####
# alpha - Additive (Laplace/Lidstone) smoothing parameter (0 for no smoothing).
modelNB.fit(X_train, y_train)
nfolds=5
scores=cross_val_score(modelNB, X_Training_DS,y_Training_DS, cv=nfolds, scoring="accuracy")
pd.Series(scores).plot(kind="box", label="Accuracy");
plt.title('Accuracy_score from '+str(nfolds)+' Folds (Accuracy) for '+str(round(pd.Series(scores).mean(), 2)))
y_pred = modelNB.predict(X_test)
print('Accuracy Score : '+str(accuracy_score(y_test,y_pred )*100))
###Output
Accuracy Score : 34.91118288251918
###Markdown
3.2 k-Nearest Neighbors
###Code
for k in [4,8,16,25,30]:
modelKBC = KNeighborsClassifier(n_neighbors=k, weights='distance')
#### Hyper parameter ####
# n_neighbors - Number of neighbors to use by default for kneighbors queries
# weights - weight function used in prediction (‘distance’ : weight points by the inverse of their distance.
#in this case, closer neighbors of a query point will have a greater influence than neighbors which are further away.)
modelKBC.fit(X_train, y_train)
y_pred = modelKBC.predict(X_test)
print('neighbors:',k,'Accuracy Score : '+str(accuracy_score(y_test,y_pred )))
#nfolds=3
#scores=cross_val_score(modelKBC, X_train,y_train, cv=nfolds, scoring="accuracy")
#pd.Series(scores).plot(kind="box", label="Accuracy");
#plt.title('Accuracy_score from '+str(nfolds)+' Folds (Accuracy) for '+str(round(pd.Series(scores).mean(), 2)))
k=25
modelKBC = KNeighborsClassifier(n_neighbors=k, weights='distance')
modelKBC.fit(X_train, y_train)
y_pred = modelKBC.predict(X_test)
print('neighbors:',k,'Accuracy Score : '+str(accuracy_score(y_test,y_pred )))
nfolds=3
scores=cross_val_score(modelKBC, X_train,y_train, cv=nfolds, scoring="accuracy")
pd.Series(scores).plot(kind="box", label="Accuracy");
plt.title('Accuracy_score from '+str(nfolds)+' Folds (Accuracy) for '+str(round(pd.Series(scores).mean(), 2)))
###Output
_____no_output_____
###Markdown
3.3 Linear SVC
###Code
print(str(datetime.datetime.now()),'Started')
modelSVC = svm.LinearSVC(C=0.01)
#### Hyper parameter ####
# C - The strength of the regularization is inversely proportional to C.
modelSVC.fit(X_train, y_train)
print(str(datetime.datetime.now()),'Fit Completed')
nfolds=3
scores=cross_val_score(modelSVC, X_train, y_train, cv=nfolds, scoring="accuracy")
pd.Series(scores).plot(kind="box", label="Accuracy");
plt.title('Accuracy_score from '+str(nfolds)+' Folds (Accuracy) for '+str(round(pd.Series(scores).mean(), 2)))
y_pred = modelSVC.predict(X_test)
print('Accuracy Score : '+str(accuracy_score(y_test,y_pred )*100))
print(str(datetime.datetime.now()),'Completed')
###Output
2020-01-24 10:53:11.357936 Started
2020-01-24 10:53:11.732016 Fit Completed
Accuracy Score : 83.44771901493743
2020-01-24 10:53:12.725544 Completed
###Markdown
4. Formatting the output categories based on the predict_proba
###Code
## Based on the predict_proba result, reorder the values and categories by descending probability.
def name_max_value(DF):
colname='Category_1_Values'
if (DF['Engineering Sciences']==DF[colname]):
return 'Engineering Sciences'
elif (DF['Humanities and Social Sciences']==DF[colname]):
return 'Humanities and Social Sciences'
elif (DF['Natural Sciences']==DF[colname]):
return 'Natural Sciences'
elif (DF['Life Sciences']==DF[colname]):
return 'Life Sciences'
else:
return ''
def name_sec_max_value(DF):
colname='Category_2_Values'
if(DF[colname]==0):
return ''
elif ((DF['Engineering Sciences']==DF[colname]) & (DF['Category_1']!='Engineering Sciences')):
return 'Engineering Sciences'
elif ((DF['Humanities and Social Sciences']==DF[colname]) & (DF['Category_1']!='Humanities and Social Sciences')):
return 'Humanities and Social Sciences'
elif ((DF['Natural Sciences']==DF[colname]) & (DF['Category_1']!='Natural Sciences')):
return 'Natural Sciences'
elif ((DF['Life Sciences']==DF[colname]) & (DF['Category_1']!='Life Sciences')):
return 'Life Sciences'
else:
return ''
def name_3rd_max_value(DF):
colname='Category_3_Values'
if(DF[colname]==0):
return ''
elif ((DF['Engineering Sciences']==DF[colname]) & (DF['Category_2']!='Engineering Sciences')):
return 'Engineering Sciences'
elif ((DF['Humanities and Social Sciences']==DF[colname]) & (DF['Category_2']!='Humanities and Social Sciences')):
return 'Humanities and Social Sciences'
elif ((DF['Natural Sciences']==DF[colname]) & (DF['Category_2']!='Natural Sciences')):
return 'Natural Sciences'
elif ((DF['Life Sciences']==DF[colname]) & (DF['Category_2']!='Life Sciences')):
return 'Life Sciences'
else:
return ''
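# Rank the four class probabilities per document and keep the top-3 category names and scores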
cols=['Engineering Sciences','Humanities and Social Sciences','Natural Sciences','Life Sciences']
PredictedValues=pd.DataFrame(modelKBC.predict_proba(merged_data[Features]), columns=cols)
PredictedValues['Category_1_Values']=PredictedValues[cols].apply(np.max,axis=1)
PredictedValues['Category_2_Values']=PredictedValues[cols].apply(np.sort,axis=1).apply(lambda x:x[2])
PredictedValues['Category_3_Values']=PredictedValues[cols].apply(np.sort,axis=1).apply(lambda x:x[1])
PredictedValues['Category_1']=PredictedValues.apply(name_max_value,axis=1)
PredictedValues['Category_2']=PredictedValues.apply(name_sec_max_value,axis=1)
PredictedValues['Category_3']=PredictedValues.apply(name_3rd_max_value,axis=1)
PredictedValues['Category_12_Variance']=PredictedValues.apply(lambda x :x['Category_1_Values']-x['Category_2_Values'], axis=1)
PredictedValues['Category_23_Variance']=PredictedValues.apply(lambda x :x['Category_2_Values']-x['Category_3_Values'], axis=1)
###Output
_____no_output_____
###Markdown
5.1. Random manual result evaluation
###Code
PredictedValues.head(16694).tail(5)
## regenerating dataset
NewMergedDSAligned=pd.concat([merged_data[merged_data.columns.tolist()[:12]+['DBScanCluster']],PredictedValues[PredictedValues.columns[4:]]], axis=1, ignore_index=False)
#(NewMergedDSAligned.DBScanCluster!=0) &
NewMergedDSAligned['DBScanCluster'][ (NewMergedDSAligned['Category']!=NewMergedDSAligned['Category_1'])].value_counts()
NewMergedDSAligned['Category'][(NewMergedDSAligned.DBScanCluster!=0) & (NewMergedDSAligned['Category']!=NewMergedDSAligned['Category_1'])].value_counts()
cats='Natural Sciences'
lim=200
NewMergedDSAligned[['Translates','Category']+NewMergedDSAligned.columns[13:].tolist()][(NewMergedDSAligned['Category_1']!=cats) & (NewMergedDSAligned['Category']==cats) & (NewMergedDSAligned.DBScanCluster==0) & (NewMergedDSAligned['Category']!=NewMergedDSAligned['Category_1'])].sort_values('Category_1_Values', ascending=False).head(lim).tail(5)
#cats='Humanities and Social Sciences'
NewMergedDSAligned[['Translates','Category_1_Values']][(NewMergedDSAligned['Category_1']!=cats) & (NewMergedDSAligned['Category']==cats) & (NewMergedDSAligned.DBScanCluster==0) & (NewMergedDSAligned['Category']!=NewMergedDSAligned['Category_1'])].sort_values('Category_1_Values', ascending=False).Translates.head(lim).tail(5).tolist()#.tail().
#NewMergedDSAligned.to_csv(Path+'WEPCADBScanFindingsKMeans.csv', index=False)
###Output
_____no_output_____
###Markdown
5.2. Each category TF/IDF-based result evaluation
###Code
#&(NewMergedDSAligned['Category']==cats) &(NewMergedDSAligned['Category_1']==check_cat)
input_data=NewMergedDSAligned[(NewMergedDSAligned['Category']!=NewMergedDSAligned['Category_1']) & (NewMergedDSAligned.DBScanCluster!=0) ]
input_data.loc[:,'CategoryCollc']=input_data[['Category','Category_1','Category_2','Category_3']].apply(lambda x:x[0]+','+x[1]+','+x[2]+','+x[3], axis=1)
#input_data.loc[:,'CategoryCollc']=input_data[['Category','Category_1']].apply(lambda x:x[0]+','+x[1], axis=1)
input_data.loc[:,'CategoryCollc']=input_data['CategoryCollc'].str.strip(",")
varcluster_info.cluster_id=varcluster_info.cluster_id.astype('int32')
varclusterall=varcluster.merge(varcluster_info, how='left',left_on='Cluster', right_on='cluster_id')
varclusterall=varclusterall[varclusterall.RS_Ratio<.98]
def find_category(target_word):
try :
sim_word=list(map(lambda x:x[0] ,modelchecked.wv.most_similar(target_word, topn=5)))
finalcategory=varclusterall[varclusterall.Variable.isin(sim_word)].category.value_counts().sort_values(ascending=False).head(1).index
if(len(finalcategory)>0):
return finalcategory[0]
else:
return np.NaN
except :
return np.NaN
input_data.head()
sizes=len(input_data.CategoryCollc.unique())
#plt.subplots(figsize=(8,150))
j=1
for i,bucket in input_data.groupby(['CategoryCollc']):
print(i.split(',')[0],'-',i.split(',')[1:],': Number of Documents -',len(bucket))
if(len(bucket)>1):
vectorizer = TfidfVectorizer(max_features=20, ngram_range=(1, 1))
review_vectors = vectorizer.fit_transform(bucket["PreProcessedDescription"])
features_df = pd.DataFrame(review_vectors.toarray(), columns = vectorizer.get_feature_names())
varcat=pd.DataFrame(features_df.sum().sort_values(ascending=False)).merge(varclusterall, how='left', left_index=True, right_on='Variable')[['Variable','category']]
varcat.category=varcat[['Variable', 'category']].apply(lambda x: find_category(x.Variable) if(pd.isnull(x['category'])) else x['category'], axis=1)
#print(varcat.category.value_counts())
#print(varcat.apply(lambda x: x.Variable +' - NA' if(pd.isnull(x.category)) else x.Variable +' - '+x.category , axis=1))
print('Rare words',list(varcat[varcat.category!='General'].Variable))
else:
print(bucket.Translates.tolist())
print('----------------------------------------------------------')
#print(features_df.sum().sort_values(ascending=False),'\n')
#vectorizer.get_feature_names()
#plt.subplot(1,sizes,j)
#features_df.sum().sort_values(ascending=False).plot(kind='bar',color='green')
#plt.title(i.split(',')[0]+' -'+','.join(i.split(',')[1:]))
#plt.xticks(rotation=60)
#j=j+1
#plt.tight_layout()
###Output
Engineering Sciences - ['Humanities and Social Sciences'] : Number of Documents - 6
Rare words ['metal', 'landscape', 'design', 'jewish', 'medium', 'research', 'study', 'political', 'conflict', 'architecture', 'building', 'architectural', 'public', 'mass', 'east', 'process', 'history']
----------------------------------------------------------
Engineering Sciences - ['Humanities and Social Sciences', 'Engineering Sciences'] : Number of Documents - 14
Rare words ['building', 'system', 'architecture', 'design', 'architectural', 'new', 'research', 'concept', 'learning', 'urban', 'model', 'development', 'study', 'develop', 'different', 'peer']
----------------------------------------------------------
Engineering Sciences - ['Humanities and Social Sciences', 'Engineering Sciences', 'Life Sciences'] : Number of Documents - 7
Rare words ['method', 'perception', 'privacy', 'research', 'vr', 'ess', 'site', 'user', 'development', 'energy', 'model', 'approach', 'exist', 'location', 'regional', 'develop']
----------------------------------------------------------
Engineering Sciences - ['Humanities and Social Sciences', 'Engineering Sciences', 'Natural Sciences'] : Number of Documents - 7
Rare words ['urban', 'design', 'good', 'newspaper', 'new', 'development', 'research', 'study', 'function', 'method', 'barter', 'practice', 'party', 'historic', 'develop', 'historical', 'national']
----------------------------------------------------------
Engineering Sciences - ['Humanities and Social Sciences', 'Life Sciences'] : Number of Documents - 2
Rare words ['sound', 'shift', 'melatonin', 'night', 'noise', 'light', 'judgement', 'process', 'test', 'suppression', 'effect', 'health', 'low', 'profile', 'high', 'synthesis', 'experiment', 'hourly']
----------------------------------------------------------
Engineering Sciences - ['Humanities and Social Sciences', 'Life Sciences', 'Engineering Sciences'] : Number of Documents - 9
Rare words ['different', 'planning', 'model', 'noise', 'impact', 'system', 'video', 'self', 'pain', 'state', 'brain', 'approach', 'carshare', 'training', 'study', 'control']
----------------------------------------------------------
Engineering Sciences - ['Humanities and Social Sciences', 'Natural Sciences'] : Number of Documents - 3
Rare words ['housing', 'jewish', 'facility', 'naumburg', 'schultze', 'local', 'community', 'architecture', 'building', 'network', 'national', 'art', 'process', 'research', 'architectural', 'berlin', 'new', 'center']
----------------------------------------------------------
Engineering Sciences - ['Humanities and Social Sciences', 'Natural Sciences', 'Engineering Sciences'] : Number of Documents - 6
Rare words ['urban', 'city', 'environmental', 'new', 'research', 'infrastructure', 'form', 'west', 'bengal', 'historical', 'ideal', 'area', 'planning', 'landscape', 'cultural', 'development', 'element']
----------------------------------------------------------
Engineering Sciences - ['Humanities and Social Sciences', 'Natural Sciences', 'Life Sciences'] : Number of Documents - 1
["'Ecosystem services' (ESS) has become a key term of the international, the European and increasingly also the German debates on nature conservation and landscape management. It may be regarded as an indicator of a programmatic reorientation of biodiversity policies in an economic vein. It has hardly been studied hitherto how governing in the policy area 'nature conservation and landscape management' is changing in Germany with the increased use of the term 'ecosystem services'. For instance, does an economisation or neoliberalisation of nature and landscape occur, that is, an expansion of the application of economic and market-based principles, as is often described at the international level? Or do counteracting forces prevail that end up reinforcing the well-established relationship of governmental regulation, civil society involvement and market forces? Or is a specific novel understanding of nature and landscape policies developing in the course of the ESS discourses currently being produced in Germany? - These fundamental questions lie at the heart of the proposed project. The aim is to study ESS discourses in Germany from the perspective of governmentality research. It is to be analysed how nature conservation and landscape management are debated in connection with the economically influenced ESS concept. The focus is on the problematisations and rationalities of governing in the policy area 'nature conservation and landscape management', in particular on the dynamics of changes in these problematisations and rationalities in Germany. Closely related is the question which changes are to be observed in how the objects of these policies (that is, nature, landscape, biological diversity, planning etc.) are constituted as a part of problematisations and whether even entirely new objects arise in the course of the ES discourses. The project is conceived as a discourse analysis, relying on quantitative lexicometric methods as well as on qualitative empirical methods such as document analyses, semi-structured interviews and participant observation. Among others, the initiatives 'Nature capital Germany - TEEB DE' and 'Implementation of Action 5 of the EU Biodiversity Strategy in Germany' (MAES DE) are to be studied in depth. "]
----------------------------------------------------------
Engineering Sciences - ['Life Sciences'] : Number of Documents - 5
Rare words ['cell', 'brain', 'nirs', 'olg', 'tissue', 'lung', 'model', 'shall', 'experiment', 'perfusion', 'crs', 'eit', 'regional', 'signal', 'develop']
----------------------------------------------------------
Engineering Sciences - ['Life Sciences', 'Engineering Sciences'] : Number of Documents - 8
Rare words ['cell', 'blood', 'process', 'system', 'surfactin', 'cementum', 'bladder', 'model', 'culture', 'method', 'development', 'contact', 'study', 'co', 'analysis', 'synaptic', 'neural', 'image']
----------------------------------------------------------
Engineering Sciences - ['Life Sciences', 'Engineering Sciences', 'Humanities and Social Sciences'] : Number of Documents - 4
Rare words ['bci', 'motion', 'system', 'model', 'mutation', 'sickness', 'glucose', 'patient', 'test', 'mutant', 'detect', 'suite', 'insulin', 'diabetes', 'control', 'approach', 'influence', 'study']
----------------------------------------------------------
Engineering Sciences - ['Life Sciences', 'Engineering Sciences', 'Natural Sciences'] : Number of Documents - 14
Rare words ['cell', 'system', 'control', 'production', 'microfluidic', 'method', 'surface', 'process', 'high', 'protein', 'nanoparticle', 'sample', 'detection', 'biomarker', 'develop', 'electrode', 'wood', 'low']
----------------------------------------------------------
Engineering Sciences - ['Life Sciences', 'Humanities and Social Sciences'] : Number of Documents - 2
Rare words ['leg', 'network', 'oscillator', 'neuronal', 'cpg', 'coupling', 'mechanism', 'know', 'signal', 'different', 'understand', 'system', 'neuron', 'type', 'inter', 'insect', 'model', 'stick', 'joint', 'influence']
----------------------------------------------------------
Engineering Sciences - ['Life Sciences', 'Humanities and Social Sciences', 'Engineering Sciences'] : Number of Documents - 5
Rare words ['algorithm', 'water', 'eye', 'speech', 'assessment', 'test', 'impact', 'method', 'processing', 'methodological', 'develop', 'eeg', 'signal', 'use', 'consumption', 'footprint', 'intelligibility', 'manufacturing']
----------------------------------------------------------
Engineering Sciences - ['Life Sciences', 'Humanities and Social Sciences', 'Natural Sciences'] : Number of Documents - 1
["The BULB project aims at supporting the documentation of unwritten languages with the help of automatic speech and language processing, in particular automatic speech recognition (ASR) and machine translation (MT). We will address the documentation of three mostly unwritten African languages of the Bantu family (Basaa, Myene and Embosi). The main steps of the project are:1. To collect the corpora at a reasonable cost, using a three step methodology, following the work of S. Bird and M. Liberman:collecting a large corpus of speech (100 hours) in a community, including elicited material, stories, dialogs and broadcasts;re-speaking. As the sound quality of the recordings will be very spontaneous, with possibly overlapping speech in noisy environments, carefully articulated re-speaking by a reference speaker will give rise to more accurate automatic phonetic transcriptions and to improved material for phonetic/phonological studies.oral translation. Translation is the natural way to document a new language; oral translations will accelerate the documentation process. Our Bantu data will be translated to French, a major language and a second language in the regions of our studied communities.2. The collected oral data (Bantu originals and French translations) contain the necessary information to document the studied languages. ASR is expected to automatically produce accurate transcriptions in source and target languages and MT to provide meaningful alignments between both, to speed up the major tasks of documentation, description and analysis. The major automatic processing steps are:phonetic transcription of the studied languages. This step requires first a set of language-independent phone models which must be tuned to the language under study via unsupervised adaptation techniques;word transcription of the oral French translations. Language and acoustic models need to be adapted to obtain high transcription accuracy;alignments between the phonetic transcriptions (originals, respeaking) of the studied language. Alignments are highly valuable for large scale acoustic-phonetic studies, phonological and prosodic data mining and dialectal variations studies;cross-language alignments that aim at linking phone sequences in the studied language with French words. Such alignments may prove very useful for morphological studies, vocabulary and pronunciation elaboration.The success of the project relies on a strong German-French cooperation between linguists and computer scientists. Cooperations will be fostered and strengthened by a series of courses benefiting the scientific community beyond the present consortium. During these courses, linguists will present to computer scientists the major steps to document an unknown language, and computer scientists will introduce their methods to process a 'new' language thus generating phonetic transcriptions and pseudo-word alignments to be returned to linguists. "]
----------------------------------------------------------
Engineering Sciences - ['Life Sciences', 'Natural Sciences'] : Number of Documents - 5
###Markdown
Visualization
###Code
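# Map the four top-level subject areas used on gepris.dfg.de to integer ids
# (-1 for anything else).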
def CattoID(input_cat):
if(input_cat=='Engineering Sciences'):
return 0
elif(input_cat=='Humanities and Social Sciences'):
return 1
elif(input_cat=='Natural Sciences'):
return 2
elif(input_cat=='Life Sciences'):
return 3
else :
return -1
NewMergedDSAligned2=pd.concat([merged_data,PredictedValues[PredictedValues.columns[4:]]], axis=1, ignore_index=False)
NewMergedDSAligned2.loc[:,'Category_1_ID']=NewMergedDSAligned2.Category_1.apply(CattoID)
NewMergedDSAligned2.loc[:,'Category_2_ID']=NewMergedDSAligned2.Category_2.apply(CattoID)
NewMergedDSAligned2.loc[:,'Category_3_ID']=NewMergedDSAligned2.Category_3.apply(CattoID)
NewMergedDSAligned2=pd.DataFrame(enumerate(NewMergedDSAligned2.SubjectArea.unique()), columns=['Subjectid','SubjectAreaMatching']).merge(NewMergedDSAligned2,left_on='SubjectAreaMatching', right_on='SubjectArea')
cats=['Engineering Sciences','Humanities and Social Sciences', 'Life Sciences','Natural Sciences']
cats_dist=[]
## Finding the overall similarity between categories (projects whose top
## predicted category differs from the original; DBSCAN noise cluster 0 excluded)
for c, w in NewMergedDSAligned2[(NewMergedDSAligned2['Category']!=NewMergedDSAligned2['Category_1']) & (NewMergedDSAligned2['DBScanCluster']!=0)].groupby('Category'):
#print('')
#print(c, len(w))
#other_cat=list(filter(lambda x:x!=c, cats))
cat_dist=[]
for oc in cats:
if oc==c:
oc_sim=0
else:
oc_sum=sum(w[w['Category_1']==oc].Category_1_Values.tolist()+w[w['Category_2']==oc].Category_2_Values.tolist()+w[w['Category_3']==oc].Category_3_Values.tolist())
oc_sim=oc_sum/len(w)
cat_dist.append(oc_sim)
#print(c,':',oc,'-', round(oc_sim,2))
#oc_sum=w[w['Category_1']==oc].Category_1_Values.tolist()+w[w['Category_2']==oc].Category_2_Values.tolist()+w[w['Category_3']==oc].Category_3_Values.tolist()
#oc_sim=sum(oc_sum)/len(oc_sum)
#print(c,':',oc,'-', round(oc_sim,2))
cats_dist.append(np.array(cat_dist))
cats_dist=np.array(cats_dist)
## Making symmetric matrix
sym_dist=np.zeros(cats_dist.shape)
for i in range(cats_dist.shape[0]):
for j in range(cats_dist.shape[0]):
sym_dist[i][j]=cats_dist[i][j]+ cats_dist[j][i]
if(i==j):
sym_dist[i][j]=1
# 1-x: convert similarity to distance
sym_dist=1-pd.DataFrame(sym_dist, columns=cats, index=cats)
## Generating coordinates from distance
#, angle=0.8
#coords = TSNE(n_components=2,perplexity=.1, random_state=12, metric='precomputed').fit_transform(sym_dist)
#coords = TSNE(n_components=2,perplexity=.1, random_state=23, metric='precomputed').fit_transform(sym_dist)
coords = PCA(n_components=2, svd_solver = 'full').fit_transform(sym_dist)
coords=MinMaxScaler([0,1000]).fit_transform(coords)
coords=pd.DataFrame(coords, index=cats).reset_index()
p1=sns.scatterplot(
x=0, y=1,
hue="index",
# palette=sns.color_palette("hls", 4),
data=coords,
# legend="full",
alpha=1,
size = 8,
legend=False
);
for line in range(0,coords.shape[0]):
p1.text(coords[0][line]+0.01, coords[1][line], cats[line], horizontalalignment='left', size='medium', color='black')
sym_dist
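# Give each category a plotting box around its PCA coordinate; the box size
# scales with the number of its projects (value_counts/80), so larger fields
# get more room when the per-category t-SNE layouts are rescaled below.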
newrange=pd.DataFrame(NewMergedDSAligned2.Category.value_counts()/80).reset_index().merge(coords,left_on='index',right_on='index')
newrange.loc[:,'Min_X']=newrange[0]-newrange['Category']
newrange.loc[:,'Max_X']=newrange[0]+newrange['Category']
newrange.loc[:,'Min_Y']=newrange[1]-(newrange['Category']*.60)
newrange.loc[:,'Max_Y']=newrange[1]+(newrange['Category']*.60)
newrange.columns=['Category','size', 0, 1, 'Min_X', 'Max_X', 'Min_Y', 'Max_Y']
newrange
pca.shape
catsperplexity={'Engineering Sciences':5,'Humanities and Social Sciences':5, 'Life Sciences':10,'Natural Sciences':8}
## t-SNE separately for each category
outerclusterfeatures=['Category_1_Values','Category_1_ID','Category_2_ID','Category_2_Values','Category_3_ID','Category_3_Values','Subjectid']
#Doc2VecModelData=pd.concat([pd.DataFrame(model.docvecs.vectors_docs),NewMergedDSAligned2[outerclusterfeatures]], axis=1)
Doc2VecModelData=pd.concat([pd.DataFrame(pca),NewMergedDSAligned2[outerclusterfeatures]], axis=1)
Doc2VecModelData['tsne-2d-one']=0
Doc2VecModelData['tsne-2d-two']=0
for cat in cats:#['Life Sciences']:#
print(str(datetime.datetime.now()),'Started for', cat)
tsne = TSNE(n_components=2, perplexity=catsperplexity[cat], n_iter=300, random_state=0, learning_rate=100)
## The perplexity is related to the number of nearest neighbors that is used in other manifold learning algorithms.
## Larger datasets usually require a larger perplexity. Consider selecting a value between 5 and 50.
tsne_results = tsne.fit_transform(Doc2VecModelData[NewMergedDSAligned2.Category==cat])
Doc2VecModelData.loc[NewMergedDSAligned2.Category==cat,'tsne-2d-one'] = tsne_results[:,0]
Doc2VecModelData.loc[NewMergedDSAligned2.Category==cat,'tsne-2d-two'] = tsne_results[:,1]
print(str(datetime.datetime.now()),'Completed for', cat)
Doc2VecModelData.loc[:,'Category'] = NewMergedDSAligned2.Category
Doc2VecModelData.loc[:,'Category_1'] = NewMergedDSAligned2.Category_1
# Reshaping: rescale each category's t-SNE layout into its bounding box from newrange
for cat in cats:
model_x=MinMaxScaler([newrange[newrange['Category']==cat].Min_X.values[0],newrange[newrange['Category']==cat].Max_X.values[0]])
Doc2VecModelData.loc[Doc2VecModelData['Category']==cat,'tsne-2d-one']=model_x.fit_transform(Doc2VecModelData[Doc2VecModelData['Category']==cat][['tsne-2d-one']])
model_y=MinMaxScaler([newrange[newrange['Category']==cat].Min_Y.values[0],newrange[newrange['Category']==cat].Max_Y.values[0]])
Doc2VecModelData.loc[Doc2VecModelData['Category']==cat,'tsne-2d-two']=model_y.fit_transform(Doc2VecModelData[Doc2VecModelData['Category']==cat][['tsne-2d-two']])
cat='Life Sciences'#'Engineering Sciences'#'Life Sciences'#'Humanities and Social Sciences'#'Life Sciences'#'
plt.figure(figsize=(13,8))
sns.scatterplot(
x="tsne-2d-one", y="tsne-2d-two",
hue="Category_1",
data=Doc2VecModelData[Doc2VecModelData.Category==cat],
legend="full",
# style='Category_1',
alpha=0.8
);
plt.figure(figsize=(13,8))
sns.scatterplot(
x="tsne-2d-one", y="tsne-2d-two",
hue="Category_1",
data=Doc2VecModelData,
legend="full",
style='Category',
alpha=0.8
);
def label_genarator(input):
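    # Compose a share label such as '(Life Sciences 42%, Natural Sciences 31%)'.
    # Rows whose top predicted category matches the original category, or that
    # fall into DBSCAN noise cluster 0, get an empty label.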
if((input.Category==input.Category_1) or (input.DBScanCluster==0)):
return ''#'Category : '+input.Category
else:
if((input.Category_3_Values==0) and (input.Category_2_Values==0)):
return '('+input.Category_1+' '+str(round(input.Category_1_Values*100))+'%'+')'
elif((input.Category_3_Values==0) and (input.Category_2_Values!=0)):
return '('+input.Category_1+' '+str(round(input.Category_1_Values*100))+'%, '+input.Category_2+' '+str(round(input.Category_2_Values*100))+'%)'
else:
return '('+input.Category_1+' '+str(round(input.Category_1_Values*100))+'%, '+input.Category_2+' '+str(round(input.Category_2_Values*100))+'%, '+input.Category_3+' '+str(round(input.Category_3_Values*100))+'%)'
Report_extrat=pd.concat([NewMergedDSAligned2[['Name','Institution','FundingFrom','FundingEnd', 'Category','Category_1_Values','Category_2_Values','Category_3_Values','Category_1','Category_2','Category_3','DBScanCluster']],Doc2VecModelData[['tsne-2d-one', 'tsne-2d-two']]], axis=1)
Report_extrat['ProjectURL']=NewMergedDSAligned2.SubUrl.apply(lambda x:'https://gepris.dfg.de'+x)
Report_extrat['label']=Report_extrat.apply(label_genarator, axis=1)
Report_extrat['interdiscipilinary']=False
Report_extrat.loc[(Report_extrat.label!='') & (NewMergedDSAligned2['DBScanCluster']!=0),'interdiscipilinary']=True
Report_extrat['color']=Report_extrat['Category']
Report_extrat.loc[Report_extrat['interdiscipilinary'],'color']=Report_extrat.loc[Report_extrat['interdiscipilinary'],'Category_1']
Report_extrat.to_csv(Path+'Report_WEPCADBScanFindingsKMeansV2.csv', index=False)
newrange.to_csv(Path+'CATRANGE_WEPCADBScanFindingsKMeansV2.csv', index=False)
###Output
_____no_output_____ |
doc/source/ray-air/examples/rl_online_example.ipynb | ###Markdown
Online reinforcement learning with Ray AIR
In this example, we'll train a reinforcement learning agent using online training. Online training means that the data from the environment is sampled while we are running the algorithm. In contrast, offline training uses data that has been stored on disk before. Let's start by installing our dependencies:
###Code
!pip install -qU "ray[rllib]" gym
###Output
_____no_output_____
###Markdown
Now we can run some imports:
###Code
import argparse
import gym
import os
import numpy as np
import ray
from ray.air import Checkpoint
from ray.air.config import RunConfig
from ray.train.rl.rl_predictor import RLPredictor
from ray.train.rl.rl_trainer import RLTrainer
from ray.air.result import Result
from ray.rllib.agents.marwil import BCTrainer
from ray.tune.tuner import Tuner
###Output
2022-05-19 13:54:16,520 WARNING deprecation.py:47 -- DeprecationWarning: `ray.rllib.execution.buffers` has been deprecated. Use `ray.rllib.utils.replay_buffers` instead. This will raise an error in the future!
2022-05-19 13:54:16,531 WARNING deprecation.py:47 -- DeprecationWarning: `ray.rllib.agents.marwil` has been deprecated. Use `ray.rllib.algorithms.marwil` instead. This will raise an error in the future!
###Markdown
Here we define the training function. It will create an `RLTrainer` using the `PPO` algorithm and kick off training on the `CartPole-v0` environment:
###Code
def train_rl_ppo_online(num_workers: int, use_gpu: bool = False) -> Result:
print("Starting online training")
trainer = RLTrainer(
run_config=RunConfig(stop={"training_iteration": 5}),
scaling_config={
"num_workers": num_workers,
"use_gpu": use_gpu,
},
algorithm="PPO",
config={
"env": "CartPole-v0",
"framework": "tf",
},
)
# Todo (krfricke/xwjiang): Enable checkpoint config in RunConfig
# result = trainer.fit()
tuner = Tuner(
trainer,
_tuner_kwargs={"checkpoint_at_end": True},
)
result = tuner.fit()[0]
return result
###Output
_____no_output_____
###Markdown
Once we have trained our RL policy, we want to evaluate it on a fresh environment. For this, we will also define a utility function:
###Code
def evaluate_using_checkpoint(checkpoint: Checkpoint, num_episodes) -> list:
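    # Roll out `num_episodes` episodes in a fresh CartPole-v0 environment with
    # the restored policy and collect the total reward of each episode.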
predictor = RLPredictor.from_checkpoint(checkpoint)
env = gym.make("CartPole-v0")
rewards = []
for i in range(num_episodes):
obs = env.reset()
reward = 0.0
done = False
while not done:
action = predictor.predict([obs])
obs, r, done, _ = env.step(action[0])
reward += r
rewards.append(reward)
return rewards
###Output
_____no_output_____
###Markdown
Let's put it all together. First, we run training:
###Code
result = train_rl_ppo_online(num_workers=2, use_gpu=False)
###Output
2022-05-19 13:54:16,582 WARNING deprecation.py:47 -- DeprecationWarning: `ray.rllib.agents.dqn.dqn.DEFAULT_CONFIG` has been deprecated. Use `ray.rllib.agents.dqn.dqn.DQNConfig(...)` instead. This will raise an error in the future!
###Markdown
And then, using the obtained checkpoint, we evaluate the policy on a fresh environment:
###Code
num_eval_episodes = 3
rewards = evaluate_using_checkpoint(result.checkpoint, num_episodes=num_eval_episodes)
print(f"Average reward over {num_eval_episodes} episodes: " f"{np.mean(rewards)}")
###Output
2022-05-19 13:54:58,589 INFO trainer.py:1728 -- Your framework setting is 'tf', meaning you are using static-graph mode. Set framework='tf2' to enable eager execution with tf2.x. You may also then want to set eager_tracing=True in order to reach similar execution speed as with static-graph mode.
2022-05-19 13:54:58,590 WARNING deprecation.py:47 -- DeprecationWarning: `simple_optimizer` has been deprecated. This will raise an error in the future!
2022-05-19 13:54:58,591 INFO ppo.py:361 -- In multi-agent mode, policies will be optimized sequentially by the multi-GPU optimizer. Consider setting simple_optimizer=True if this doesn't work for you.
2022-05-19 13:54:58,591 INFO trainer.py:328 -- Current log_level is WARN. For more information, set 'log_level': 'INFO' / 'DEBUG' or use the -v and -vv flags.
[2m[36m(RolloutWorker pid=14191)[0m 2022-05-19 13:55:06,622 WARNING deprecation.py:47 -- DeprecationWarning: `ray.rllib.execution.buffers` has been deprecated. Use `ray.rllib.utils.replay_buffers` instead. This will raise an error in the future!
[2m[36m(RolloutWorker pid=14192)[0m 2022-05-19 13:55:06,622 WARNING deprecation.py:47 -- DeprecationWarning: `ray.rllib.execution.buffers` has been deprecated. Use `ray.rllib.utils.replay_buffers` instead. This will raise an error in the future!
2022-05-19 13:55:07,968 WARNING util.py:65 -- Install gputil for GPU system monitoring.
2022-05-19 13:55:08,021 INFO trainable.py:589 -- Restored on 127.0.0.1 from checkpoint: /Users/kai/ray_results/AIRPPOTrainer_2022-05-19_13-54-16/AIRPPOTrainer_cd8d6_00000_0_2022-05-19_13-54-22/checkpoint_000005/checkpoint-5
2022-05-19 13:55:08,021 INFO trainable.py:597 -- Current state after restoring: {'_iteration': 5, '_timesteps_total': None, '_time_total': 16.702913284301758, '_episodes_total': 354}
|
In-Db2-ML-Experiment-master/In-Db2-ML-Experiment-master/CreditCard-Notebook-Predict.ipynb | ###Markdown
Predicting Credit Card Fraud using Jupyter Notebook
###Code
import ibm_db
import ibm_db_dbi
from time import time
import pandas as pd
from joblib import dump, load
# Connect to Db2
t0=time()
conn_str = "DATABASE=CRCARD;" + \
"HOSTNAME=entb06.canlab.ibm.com;"+ \
"PROTOCOL=TCPIP;" + \
"PORT=50000;" + \
"UID=PERFPOL2;" + \
"PWD=blu4speed;"
ibm_db_conn = ibm_db.connect(conn_str,"","")
conn = ibm_db_dbi.Connection(ibm_db_conn)
print('Connection to Db2 Instance Created!')
## Load testing data from Db2
sql = 'SELECT * FROM CC_PREDICT_SCALED' #CREDIT_CARD_PREDICTION
X_test = pd.read_sql(sql,conn)
print('Successfully pulled test data from Db2!')
# Load model + scaler
saved_model = load('test/saved_model.joblib') #/data2/home/apu/saved_model.joblib
print('Model loaded successfully!')
# saved_scaler = load('test/saved_scaler.joblib') #/data2/home/apu/saved_scaler.joblib
# print('Scaler loaded successfully!')
# # Scale AMOUNT column
# X_test['AMOUNT_SCALED'] = saved_scaler.transform(X_test['AMOUNT_SCALED'].values.reshape(-1,1))
###Output
_____no_output_____
###Markdown
Change the `num_rows` variable to set the prediction batch size
###Code
# Use saved model to make a prediction on the test set
num_rows=100000
y_pred_saved = saved_model.predict(X_test.sample(n=num_rows))
t1 = time()
tot_time = t1-t0
print('It took', round(tot_time, 3),'s to make a prediction on', num_rows,'instances.')
###Output
It took 6.246 s to make a prediction on 100000 instances.
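###Markdown
As an optional last step, the Db2 connection can be released. The cell below is a sketch (it is not part of the timed workflow above) using `ibm_db.close`, which returns True once the handle is released.
###Code
# Sketch: release the Db2 connection once predictions are done.
ibm_db.close(ibm_db_conn)
###Output
_____no_output_____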
|
samples/nucleus/inspect_nucleus_data.ipynb | ###Markdown
Inspect Nucleus Training Data
Inspect and visualize data loading and pre-processing code.
https://www.kaggle.com/c/data-science-bowl-2018
###Code
import os
import sys
import itertools
import math
import logging
import json
import re
import random
import time
import concurrent.futures
import numpy as np
import matplotlib
import matplotlib.pyplot as plt
import matplotlib.patches as patches
import matplotlib.lines as lines
from matplotlib.patches import Polygon
import imgaug
from imgaug import augmenters as iaa
# Root directory of the project
ROOT_DIR = os.getcwd()
if ROOT_DIR.endswith("samples/nucleus"):
# Go up two levels to the repo root
ROOT_DIR = os.path.dirname(os.path.dirname(ROOT_DIR))
# Import Mask RCNN
sys.path.append(ROOT_DIR)
from mrcnn import utils
from mrcnn import visualize
from mrcnn.visualize import display_images
from mrcnn import model as modellib
from mrcnn.model import log
import nucleus
%matplotlib inline
# Comment out to reload imported modules if they change
# %load_ext autoreload
# %autoreload 2
###Output
_____no_output_____
###Markdown
Configurations
###Code
# Dataset directory
DATASET_DIR = os.path.join(ROOT_DIR, "datasets/nucleus")
# Use configuration from nucleus.py, but override
# image resizing so we see the real sizes here
class NoResizeConfig(nucleus.NucleusConfig):
IMAGE_RESIZE_MODE = "none"
config = NoResizeConfig()
###Output
_____no_output_____
###Markdown
Notebook Preferences
###Code
def get_ax(rows=1, cols=1, size=16):
"""Return a Matplotlib Axes array to be used in
all visualizations in the notebook. Provide a
central point to control graph sizes.
Adjust the size attribute to control how big to render images
"""
_, ax = plt.subplots(rows, cols, figsize=(size*cols, size*rows))
return ax
###Output
_____no_output_____
###Markdown
Dataset
Download the dataset from the competition website. Unzip it and save it in `mask_rcnn/datasets/nucleus`. If you prefer a different directory then change the `DATASET_DIR` variable above.
https://www.kaggle.com/c/data-science-bowl-2018/data
###Code
# Load dataset
dataset = nucleus.NucleusDataset()
# The subset is the name of the sub-directory, such as stage1_train,
# stage1_test, ...etc. You can also use these special values:
# train: loads stage1_train but excludes validation images
# val: loads validation images from stage1_train. For a list
# of validation images see nucleus.py
dataset.load_nucleus(DATASET_DIR, subset="train")
# Must call before using the dataset
dataset.prepare()
print("Image Count: {}".format(len(dataset.image_ids)))
print("Class Count: {}".format(dataset.num_classes))
for i, info in enumerate(dataset.class_info):
print("{:3}. {:50}".format(i, info['name']))
###Output
Image Count: 645
Class Count: 2
0. BG
1. nucleus
###Markdown
Display Samples
###Code
# Load and display random samples
image_ids = np.random.choice(dataset.image_ids, 4)
for image_id in image_ids:
image = dataset.load_image(image_id)
mask, class_ids = dataset.load_mask(image_id)
visualize.display_top_masks(image, mask, class_ids, dataset.class_names, limit=1)
# Example of loading a specific image by its source ID
source_id = "ed5be4b63e9506ad64660dd92a098ffcc0325195298c13c815a73773f1efc279"
# Map source ID to Dataset image_id
# Notice the nucleus prefix: it's the name given to the dataset in NucleusDataset
image_id = dataset.image_from_source_map["nucleus.{}".format(source_id)]
# Load and display
image, image_meta, class_ids, bbox, mask = modellib.load_image_gt(
dataset, config, image_id, use_mini_mask=False)
log("molded_image", image)
log("mask", mask)
visualize.display_instances(image, bbox, mask, class_ids, dataset.class_names,
show_bbox=False)
###Output
molded_image shape: (256, 320, 3) min: 28.00000 max: 232.00000 uint8
mask shape: (56, 56, 42) min: 0.00000 max: 1.00000 bool
###Markdown
Dataset Stats
Loop through all images in the dataset and collect aggregate stats.
###Code
def image_stats(image_id):
"""Returns a dict of stats for one image."""
image = dataset.load_image(image_id)
mask, _ = dataset.load_mask(image_id)
bbox = utils.extract_bboxes(mask)
# Sanity check
assert mask.shape[:2] == image.shape[:2]
# Return stats dict
return {
"id": image_id,
"shape": list(image.shape),
"bbox": [[b[2] - b[0], b[3] - b[1]]
for b in bbox
# Uncomment to exclude nuclei with 1 pixel width
# or height (often on edges)
# if b[2] - b[0] > 1 and b[3] - b[1] > 1
],
"color": np.mean(image, axis=(0, 1)),
}
# Loop through the dataset and compute stats over multiple threads
# This might take a few minutes
t_start = time.time()
with concurrent.futures.ThreadPoolExecutor() as e:
stats = list(e.map(image_stats, dataset.image_ids))
t_total = time.time() - t_start
print("Total time: {:.1f} seconds".format(t_total))
###Output
_____no_output_____
###Markdown
Image Size Stats
###Code
# Image stats
image_shape = np.array([s['shape'] for s in stats])
image_color = np.array([s['color'] for s in stats])
print("Image Count: ", image_shape.shape[0])
print("Height mean: {:.2f} median: {:.2f} min: {:.2f} max: {:.2f}".format(
np.mean(image_shape[:, 0]), np.median(image_shape[:, 0]),
np.min(image_shape[:, 0]), np.max(image_shape[:, 0])))
print("Width mean: {:.2f} median: {:.2f} min: {:.2f} max: {:.2f}".format(
np.mean(image_shape[:, 1]), np.median(image_shape[:, 1]),
np.min(image_shape[:, 1]), np.max(image_shape[:, 1])))
print("Color mean (RGB): {:.2f} {:.2f} {:.2f}".format(*np.mean(image_color, axis=0)))
# Histograms
fig, ax = plt.subplots(1, 3, figsize=(16, 4))
ax[0].set_title("Height")
_ = ax[0].hist(image_shape[:, 0], bins=20)
ax[1].set_title("Width")
_ = ax[1].hist(image_shape[:, 1], bins=20)
ax[2].set_title("Height & Width")
_ = ax[2].hist2d(image_shape[:, 1], image_shape[:, 0], bins=10, cmap="Blues")
###Output
_____no_output_____
###Markdown
Nuclei per Image Stats
###Code
# Segment by image area
image_area_bins = [256**2, 600**2, 1300**2]
print("Nuclei/Image")
fig, ax = plt.subplots(1, len(image_area_bins), figsize=(16, 4))
area_threshold = 0
for i, image_area in enumerate(image_area_bins):
nuclei_per_image = np.array([len(s['bbox'])
for s in stats
if area_threshold < (s['shape'][0] * s['shape'][1]) <= image_area])
area_threshold = image_area
if len(nuclei_per_image) == 0:
print("Image area <= {:4}**2: None".format(np.sqrt(image_area)))
continue
print("Image area <= {:4.0f}**2: mean: {:.1f} median: {:.1f} min: {:.1f} max: {:.1f}".format(
np.sqrt(image_area), nuclei_per_image.mean(), np.median(nuclei_per_image),
nuclei_per_image.min(), nuclei_per_image.max()))
ax[i].set_title("Image Area <= {:4}**2".format(np.sqrt(image_area)))
_ = ax[i].hist(nuclei_per_image, bins=10)
###Output
_____no_output_____
###Markdown
Nuclei Size Stats
###Code
# Nuclei size stats
fig, ax = plt.subplots(1, len(image_area_bins), figsize=(16, 4))
area_threshold = 0
for i, image_area in enumerate(image_area_bins):
nucleus_shape = np.array([
b
for s in stats if area_threshold < (s['shape'][0] * s['shape'][1]) <= image_area
for b in s['bbox']])
nucleus_area = nucleus_shape[:, 0] * nucleus_shape[:, 1]
area_threshold = image_area
print("\nImage Area <= {:.0f}**2".format(np.sqrt(image_area)))
print(" Total Nuclei: ", nucleus_shape.shape[0])
print(" Nucleus Height. mean: {:.2f} median: {:.2f} min: {:.2f} max: {:.2f}".format(
np.mean(nucleus_shape[:, 0]), np.median(nucleus_shape[:, 0]),
np.min(nucleus_shape[:, 0]), np.max(nucleus_shape[:, 0])))
print(" Nucleus Width. mean: {:.2f} median: {:.2f} min: {:.2f} max: {:.2f}".format(
np.mean(nucleus_shape[:, 1]), np.median(nucleus_shape[:, 1]),
np.min(nucleus_shape[:, 1]), np.max(nucleus_shape[:, 1])))
print(" Nucleus Area. mean: {:.2f} median: {:.2f} min: {:.2f} max: {:.2f}".format(
np.mean(nucleus_area), np.median(nucleus_area),
np.min(nucleus_area), np.max(nucleus_area)))
# Show 2D histogram
_ = ax[i].hist2d(nucleus_shape[:, 1], nucleus_shape[:, 0], bins=20, cmap="Blues")
# Nuclei height/width ratio
nucleus_aspect_ratio = nucleus_shape[:, 0] / nucleus_shape[:, 1]
print("Nucleus Aspect Ratio. mean: {:.2f} median: {:.2f} min: {:.2f} max: {:.2f}".format(
np.mean(nucleus_aspect_ratio), np.median(nucleus_aspect_ratio),
np.min(nucleus_aspect_ratio), np.max(nucleus_aspect_ratio)))
plt.figure(figsize=(15, 5))
_ = plt.hist(nucleus_aspect_ratio, bins=100, range=[0, 5])
###Output
_____no_output_____
###Markdown
Image Augmentation
Test out different augmentation methods.
###Code
# List of augmentations
# http://imgaug.readthedocs.io/en/latest/source/augmenters.html
augmentation = iaa.Sometimes(0.9, [
iaa.Fliplr(0.5),
iaa.Flipud(0.5),
iaa.Multiply((0.8, 1.2)),
iaa.GaussianBlur(sigma=(0.0, 5.0))
])
# Load the image multiple times to show augmentations
limit = 4
ax = get_ax(rows=2, cols=limit//2)
for i in range(limit):
image, image_meta, class_ids, bbox, mask = modellib.load_image_gt(
dataset, config, image_id, use_mini_mask=False, augment=False, augmentation=augmentation)
visualize.display_instances(image, bbox, mask, class_ids,
dataset.class_names, ax=ax[i//2, i % 2],
show_mask=False, show_bbox=False)
###Output
_____no_output_____
###Markdown
Image Crops
Microscopy images tend to be large, but nuclei are small. So it's more efficient to train on random crops from large images. This is handled by `config.IMAGE_RESIZE_MODE = "crop"`.
###Code
class RandomCropConfig(nucleus.NucleusConfig):
IMAGE_RESIZE_MODE = "crop"
IMAGE_MIN_DIM = 256
IMAGE_MAX_DIM = 256
crop_config = RandomCropConfig()
# Load the image multiple times to show augmentations
limit = 4
image_id = np.random.choice(dataset.image_ids, 1)[0]
ax = get_ax(rows=2, cols=limit//2)
for i in range(limit):
image, image_meta, class_ids, bbox, mask = modellib.load_image_gt(
dataset, crop_config, image_id, use_mini_mask=False)
visualize.display_instances(image, bbox, mask, class_ids,
dataset.class_names, ax=ax[i//2, i % 2],
show_mask=False, show_bbox=False)
###Output
_____no_output_____
###Markdown
Mini Masks
Instance binary masks can get large when training with high resolution images. For example, if training with a 1024x1024 image then the mask of a single instance requires 1MB of memory (Numpy uses bytes for boolean values). If an image has 100 instances then that's 100MB for the masks alone. To improve training speed, we optimize masks:
* We store mask pixels that are inside the object bounding box, rather than a mask of the full image. Most objects are small compared to the image size, so we save space by not storing a lot of zeros around the object.
* We resize the mask to a smaller size (e.g. 56x56). For objects that are larger than the selected size we lose a bit of accuracy. But most object annotations are not very accurate to begin with, so this loss is negligible for most practical purposes. The size of the mini_mask can be set in the config class.
To visualize the effect of mask resizing, and to verify the code correctness, we visualize some examples. A small crop-and-resize sketch opens the next code cell.
###Code
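# Illustrative sketch (an assumption for clarity, not the notebook's own code):
# a mini-mask crops an instance mask to its bounding box and resizes the crop
# to a fixed small shape. Mask R-CNN implements this in utils.minimize_mask and
# utils.expand_mask; the helper below only mirrors the idea with skimage.
import skimage.transform
def to_mini_mask(instance_mask, box, mini_shape=(56, 56)):
    """Crop one boolean mask [H, W] to box (y1, x1, y2, x2) and shrink it."""
    y1, x1, y2, x2 = box
    crop = instance_mask[y1:y2, x1:x2].astype(float)
    small = skimage.transform.resize(crop, mini_shape, order=1)
    return small >= 0.5  # threshold back to a boolean mask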
# Load random image and mask.
image_id = np.random.choice(dataset.image_ids, 1)[0]
image = dataset.load_image(image_id)
mask, class_ids = dataset.load_mask(image_id)
original_shape = image.shape
# Resize
image, window, scale, padding, _ = utils.resize_image(
image,
min_dim=config.IMAGE_MIN_DIM,
max_dim=config.IMAGE_MAX_DIM,
mode=config.IMAGE_RESIZE_MODE)
mask = utils.resize_mask(mask, scale, padding)
# Compute Bounding box
bbox = utils.extract_bboxes(mask)
# Display image and additional stats
print("image_id: ", image_id, dataset.image_reference(image_id))
print("Original shape: ", original_shape)
log("image", image)
log("mask", mask)
log("class_ids", class_ids)
log("bbox", bbox)
# Display image and instances
visualize.display_instances(image, bbox, mask, class_ids, dataset.class_names)
image_id = np.random.choice(dataset.image_ids, 1)[0]
image, image_meta, class_ids, bbox, mask = modellib.load_image_gt(
dataset, config, image_id, use_mini_mask=False)
log("image", image)
log("image_meta", image_meta)
log("class_ids", class_ids)
log("bbox", bbox)
log("mask", mask)
display_images([image]+[mask[:,:,i] for i in range(min(mask.shape[-1], 7))])
visualize.display_instances(image, bbox, mask, class_ids, dataset.class_names)
# Add augmentation and mask resizing.
image, image_meta, class_ids, bbox, mask = modellib.load_image_gt(
dataset, config, image_id, augment=True, use_mini_mask=True)
log("mask", mask)
display_images([image]+[mask[:,:,i] for i in range(min(mask.shape[-1], 7))])
mask = utils.expand_mask(bbox, mask, image.shape)
visualize.display_instances(image, bbox, mask, class_ids, dataset.class_names)
###Output
_____no_output_____
###Markdown
Anchors
For an FPN network, the anchors must be ordered in a way that makes it easy to match anchors to the output of the convolution layers that predict anchor scores and shifts.
* Sort by pyramid level first. All anchors of the first level, then all of the second and so on. This makes it easier to separate anchors by level.
* Within each level, sort anchors by feature map processing sequence. Typically, a convolution layer processes a feature map starting from top-left and moving right row by row.
* For each feature map cell, pick any sorting order for the anchors of different ratios. Here we match the order of ratios passed to the function.
A small indexing sketch opens the next code cell.
###Code
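# Hedged sketch of the ordering described above, assuming RPN_ANCHOR_STRIDE == 1;
# this helper is illustrative and not part of the mrcnn API. Anchors are
# flattened level-major, then row-major over feature-map cells, then by ratio,
# so the absolute index of a single anchor can be recovered like this:
def flat_anchor_index(level, cell_y, cell_x, ratio_idx,
                      anchors_per_level, feature_shapes, anchors_per_cell):
    start = sum(anchors_per_level[:level])             # anchors of earlier levels
    cell = cell_y * feature_shapes[level][1] + cell_x  # row-major cell index
    return start + cell * anchors_per_cell + ratio_idx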
## Visualize anchors of one cell at the center of the feature map
# Load and display random image
image_id = np.random.choice(dataset.image_ids, 1)[0]
image, image_meta, _, _, _ = modellib.load_image_gt(dataset, crop_config, image_id)
# Generate Anchors
backbone_shapes = modellib.compute_backbone_shapes(config, image.shape)
anchors = utils.generate_pyramid_anchors(config.RPN_ANCHOR_SCALES,
config.RPN_ANCHOR_RATIOS,
backbone_shapes,
config.BACKBONE_STRIDES,
config.RPN_ANCHOR_STRIDE)
# Print summary of anchors
num_levels = len(backbone_shapes)
anchors_per_cell = len(config.RPN_ANCHOR_RATIOS)
print("Count: ", anchors.shape[0])
print("Scales: ", config.RPN_ANCHOR_SCALES)
print("ratios: ", config.RPN_ANCHOR_RATIOS)
print("Anchors per Cell: ", anchors_per_cell)
print("Levels: ", num_levels)
anchors_per_level = []
for l in range(num_levels):
num_cells = backbone_shapes[l][0] * backbone_shapes[l][1]
anchors_per_level.append(anchors_per_cell * num_cells // config.RPN_ANCHOR_STRIDE**2)
print("Anchors in Level {}: {}".format(l, anchors_per_level[l]))
# Display
fig, ax = plt.subplots(1, figsize=(10, 10))
ax.imshow(image)
levels = len(backbone_shapes)
for level in range(levels):
colors = visualize.random_colors(levels)
# Compute the index of the anchors at the center of the image
level_start = sum(anchors_per_level[:level]) # sum of anchors of previous levels
level_anchors = anchors[level_start:level_start+anchors_per_level[level]]
print("Level {}. Anchors: {:6} Feature map Shape: {}".format(level, level_anchors.shape[0],
backbone_shapes[level]))
center_cell = backbone_shapes[level] // 2
center_cell_index = (center_cell[0] * backbone_shapes[level][1] + center_cell[1])
level_center = center_cell_index * anchors_per_cell
center_anchor = anchors_per_cell * (
(center_cell[0] * backbone_shapes[level][1] / config.RPN_ANCHOR_STRIDE**2) \
+ center_cell[1] / config.RPN_ANCHOR_STRIDE)
level_center = int(center_anchor)
# Draw anchors. Brightness show the order in the array, dark to bright.
for i, rect in enumerate(level_anchors[level_center:level_center+anchors_per_cell]):
y1, x1, y2, x2 = rect
p = patches.Rectangle((x1, y1), x2-x1, y2-y1, linewidth=2, facecolor='none',
edgecolor=(i+1)*np.array(colors[level]) / anchors_per_cell)
ax.add_patch(p)
###Output
_____no_output_____
###Markdown
Data Generator
###Code
# Create data generator
random_rois = 2000
g = modellib.data_generator(
dataset, crop_config, shuffle=True, random_rois=random_rois,
batch_size=4,
detection_targets=True)
# Uncomment to run the generator through a lot of images
# to catch rare errors
# for i in range(1000):
# print(i)
# _, _ = next(g)
# Get Next Image
if random_rois:
[normalized_images, image_meta, rpn_match, rpn_bbox, gt_class_ids, gt_boxes, gt_masks, rpn_rois, rois], \
[mrcnn_class_ids, mrcnn_bbox, mrcnn_mask] = next(g)
log("rois", rois)
log("mrcnn_class_ids", mrcnn_class_ids)
log("mrcnn_bbox", mrcnn_bbox)
log("mrcnn_mask", mrcnn_mask)
else:
[normalized_images, image_meta, rpn_match, rpn_bbox, gt_boxes, gt_masks], _ = next(g)
log("gt_class_ids", gt_class_ids)
log("gt_boxes", gt_boxes)
log("gt_masks", gt_masks)
log("rpn_match", rpn_match, )
log("rpn_bbox", rpn_bbox)
image_id = modellib.parse_image_meta(image_meta)["image_id"][0]
print("image_id: ", image_id, dataset.image_reference(image_id))
# Remove the last dim in mrcnn_class_ids. It's only added
# to satisfy Keras restriction on target shape.
mrcnn_class_ids = mrcnn_class_ids[:,:,0]
b = 0
# Restore original image (reverse normalization)
sample_image = modellib.unmold_image(normalized_images[b], config)
# Compute anchor shifts.
indices = np.where(rpn_match[b] == 1)[0]
refined_anchors = utils.apply_box_deltas(anchors[indices], rpn_bbox[b, :len(indices)] * config.RPN_BBOX_STD_DEV)
log("anchors", anchors)
log("refined_anchors", refined_anchors)
# Get list of positive anchors
positive_anchor_ids = np.where(rpn_match[b] == 1)[0]
print("Positive anchors: {}".format(len(positive_anchor_ids)))
negative_anchor_ids = np.where(rpn_match[b] == -1)[0]
print("Negative anchors: {}".format(len(negative_anchor_ids)))
neutral_anchor_ids = np.where(rpn_match[b] == 0)[0]
print("Neutral anchors: {}".format(len(neutral_anchor_ids)))
# ROI breakdown by class
for c, n in zip(dataset.class_names, np.bincount(mrcnn_class_ids[b].flatten())):
if n:
print("{:23}: {}".format(c[:20], n))
# Show positive anchors
fig, ax = plt.subplots(1, figsize=(16, 16))
visualize.draw_boxes(sample_image, boxes=anchors[positive_anchor_ids],
refined_boxes=refined_anchors, ax=ax)
# Show negative anchors
visualize.draw_boxes(sample_image, boxes=anchors[negative_anchor_ids])
# Show neutral anchors. They don't contribute to training.
visualize.draw_boxes(sample_image, boxes=anchors[np.random.choice(neutral_anchor_ids, 100)])
###Output
_____no_output_____
###Markdown
ROIs
Typically, the RPN network generates region proposals (a.k.a. Regions of Interest, or ROIs). The data generator has the ability to generate proposals as well for illustration and testing purposes. These are controlled by the `random_rois` parameter.
###Code
if random_rois:
# Class aware bboxes
bbox_specific = mrcnn_bbox[b, np.arange(mrcnn_bbox.shape[1]), mrcnn_class_ids[b], :]
# Refined ROIs
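    # apply_box_deltas uses the standard Faster R-CNN refinement: shift the box
    # center by (dy * h, dx * w) and scale height/width by exp(dh) and exp(dw),
    # where (dy, dx, dh, dw) are the predicted deltas (scaled by BBOX_STD_DEV).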
refined_rois = utils.apply_box_deltas(rois[b].astype(np.float32), bbox_specific[:,:4] * config.BBOX_STD_DEV)
# Class aware masks
mask_specific = mrcnn_mask[b, np.arange(mrcnn_mask.shape[1]), :, :, mrcnn_class_ids[b]]
visualize.draw_rois(sample_image, rois[b], refined_rois, mask_specific, mrcnn_class_ids[b], dataset.class_names)
# Any repeated ROIs?
rows = np.ascontiguousarray(rois[b]).view(np.dtype((np.void, rois.dtype.itemsize * rois.shape[-1])))
_, idx = np.unique(rows, return_index=True)
print("Unique ROIs: {} out of {}".format(len(idx), rois.shape[1]))
if random_rois:
    # Display ROIs and corresponding masks and bounding boxes
ids = random.sample(range(rois.shape[1]), 8)
images = []
titles = []
for i in ids:
image = visualize.draw_box(sample_image.copy(), rois[b,i,:4].astype(np.int32), [255, 0, 0])
image = visualize.draw_box(image, refined_rois[i].astype(np.int64), [0, 255, 0])
images.append(image)
titles.append("ROI {}".format(i))
images.append(mask_specific[i] * 255)
titles.append(dataset.class_names[mrcnn_class_ids[b,i]][:20])
display_images(images, titles, cols=4, cmap="Blues", interpolation="none")
# Check ratio of positive ROIs in a set of images.
if random_rois:
limit = 10
temp_g = modellib.data_generator(
dataset, crop_config, shuffle=True, random_rois=10000,
batch_size=1, detection_targets=True)
total = 0
for i in range(limit):
_, [ids, _, _] = next(temp_g)
positive_rois = np.sum(ids[0] > 0)
total += positive_rois
print("{:5} {:5.2f}".format(positive_rois, positive_rois/ids.shape[1]))
print("Average percent: {:.2f}".format(total/(limit*ids.shape[1])))
###Output
_____no_output_____
nuclei_per_image = np.array([len(s['bbox'])
for s in stats
if area_threshold < (s['shape'][0] * s['shape'][1]) <= image_area])
area_threshold = image_area
if len(nuclei_per_image) == 0:
print("Image area <= {:4}**2: None".format(np.sqrt(image_area)))
continue
print("Image area <= {:4.0f}**2: mean: {:.1f} median: {:.1f} min: {:.1f} max: {:.1f}".format(
np.sqrt(image_area), nuclei_per_image.mean(), np.median(nuclei_per_image),
nuclei_per_image.min(), nuclei_per_image.max()))
ax[i].set_title("Image Area <= {:4}**2".format(np.sqrt(image_area)))
_ = ax[i].hist(nuclei_per_image, bins=10)
###Output
_____no_output_____
###Markdown
Nuclei Size Stats
###Code
# Nuclei size stats
fig, ax = plt.subplots(1, len(image_area_bins), figsize=(16, 4))
area_threshold = 0
for i, image_area in enumerate(image_area_bins):
nucleus_shape = np.array([
b
for s in stats if area_threshold < (s['shape'][0] * s['shape'][1]) <= image_area
for b in s['bbox']])
nucleus_area = nucleus_shape[:, 0] * nucleus_shape[:, 1]
area_threshold = image_area
print("\nImage Area <= {:.0f}**2".format(np.sqrt(image_area)))
print(" Total Nuclei: ", nucleus_shape.shape[0])
print(" Nucleus Height. mean: {:.2f} median: {:.2f} min: {:.2f} max: {:.2f}".format(
np.mean(nucleus_shape[:, 0]), np.median(nucleus_shape[:, 0]),
np.min(nucleus_shape[:, 0]), np.max(nucleus_shape[:, 0])))
print(" Nucleus Width. mean: {:.2f} median: {:.2f} min: {:.2f} max: {:.2f}".format(
np.mean(nucleus_shape[:, 1]), np.median(nucleus_shape[:, 1]),
np.min(nucleus_shape[:, 1]), np.max(nucleus_shape[:, 1])))
print(" Nucleus Area. mean: {:.2f} median: {:.2f} min: {:.2f} max: {:.2f}".format(
np.mean(nucleus_area), np.median(nucleus_area),
np.min(nucleus_area), np.max(nucleus_area)))
# Show 2D histogram
_ = ax[i].hist2d(nucleus_shape[:, 1], nucleus_shape[:, 0], bins=20, cmap="Blues")
# Nuclei height/width ratio
nucleus_aspect_ratio = nucleus_shape[:, 0] / nucleus_shape[:, 1]
print("Nucleus Aspect Ratio. mean: {:.2f} median: {:.2f} min: {:.2f} max: {:.2f}".format(
np.mean(nucleus_aspect_ratio), np.median(nucleus_aspect_ratio),
np.min(nucleus_aspect_ratio), np.max(nucleus_aspect_ratio)))
plt.figure(figsize=(15, 5))
_ = plt.hist(nucleus_aspect_ratio, bins=100, range=[0, 5])
###Output
_____no_output_____
###Markdown
Image AugmentationTest out different augmentation methods
###Code
# List of augmentations
# http://imgaug.readthedocs.io/en/latest/source/augmenters.html
augmentation = iaa.Sometimes(0.9, [
iaa.Fliplr(0.5),
iaa.Flipud(0.5),
iaa.Multiply((0.8, 1.2)),
iaa.GaussianBlur(sigma=(0.0, 5.0))
])
# Load the image multiple times to show augmentations
limit = 4
ax = get_ax(rows=2, cols=limit//2)
for i in range(limit):
image, image_meta, class_ids, bbox, mask = modellib.load_image_gt(
dataset, config, image_id, use_mini_mask=False, augment=False, augmentation=augmentation)
visualize.display_instances(image, bbox, mask, class_ids,
dataset.class_names, ax=ax[i//2, i % 2],
show_mask=False, show_bbox=False)
###Output
_____no_output_____
###Markdown
Image CropsMicroscopy images tend to be large, but nuclei are small. So it's more efficient to train on random crops from large images. This is handled by `config.IMAGE_RESIZE_MODE = "crop"`.
###Code
class RandomCropConfig(nucleus.NucleusConfig):
IMAGE_RESIZE_MODE = "crop"
IMAGE_MIN_DIM = 256
IMAGE_MAX_DIM = 256
crop_config = RandomCropConfig()
# Load the image multiple times to show augmentations
limit = 4
image_id = np.random.choice(dataset.image_ids, 1)[0]
ax = get_ax(rows=2, cols=limit//2)
for i in range(limit):
image, image_meta, class_ids, bbox, mask = modellib.load_image_gt(
dataset, crop_config, image_id, use_mini_mask=False)
visualize.display_instances(image, bbox, mask, class_ids,
dataset.class_names, ax=ax[i//2, i % 2],
show_mask=False, show_bbox=False)
###Output
_____no_output_____
###Markdown
Mini MasksInstance binary masks can get large when training with high-resolution images. For example, training with a 1024x1024 image means the mask of a single instance requires 1MB of memory (NumPy uses one byte per boolean value). If an image has 100 instances, that's 100MB for the masks alone. To improve training speed, we optimize masks:* We store only the mask pixels that are inside the object bounding box, rather than a mask of the full image. Most objects are small compared to the image size, so we save space by not storing a lot of zeros around the object.* We resize the mask to a smaller size (e.g. 56x56). For objects that are larger than the selected size we lose a bit of accuracy, but most object annotations are not very accurate to begin with, so this loss is negligible for most practical purposes. The size of the mini_mask can be set in the config class. To visualize the effect of mask resizing, and to verify the code correctness, we visualize some examples.
###Code
# Load random image and mask.
image_id = np.random.choice(dataset.image_ids, 1)[0]
image = dataset.load_image(image_id)
mask, class_ids = dataset.load_mask(image_id)
original_shape = image.shape
# Resize
image, window, scale, padding, _ = utils.resize_image(
image,
min_dim=config.IMAGE_MIN_DIM,
max_dim=config.IMAGE_MAX_DIM,
mode=config.IMAGE_RESIZE_MODE)
mask = utils.resize_mask(mask, scale, padding)
# Compute Bounding box
bbox = utils.extract_bboxes(mask)
# Display image and additional stats
print("image_id: ", image_id, dataset.image_reference(image_id))
print("Original shape: ", original_shape)
log("image", image)
log("mask", mask)
log("class_ids", class_ids)
log("bbox", bbox)
# Display image and instances
visualize.display_instances(image, bbox, mask, class_ids, dataset.class_names)
image_id = np.random.choice(dataset.image_ids, 1)[0]
image, image_meta, class_ids, bbox, mask = modellib.load_image_gt(
dataset, config, image_id, use_mini_mask=False)
log("image", image)
log("image_meta", image_meta)
log("class_ids", class_ids)
log("bbox", bbox)
log("mask", mask)
display_images([image]+[mask[:,:,i] for i in range(min(mask.shape[-1], 7))])
visualize.display_instances(image, bbox, mask, class_ids, dataset.class_names)
# Add augmentation and mask resizing.
image, image_meta, class_ids, bbox, mask = modellib.load_image_gt(
dataset, config, image_id, augment=True, use_mini_mask=True)
log("mask", mask)
display_images([image]+[mask[:,:,i] for i in range(min(mask.shape[-1], 7))])
mask = utils.expand_mask(bbox, mask, image.shape)
visualize.display_instances(image, bbox, mask, class_ids, dataset.class_names)
###Output
_____no_output_____
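###Markdown
A minimal sketch of the mini-mask idea described above: crop each instance mask to its bounding box, then resize it to a fixed small shape. This is an illustration only, not the library's `utils.minimize_mask`; it assumes `scikit-image` is available and uses nearest-neighbor resizing so the mask stays binary.
###Code
import numpy as np
import skimage.transform

def to_mini_mask(mask, bbox, mini_shape=(56, 56)):
    """Crop a boolean [H, W] mask to bbox (y1, x1, y2, x2) and shrink it."""
    y1, x1, y2, x2 = bbox
    crop = mask[y1:y2, x1:x2].astype(float)
    small = skimage.transform.resize(crop, mini_shape, order=0,
                                     preserve_range=True, anti_aliasing=False)
    return small.astype(bool)

# Tiny self-contained check on a synthetic mask
m = np.zeros((100, 100), dtype=bool)
m[20:60, 30:80] = True
print(to_mini_mask(m, (20, 30, 60, 80)).shape)
###Output
_____no_output_____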
###Markdown
AnchorsFor an FPN network, the anchors must be ordered in a way that makes it easy to match anchors to the output of the convolution layers that predict anchor scores and shifts. * Sort by pyramid level first. All anchors of the first level, then all of the second and so on. This makes it easier to separate anchors by level.* Within each level, sort anchors by feature map processing sequence. Typically, a convolution layer processes a feature map starting from top-left and moving right row by row. * For each feature map cell, pick any sorting order for the anchors of different ratios. Here we match the order of ratios passed to the function.
###Code
## Visualize anchors of one cell at the center of the feature map
# Load and display random image
image_id = np.random.choice(dataset.image_ids, 1)[0]
image, image_meta, _, _, _ = modellib.load_image_gt(dataset, crop_config, image_id)
# Generate Anchors
backbone_shapes = modellib.compute_backbone_shapes(config, image.shape)
anchors = utils.generate_pyramid_anchors(config.RPN_ANCHOR_SCALES,
config.RPN_ANCHOR_RATIOS,
backbone_shapes,
config.BACKBONE_STRIDES,
config.RPN_ANCHOR_STRIDE)
# Print summary of anchors
num_levels = len(backbone_shapes)
anchors_per_cell = len(config.RPN_ANCHOR_RATIOS)
print("Count: ", anchors.shape[0])
print("Scales: ", config.RPN_ANCHOR_SCALES)
print("ratios: ", config.RPN_ANCHOR_RATIOS)
print("Anchors per Cell: ", anchors_per_cell)
print("Levels: ", num_levels)
anchors_per_level = []
for l in range(num_levels):
num_cells = backbone_shapes[l][0] * backbone_shapes[l][1]
anchors_per_level.append(anchors_per_cell * num_cells // config.RPN_ANCHOR_STRIDE**2)
print("Anchors in Level {}: {}".format(l, anchors_per_level[l]))
# Display
fig, ax = plt.subplots(1, figsize=(10, 10))
ax.imshow(image)
levels = len(backbone_shapes)
for level in range(levels):
colors = visualize.random_colors(levels)
# Compute the index of the anchors at the center of the image
level_start = sum(anchors_per_level[:level]) # sum of anchors of previous levels
level_anchors = anchors[level_start:level_start+anchors_per_level[level]]
print("Level {}. Anchors: {:6} Feature map Shape: {}".format(level, level_anchors.shape[0],
backbone_shapes[level]))
center_cell = backbone_shapes[level] // 2
center_cell_index = (center_cell[0] * backbone_shapes[level][1] + center_cell[1])
level_center = center_cell_index * anchors_per_cell
center_anchor = anchors_per_cell * (
(center_cell[0] * backbone_shapes[level][1] / config.RPN_ANCHOR_STRIDE**2) \
+ center_cell[1] / config.RPN_ANCHOR_STRIDE)
level_center = int(center_anchor)
# Draw anchors. Brightness shows the order in the array, dark to bright.
for i, rect in enumerate(level_anchors[level_center:level_center+anchors_per_cell]):
y1, x1, y2, x2 = rect
p = patches.Rectangle((x1, y1), x2-x1, y2-y1, linewidth=2, facecolor='none',
edgecolor=(i+1)*np.array(colors[level]) / anchors_per_cell)
ax.add_patch(p)
###Output
_____no_output_____
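###Markdown
A small sketch of the ordering rule described above: given the level-sorted layout, the flat index of an anchor can be recovered from its pyramid level, feature-map cell, and aspect-ratio index. It reuses `anchors_per_level`, `backbone_shapes`, and `anchors_per_cell` from the previous cell, and assumes `config.RPN_ANCHOR_STRIDE == 1`; it is an illustration, not library code.
###Code
def flat_anchor_index(level, y, x, ratio_idx):
    """Flat index of the anchor at feature-map cell (y, x) of `level`
    with aspect-ratio index `ratio_idx`, assuming anchor stride 1."""
    offset = sum(anchors_per_level[:level])        # all earlier levels come first
    cell = y * backbone_shapes[level][1] + x       # row-major cell order within a level
    return offset + cell * anchors_per_cell + ratio_idx  # ratios are innermost

# Example: first-ratio anchor at the center cell of level 0
cy, cx = backbone_shapes[0] // 2
print(flat_anchor_index(0, cy, cx, 0))
###Output
_____no_output_____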
###Markdown
Data Generator
###Code
# Create data generator
random_rois = 2000
g = modellib.data_generator(
dataset, crop_config, shuffle=True, random_rois=random_rois,
batch_size=4,
detection_targets=True)
# Uncomment to run the generator through a lot of images
# to catch rare errors
# for i in range(1000):
# print(i)
# _, _ = next(g)
# Get Next Image
if random_rois:
[normalized_images, image_meta, rpn_match, rpn_bbox, gt_class_ids, gt_boxes, gt_masks, rpn_rois, rois], \
[mrcnn_class_ids, mrcnn_bbox, mrcnn_mask] = next(g)
log("rois", rois)
log("mrcnn_class_ids", mrcnn_class_ids)
log("mrcnn_bbox", mrcnn_bbox)
log("mrcnn_mask", mrcnn_mask)
else:
[normalized_images, image_meta, rpn_match, rpn_bbox, gt_boxes, gt_masks], _ = next(g)
log("gt_class_ids", gt_class_ids)
log("gt_boxes", gt_boxes)
log("gt_masks", gt_masks)
log("rpn_match", rpn_match, )
log("rpn_bbox", rpn_bbox)
image_id = modellib.parse_image_meta(image_meta)["image_id"][0]
print("image_id: ", image_id, dataset.image_reference(image_id))
# Remove the last dim in mrcnn_class_ids. It's only added
# to satisfy Keras restriction on target shape.
mrcnn_class_ids = mrcnn_class_ids[:,:,0]
b = 0
# Restore original image (reverse normalization)
sample_image = modellib.unmold_image(normalized_images[b], config)
# Compute anchor shifts.
indices = np.where(rpn_match[b] == 1)[0]
refined_anchors = utils.apply_box_deltas(anchors[indices], rpn_bbox[b, :len(indices)] * config.RPN_BBOX_STD_DEV)
log("anchors", anchors)
log("refined_anchors", refined_anchors)
# Get list of positive anchors
positive_anchor_ids = np.where(rpn_match[b] == 1)[0]
print("Positive anchors: {}".format(len(positive_anchor_ids)))
negative_anchor_ids = np.where(rpn_match[b] == -1)[0]
print("Negative anchors: {}".format(len(negative_anchor_ids)))
neutral_anchor_ids = np.where(rpn_match[b] == 0)[0]
print("Neutral anchors: {}".format(len(neutral_anchor_ids)))
# ROI breakdown by class
for c, n in zip(dataset.class_names, np.bincount(mrcnn_class_ids[b].flatten())):
if n:
print("{:23}: {}".format(c[:20], n))
# Show positive anchors
fig, ax = plt.subplots(1, figsize=(16, 16))
visualize.draw_boxes(sample_image, boxes=anchors[positive_anchor_ids],
refined_boxes=refined_anchors, ax=ax)
# Show negative anchors
visualize.draw_boxes(sample_image, boxes=anchors[negative_anchor_ids])
# Show neutral anchors. They don't contribute to training.
visualize.draw_boxes(sample_image, boxes=anchors[np.random.choice(neutral_anchor_ids, 100)])
###Output
_____no_output_____
###Markdown
ROIsTypically, the RPN network generates region proposals (a.k.a. Regions of Interest, or ROIs). The data generator has the ability to generate proposals as well for illustration and testing purposes. These are controlled by the `random_rois` parameter.
###Code
if random_rois:
# Class aware bboxes
bbox_specific = mrcnn_bbox[b, np.arange(mrcnn_bbox.shape[1]), mrcnn_class_ids[b], :]
# Refined ROIs
refined_rois = utils.apply_box_deltas(rois[b].astype(np.float32), bbox_specific[:,:4] * config.BBOX_STD_DEV)
# Class aware masks
mask_specific = mrcnn_mask[b, np.arange(mrcnn_mask.shape[1]), :, :, mrcnn_class_ids[b]]
visualize.draw_rois(sample_image, rois[b], refined_rois, mask_specific, mrcnn_class_ids[b], dataset.class_names)
# Any repeated ROIs?
rows = np.ascontiguousarray(rois[b]).view(np.dtype((np.void, rois.dtype.itemsize * rois.shape[-1])))
_, idx = np.unique(rows, return_index=True)
print("Unique ROIs: {} out of {}".format(len(idx), rois.shape[1]))
if random_rois:
# Display ROIs and corresponding masks and bounding boxes
ids = random.sample(range(rois.shape[1]), 8)
images = []
titles = []
for i in ids:
image = visualize.draw_box(sample_image.copy(), rois[b,i,:4].astype(np.int32), [255, 0, 0])
image = visualize.draw_box(image, refined_rois[i].astype(np.int64), [0, 255, 0])
images.append(image)
titles.append("ROI {}".format(i))
images.append(mask_specific[i] * 255)
titles.append(dataset.class_names[mrcnn_class_ids[b,i]][:20])
display_images(images, titles, cols=4, cmap="Blues", interpolation="none")
# Check ratio of positive ROIs in a set of images.
if random_rois:
limit = 10
temp_g = modellib.data_generator(
dataset, crop_config, shuffle=True, random_rois=10000,
batch_size=1, detection_targets=True)
total = 0
for i in range(limit):
_, [ids, _, _] = next(temp_g)
positive_rois = np.sum(ids[0] > 0)
total += positive_rois
print("{:5} {:5.2f}".format(positive_rois, positive_rois/ids.shape[1]))
print("Average percent: {:.2f}".format(total/(limit*ids.shape[1])))
###Output
_____no_output_____
###Markdown
Inspect Nucleus Training DataInspect and visualize data loading and pre-processing code. https://www.kaggle.com/c/data-science-bowl-2018
###Code
import os
import sys
import itertools
import math
import logging
import json
import re
import random
import time
import concurrent.futures
import numpy as np
import matplotlib
import matplotlib.pyplot as plt
import matplotlib.patches as patches
import matplotlib.lines as lines
from matplotlib.patches import Polygon
import imgaug
from imgaug import augmenters as iaa
# Root directory of the project
ROOT_DIR = os.getcwd()
if ROOT_DIR.endswith("samples/nucleus"):
# Go up two levels to the repo root
ROOT_DIR = os.path.dirname(os.path.dirname(ROOT_DIR))
# Import Mask RCNN
sys.path.append(ROOT_DIR)
from mrcnn import utils
from mrcnn import visualize
from mrcnn.visualize import display_images
from mrcnn import model as modellib
from mrcnn.model import log
import nucleus
%matplotlib inline
# Comment out to reload imported modules if they change
# %load_ext autoreload
# %autoreload 2
###Output
_____no_output_____
###Markdown
Configurations
###Code
# Dataset directory
DATASET_DIR = os.path.join(ROOT_DIR, "datasets/nucleus")
# Use configuration from nucleus.py, but override
# image resizing so we see the real sizes here
class NoResizeConfig(nucleus.NucleusConfig):
IMAGE_RESIZE_MODE = "none"
config = NoResizeConfig()
###Output
_____no_output_____
###Markdown
Notebook Preferences
###Code
def get_ax(rows=1, cols=1, size=16):
"""Return a Matplotlib Axes array to be used in
all visualizations in the notebook. Provide a
central point to control graph sizes.
Adjust the size attribute to control how big to render images
"""
_, ax = plt.subplots(rows, cols, figsize=(size*cols, size*rows))
return ax
###Output
_____no_output_____
###Markdown
DatasetDownload the dataset from the competition Website. Unzip it and save it in `mask_rcnn/datasets/nucleus`. If you prefer a different directory then change the `DATASET_DIR` variable above. https://www.kaggle.com/c/data-science-bowl-2018/data
###Code
# Load dataset
dataset = nucleus.NucleusDataset()
# The subset is the name of the sub-directory, such as stage1_train,
# stage1_test, ...etc. You can also use these special values:
# train: loads stage1_train but excludes validation images
# val: loads validation images from stage1_train. For a list
# of validation images see nucleus.py
dataset.load_nucleus(DATASET_DIR, subset="train")
# Must call before using the dataset
dataset.prepare()
print("Image Count: {}".format(len(dataset.image_ids)))
print("Class Count: {}".format(dataset.num_classes))
for i, info in enumerate(dataset.class_info):
print("{:3}. {:50}".format(i, info['name']))
###Output
/Users/anb32/Mask_RCNN/datasets/nucleus/stage1_train
###Markdown
Display Samples
###Code
# Load and display random samples
image_ids = np.random.choice(dataset.image_ids, 4)
for image_id in image_ids:
image = dataset.load_image(image_id)
mask, class_ids = dataset.load_mask(image_id)
visualize.display_top_masks(image, mask, class_ids, dataset.class_names, limit=1)
# Example of loading a specific image by its source ID
source_id = "ed5be4b63e9506ad64660dd92a098ffcc0325195298c13c815a73773f1efc279"
# Map source ID to Dataset image_id
# Notice the nucleus prefix: it's the name given to the dataset in NucleusDataset
image_id = dataset.image_from_source_map["nucleus.{}".format(source_id)]
# Load and display
image, image_meta, class_ids, bbox, mask = modellib.load_image_gt(
dataset, config, image_id, use_mini_mask=False)
log("molded_image", image)
log("mask", mask)
visualize.display_instances(image, bbox, mask, class_ids, dataset.class_names,
show_bbox=False)
###Output
_____no_output_____
###Markdown
Dataset StatsLoop through all images in the dataset and collect aggregate stats.
###Code
def image_stats(image_id):
"""Returns a dict of stats for one image."""
image = dataset.load_image(image_id)
mask, _ = dataset.load_mask(image_id)
bbox = utils.extract_bboxes(mask)
# Sanity check
assert mask.shape[:2] == image.shape[:2]
# Return stats dict
return {
"id": image_id,
"shape": list(image.shape),
"bbox": [[b[2] - b[0], b[3] - b[1]]
for b in bbox
# Uncomment to exclude nuclei with 1 pixel width
# or height (often on edges)
# if b[2] - b[0] > 1 and b[3] - b[1] > 1
],
"color": np.mean(image, axis=(0, 1)),
}
# Loop through the dataset and compute stats over multiple threads
# This might take a few minutes
t_start = time.time()
with concurrent.futures.ThreadPoolExecutor() as e:
stats = list(e.map(image_stats, dataset.image_ids))
t_total = time.time() - t_start
print("Total time: {:.1f} seconds".format(t_total))
###Output
_____no_output_____
###Markdown
Image Size Stats
###Code
# Image stats
image_shape = np.array([s['shape'] for s in stats])
image_color = np.array([s['color'] for s in stats])
print("Image Count: ", image_shape.shape[0])
print("Height mean: {:.2f} median: {:.2f} min: {:.2f} max: {:.2f}".format(
np.mean(image_shape[:, 0]), np.median(image_shape[:, 0]),
np.min(image_shape[:, 0]), np.max(image_shape[:, 0])))
print("Width mean: {:.2f} median: {:.2f} min: {:.2f} max: {:.2f}".format(
np.mean(image_shape[:, 1]), np.median(image_shape[:, 1]),
np.min(image_shape[:, 1]), np.max(image_shape[:, 1])))
print("Color mean (RGB): {:.2f} {:.2f} {:.2f}".format(*np.mean(image_color, axis=0)))
# Histograms
fig, ax = plt.subplots(1, 3, figsize=(16, 4))
ax[0].set_title("Height")
_ = ax[0].hist(image_shape[:, 0], bins=20)
ax[1].set_title("Width")
_ = ax[1].hist(image_shape[:, 1], bins=20)
ax[2].set_title("Height & Width")
_ = ax[2].hist2d(image_shape[:, 1], image_shape[:, 0], bins=10, cmap="Blues")
###Output
_____no_output_____
###Markdown
Nuclei per Image Stats
###Code
# Segment by image area
image_area_bins = [256**2, 600**2, 1300**2]
print("Nuclei/Image")
fig, ax = plt.subplots(1, len(image_area_bins), figsize=(16, 4))
area_threshold = 0
for i, image_area in enumerate(image_area_bins):
nuclei_per_image = np.array([len(s['bbox'])
for s in stats
if area_threshold < (s['shape'][0] * s['shape'][1]) <= image_area])
area_threshold = image_area
if len(nuclei_per_image) == 0:
print("Image area <= {:4}**2: None".format(np.sqrt(image_area)))
continue
print("Image area <= {:4.0f}**2: mean: {:.1f} median: {:.1f} min: {:.1f} max: {:.1f}".format(
np.sqrt(image_area), nuclei_per_image.mean(), np.median(nuclei_per_image),
nuclei_per_image.min(), nuclei_per_image.max()))
ax[i].set_title("Image Area <= {:4}**2".format(np.sqrt(image_area)))
_ = ax[i].hist(nuclei_per_image, bins=10)
###Output
_____no_output_____
###Markdown
Nuclei Size Stats
###Code
# Nuclei size stats
fig, ax = plt.subplots(1, len(image_area_bins), figsize=(16, 4))
area_threshold = 0
for i, image_area in enumerate(image_area_bins):
nucleus_shape = np.array([
b
for s in stats if area_threshold < (s['shape'][0] * s['shape'][1]) <= image_area
for b in s['bbox']])
nucleus_area = nucleus_shape[:, 0] * nucleus_shape[:, 1]
area_threshold = image_area
print("\nImage Area <= {:.0f}**2".format(np.sqrt(image_area)))
print(" Total Nuclei: ", nucleus_shape.shape[0])
print(" Nucleus Height. mean: {:.2f} median: {:.2f} min: {:.2f} max: {:.2f}".format(
np.mean(nucleus_shape[:, 0]), np.median(nucleus_shape[:, 0]),
np.min(nucleus_shape[:, 0]), np.max(nucleus_shape[:, 0])))
print(" Nucleus Width. mean: {:.2f} median: {:.2f} min: {:.2f} max: {:.2f}".format(
np.mean(nucleus_shape[:, 1]), np.median(nucleus_shape[:, 1]),
np.min(nucleus_shape[:, 1]), np.max(nucleus_shape[:, 1])))
print(" Nucleus Area. mean: {:.2f} median: {:.2f} min: {:.2f} max: {:.2f}".format(
np.mean(nucleus_area), np.median(nucleus_area),
np.min(nucleus_area), np.max(nucleus_area)))
# Show 2D histogram
_ = ax[i].hist2d(nucleus_shape[:, 1], nucleus_shape[:, 0], bins=20, cmap="Blues")
# Nuclei height/width ratio
nucleus_aspect_ratio = nucleus_shape[:, 0] / nucleus_shape[:, 1]
print("Nucleus Aspect Ratio. mean: {:.2f} median: {:.2f} min: {:.2f} max: {:.2f}".format(
np.mean(nucleus_aspect_ratio), np.median(nucleus_aspect_ratio),
np.min(nucleus_aspect_ratio), np.max(nucleus_aspect_ratio)))
plt.figure(figsize=(15, 5))
_ = plt.hist(nucleus_aspect_ratio, bins=100, range=[0, 5])
###Output
_____no_output_____
###Markdown
Image AugmentationTest out different augmentation methods
###Code
# List of augmentations
# http://imgaug.readthedocs.io/en/latest/source/augmenters.html
augmentation = iaa.Sometimes(0.9, [
iaa.Fliplr(0.5),
iaa.Flipud(0.5),
iaa.Multiply((0.8, 1.2)),
iaa.GaussianBlur(sigma=(0.0, 5.0))
])
# Load the image multiple times to show augmentations
limit = 4
ax = get_ax(rows=2, cols=limit//2)
for i in range(limit):
image, image_meta, class_ids, bbox, mask = modellib.load_image_gt(
dataset, config, image_id, use_mini_mask=False, augment=False, augmentation=augmentation)
visualize.display_instances(image, bbox, mask, class_ids,
dataset.class_names, ax=ax[i//2, i % 2],
show_mask=False, show_bbox=False)
###Output
_____no_output_____
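###Markdown
When augmenting images and masks jointly, geometric augmenters must apply the same sampled transform to both. Below is a hedged sketch using imgaug's deterministic mode; `image` and `mask` come from the cell above. Note that pixel-value augmenters such as `Multiply` and `GaussianBlur` would also hit the mask here, so in practice they should be filtered out for masks.
###Code
det = augmentation.to_deterministic()       # freeze the sampled parameters
image_aug = det.augment_image(image)        # augmented image
mask_aug = det.augment_image(mask.astype(np.uint8)) > 0  # same geometry applied to masks
print(image_aug.shape, mask_aug.shape)
###Output
_____no_output_____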
###Markdown
Image CropsMicroscopy images tend to be large, but nuclei are small. So it's more efficient to train on random crops from large images. This is handled by `config.IMAGE_RESIZE_MODE = "crop"`.
###Code
class RandomCropConfig(nucleus.NucleusConfig):
IMAGE_RESIZE_MODE = "crop"
IMAGE_MIN_DIM = 256
IMAGE_MAX_DIM = 256
crop_config = RandomCropConfig()
# Load the image multiple times to show augmentations
limit = 4
image_id = np.random.choice(dataset.image_ids, 1)[0]
ax = get_ax(rows=2, cols=limit//2)
for i in range(limit):
image, image_meta, class_ids, bbox, mask = modellib.load_image_gt(
dataset, crop_config, image_id, use_mini_mask=False)
visualize.display_instances(image, bbox, mask, class_ids,
dataset.class_names, ax=ax[i//2, i % 2],
show_mask=False, show_bbox=False)
###Output
_____no_output_____
###Markdown
Mini MasksInstance binary masks can get large when training with high-resolution images. For example, training with a 1024x1024 image means the mask of a single instance requires 1MB of memory (NumPy uses one byte per boolean value). If an image has 100 instances, that's 100MB for the masks alone. To improve training speed, we optimize masks:* We store only the mask pixels that are inside the object bounding box, rather than a mask of the full image. Most objects are small compared to the image size, so we save space by not storing a lot of zeros around the object.* We resize the mask to a smaller size (e.g. 56x56). For objects that are larger than the selected size we lose a bit of accuracy, but most object annotations are not very accurate to begin with, so this loss is negligible for most practical purposes. The size of the mini_mask can be set in the config class. To visualize the effect of mask resizing, and to verify the code correctness, we visualize some examples.
###Code
# Load random image and mask.
image_id = np.random.choice(dataset.image_ids, 1)[0]
image = dataset.load_image(image_id)
mask, class_ids = dataset.load_mask(image_id)
original_shape = image.shape
# Resize
image, window, scale, padding, _ = utils.resize_image(
image,
min_dim=config.IMAGE_MIN_DIM,
max_dim=config.IMAGE_MAX_DIM,
mode=config.IMAGE_RESIZE_MODE)
mask = utils.resize_mask(mask, scale, padding)
# Compute Bounding box
bbox = utils.extract_bboxes(mask)
# Display image and additional stats
print("image_id: ", image_id, dataset.image_reference(image_id))
print("Original shape: ", original_shape)
log("image", image)
log("mask", mask)
log("class_ids", class_ids)
log("bbox", bbox)
# Display image and instances
visualize.display_instances(image, bbox, mask, class_ids, dataset.class_names)
image_id = np.random.choice(dataset.image_ids, 1)[0]
image, image_meta, class_ids, bbox, mask = modellib.load_image_gt(
dataset, config, image_id, use_mini_mask=False)
log("image", image)
log("image_meta", image_meta)
log("class_ids", class_ids)
log("bbox", bbox)
log("mask", mask)
display_images([image]+[mask[:,:,i] for i in range(min(mask.shape[-1], 7))])
visualize.display_instances(image, bbox, mask, class_ids, dataset.class_names)
# Add augmentation and mask resizing.
image, image_meta, class_ids, bbox, mask = modellib.load_image_gt(
dataset, config, image_id, augment=True, use_mini_mask=True)
log("mask", mask)
display_images([image]+[mask[:,:,i] for i in range(min(mask.shape[-1], 7))])
mask = utils.expand_mask(bbox, mask, image.shape)
visualize.display_instances(image, bbox, mask, class_ids, dataset.class_names)
###Output
_____no_output_____
###Markdown
AnchorsFor an FPN network, the anchors must be ordered in a way that makes it easy to match anchors to the output of the convolution layers that predict anchor scores and shifts. * Sort by pyramid level first. All anchors of the first level, then all of the second and so on. This makes it easier to separate anchors by level.* Within each level, sort anchors by feature map processing sequence. Typically, a convolution layer processes a feature map starting from top-left and moving right row by row. * For each feature map cell, pick any sorting order for the anchors of different ratios. Here we match the order of ratios passed to the function.
###Code
## Visualize anchors of one cell at the center of the feature map
# Load and display random image
image_id = np.random.choice(dataset.image_ids, 1)[0]
image, image_meta, _, _, _ = modellib.load_image_gt(dataset, crop_config, image_id)
# Generate Anchors
backbone_shapes = modellib.compute_backbone_shapes(config, image.shape)
anchors = utils.generate_pyramid_anchors(config.RPN_ANCHOR_SCALES,
config.RPN_ANCHOR_RATIOS,
backbone_shapes,
config.BACKBONE_STRIDES,
config.RPN_ANCHOR_STRIDE)
# Print summary of anchors
num_levels = len(backbone_shapes)
anchors_per_cell = len(config.RPN_ANCHOR_RATIOS)
print("Count: ", anchors.shape[0])
print("Scales: ", config.RPN_ANCHOR_SCALES)
print("ratios: ", config.RPN_ANCHOR_RATIOS)
print("Anchors per Cell: ", anchors_per_cell)
print("Levels: ", num_levels)
anchors_per_level = []
for l in range(num_levels):
num_cells = backbone_shapes[l][0] * backbone_shapes[l][1]
anchors_per_level.append(anchors_per_cell * num_cells // config.RPN_ANCHOR_STRIDE**2)
print("Anchors in Level {}: {}".format(l, anchors_per_level[l]))
# Display
fig, ax = plt.subplots(1, figsize=(10, 10))
ax.imshow(image)
levels = len(backbone_shapes)
for level in range(levels):
colors = visualize.random_colors(levels)
# Compute the index of the anchors at the center of the image
level_start = sum(anchors_per_level[:level]) # sum of anchors of previous levels
level_anchors = anchors[level_start:level_start+anchors_per_level[level]]
print("Level {}. Anchors: {:6} Feature map Shape: {}".format(level, level_anchors.shape[0],
backbone_shapes[level]))
center_cell = backbone_shapes[level] // 2
center_cell_index = (center_cell[0] * backbone_shapes[level][1] + center_cell[1])
level_center = center_cell_index * anchors_per_cell
center_anchor = anchors_per_cell * (
(center_cell[0] * backbone_shapes[level][1] / config.RPN_ANCHOR_STRIDE**2) \
+ center_cell[1] / config.RPN_ANCHOR_STRIDE)
level_center = int(center_anchor)
# Draw anchors. Brightness shows the order in the array, dark to bright.
for i, rect in enumerate(level_anchors[level_center:level_center+anchors_per_cell]):
y1, x1, y2, x2 = rect
p = patches.Rectangle((x1, y1), x2-x1, y2-y1, linewidth=2, facecolor='none',
edgecolor=(i+1)*np.array(colors[level]) / anchors_per_cell)
ax.add_patch(p)
###Output
_____no_output_____
###Markdown
Data Generator
###Code
# Create data generator
random_rois = 2000
g = modellib.data_generator(
dataset, crop_config, shuffle=True, random_rois=random_rois,
batch_size=4,
detection_targets=True)
# Uncomment to run the generator through a lot of images
# to catch rare errors
# for i in range(1000):
# print(i)
# _, _ = next(g)
# Get Next Image
if random_rois:
[normalized_images, image_meta, rpn_match, rpn_bbox, gt_class_ids, gt_boxes, gt_masks, rpn_rois, rois], \
[mrcnn_class_ids, mrcnn_bbox, mrcnn_mask] = next(g)
log("rois", rois)
log("mrcnn_class_ids", mrcnn_class_ids)
log("mrcnn_bbox", mrcnn_bbox)
log("mrcnn_mask", mrcnn_mask)
else:
[normalized_images, image_meta, rpn_match, rpn_bbox, gt_boxes, gt_masks], _ = next(g)
log("gt_class_ids", gt_class_ids)
log("gt_boxes", gt_boxes)
log("gt_masks", gt_masks)
log("rpn_match", rpn_match, )
log("rpn_bbox", rpn_bbox)
image_id = modellib.parse_image_meta(image_meta)["image_id"][0]
print("image_id: ", image_id, dataset.image_reference(image_id))
# Remove the last dim in mrcnn_class_ids. It's only added
# to satisfy Keras restriction on target shape.
mrcnn_class_ids = mrcnn_class_ids[:,:,0]
b = 0
# Restore original image (reverse normalization)
sample_image = modellib.unmold_image(normalized_images[b], config)
# Compute anchor shifts.
indices = np.where(rpn_match[b] == 1)[0]
refined_anchors = utils.apply_box_deltas(anchors[indices], rpn_bbox[b, :len(indices)] * config.RPN_BBOX_STD_DEV)
log("anchors", anchors)
log("refined_anchors", refined_anchors)
# Get list of positive anchors
positive_anchor_ids = np.where(rpn_match[b] == 1)[0]
print("Positive anchors: {}".format(len(positive_anchor_ids)))
negative_anchor_ids = np.where(rpn_match[b] == -1)[0]
print("Negative anchors: {}".format(len(negative_anchor_ids)))
neutral_anchor_ids = np.where(rpn_match[b] == 0)[0]
print("Neutral anchors: {}".format(len(neutral_anchor_ids)))
# ROI breakdown by class
for c, n in zip(dataset.class_names, np.bincount(mrcnn_class_ids[b].flatten())):
if n:
print("{:23}: {}".format(c[:20], n))
# Show positive anchors
fig, ax = plt.subplots(1, figsize=(16, 16))
visualize.draw_boxes(sample_image, boxes=anchors[positive_anchor_ids],
refined_boxes=refined_anchors, ax=ax)
# Show negative anchors
visualize.draw_boxes(sample_image, boxes=anchors[negative_anchor_ids])
# Show neutral anchors. They don't contribute to training.
visualize.draw_boxes(sample_image, boxes=anchors[np.random.choice(neutral_anchor_ids, 100)])
###Output
_____no_output_____
###Markdown
ROIsTypically, the RPN network generates region proposals (a.k.a. Regions of Interest, or ROIs). The data generator has the ability to generate proposals as well for illustration and testing purposes. These are controlled by the `random_rois` parameter.
###Code
if random_rois:
# Class aware bboxes
bbox_specific = mrcnn_bbox[b, np.arange(mrcnn_bbox.shape[1]), mrcnn_class_ids[b], :]
# Refined ROIs
refined_rois = utils.apply_box_deltas(rois[b].astype(np.float32), bbox_specific[:,:4] * config.BBOX_STD_DEV)
# Class aware masks
mask_specific = mrcnn_mask[b, np.arange(mrcnn_mask.shape[1]), :, :, mrcnn_class_ids[b]]
visualize.draw_rois(sample_image, rois[b], refined_rois, mask_specific, mrcnn_class_ids[b], dataset.class_names)
# Any repeated ROIs?
rows = np.ascontiguousarray(rois[b]).view(np.dtype((np.void, rois.dtype.itemsize * rois.shape[-1])))
_, idx = np.unique(rows, return_index=True)
print("Unique ROIs: {} out of {}".format(len(idx), rois.shape[1]))
if random_rois:
# Display ROIs and corresponding masks and bounding boxes
ids = random.sample(range(rois.shape[1]), 8)
images = []
titles = []
for i in ids:
image = visualize.draw_box(sample_image.copy(), rois[b,i,:4].astype(np.int32), [255, 0, 0])
image = visualize.draw_box(image, refined_rois[i].astype(np.int64), [0, 255, 0])
images.append(image)
titles.append("ROI {}".format(i))
images.append(mask_specific[i] * 255)
titles.append(dataset.class_names[mrcnn_class_ids[b,i]][:20])
display_images(images, titles, cols=4, cmap="Blues", interpolation="none")
# Check ratio of positive ROIs in a set of images.
if random_rois:
limit = 10
temp_g = modellib.data_generator(
dataset, crop_config, shuffle=True, random_rois=10000,
batch_size=1, detection_targets=True)
total = 0
for i in range(limit):
_, [ids, _, _] = next(temp_g)
positive_rois = np.sum(ids[0] > 0)
total += positive_rois
print("{:5} {:5.2f}".format(positive_rois, positive_rois/ids.shape[1]))
print("Average percent: {:.2f}".format(total/(limit*ids.shape[1])))
###Output
_____no_output_____
###Markdown
Inspect Nucleus Training DataInspect and visualize data loading and pre-processing code. https://www.kaggle.com/c/data-science-bowl-2018
###Code
import os
import sys
import itertools
import math
import logging
import json
import re
import random
import time
import concurrent.futures
import numpy as np
import matplotlib
import matplotlib.pyplot as plt
import matplotlib.patches as patches
import matplotlib.lines as lines
from matplotlib.patches import Polygon
import imgaug
from imgaug import augmenters as iaa
# Root directory of the project
ROOT_DIR = os.getcwd()
print("ROOT_DIR",ROOT_DIR)
if ROOT_DIR.endswith("nucleus"):
# Go up two levels to the repo root
ROOT_DIR = os.path.dirname(os.path.dirname(ROOT_DIR))
print("ROOT_DIR",ROOT_DIR)
# Import Mask RCNN
sys.path.append(ROOT_DIR)
from mrcnn import utils
from mrcnn import visualize
from mrcnn.visualize import display_images
from mrcnn import model as modellib
from mrcnn.model import log
import nucleus
%matplotlib inline
# Comment out to reload imported modules if they change
# %load_ext autoreload
# %autoreload 2
###Output
_____no_output_____
###Markdown
Configurations
###Code
# Dataset directory
DATASET_DIR = os.path.join(ROOT_DIR, "datasets/nucleus")
# Use configuration from nucleus.py, but override
# image resizing so we see the real sizes here
class NoResizeConfig(nucleus.NucleusConfig):
IMAGE_RESIZE_MODE = "none"
config = NoResizeConfig()
###Output
_____no_output_____
###Markdown
Notebook Preferences
###Code
def get_ax(rows=1, cols=1, size=16):
"""Return a Matplotlib Axes array to be used in
all visualizations in the notebook. Provide a
central point to control graph sizes.
Adjust the size attribute to control how big to render images
"""
_, ax = plt.subplots(rows, cols, figsize=(size*cols, size*rows))
return ax
###Output
_____no_output_____
###Markdown
DatasetDownload the dataset from the competition Website. Unzip it and save it in `mask_rcnn/datasets/nucleus`. If you prefer a different directory then change the `DATASET_DIR` variable above. https://www.kaggle.com/c/data-science-bowl-2018/data
###Code
# Load dataset
dataset = nucleus.NucleusDataset()
# The subset is the name of the sub-directory, such as stage1_train,
# stage1_test, ...etc. You can also use these special values:
# train: loads stage1_train but excludes validation images
# val: loads validation images from stage1_train. For a list
# of validation images see nucleus.py
dataset.load_nucleus(DATASET_DIR, subset="train")
# Must call before using the dataset
dataset.prepare()
print("Image Count: {}".format(len(dataset.image_ids)))
print("Class Count: {}".format(dataset.num_classes))
for i, info in enumerate(dataset.class_info):
print("{:3}. {:50}".format(i, info['name']))
###Output
_____no_output_____
###Markdown
Display Samples
###Code
# Load and display random samples
image_ids = np.random.choice(dataset.image_ids, 4)
for image_id in image_ids:
image = dataset.load_image(image_id)
mask, class_ids = dataset.load_mask(image_id)
visualize.display_top_masks(image, mask, class_ids, dataset.class_names, limit=1)
# Example of loading a specific image by its source ID
source_id = "ed5be4b63e9506ad64660dd92a098ffcc0325195298c13c815a73773f1efc279"
# Map source ID to Dataset image_id
# Notice the nucleus prefix: it's the name given to the dataset in NucleusDataset
image_id = dataset.image_from_source_map["nucleus.{}".format(source_id)]
# Load and display
image, image_meta, class_ids, bbox, mask = modellib.load_image_gt(
dataset, config, image_id, use_mini_mask=False)
log("molded_image", image)
log("mask", mask)
visualize.display_instances(image, bbox, mask, class_ids, dataset.class_names,
show_bbox=False)
###Output
_____no_output_____
###Markdown
Dataset StatsLoop through all images in the dataset and collect aggregate stats.
###Code
def image_stats(image_id):
"""Returns a dict of stats for one image."""
image = dataset.load_image(image_id)
mask, _ = dataset.load_mask(image_id)
bbox = utils.extract_bboxes(mask)
# Sanity check
assert mask.shape[:2] == image.shape[:2]
# Return stats dict
return {
"id": image_id,
"shape": list(image.shape),
"bbox": [[b[2] - b[0], b[3] - b[1]]
for b in bbox
# Uncomment to exclude nuclei with 1 pixel width
# or height (often on edges)
# if b[2] - b[0] > 1 and b[3] - b[1] > 1
],
"color": np.mean(image, axis=(0, 1)),
}
# Loop through the dataset and compute stats over multiple threads
# This might take a few minutes
t_start = time.time()
with concurrent.futures.ThreadPoolExecutor() as e:
stats = list(e.map(image_stats, dataset.image_ids))
t_total = time.time() - t_start
print("Total time: {:.1f} seconds".format(t_total))
###Output
_____no_output_____
###Markdown
Image Size Stats
###Code
# Image stats
image_shape = np.array([s['shape'] for s in stats])
image_color = np.array([s['color'] for s in stats])
print("Image Count: ", image_shape.shape[0])
print("Height mean: {:.2f} median: {:.2f} min: {:.2f} max: {:.2f}".format(
np.mean(image_shape[:, 0]), np.median(image_shape[:, 0]),
np.min(image_shape[:, 0]), np.max(image_shape[:, 0])))
print("Width mean: {:.2f} median: {:.2f} min: {:.2f} max: {:.2f}".format(
np.mean(image_shape[:, 1]), np.median(image_shape[:, 1]),
np.min(image_shape[:, 1]), np.max(image_shape[:, 1])))
print("Color mean (RGB): {:.2f} {:.2f} {:.2f}".format(*np.mean(image_color, axis=0)))
# Histograms
fig, ax = plt.subplots(1, 3, figsize=(16, 4))
ax[0].set_title("Height")
_ = ax[0].hist(image_shape[:, 0], bins=20)
ax[1].set_title("Width")
_ = ax[1].hist(image_shape[:, 1], bins=20)
ax[2].set_title("Height & Width")
_ = ax[2].hist2d(image_shape[:, 1], image_shape[:, 0], bins=10, cmap="Blues")
###Output
_____no_output_____
###Markdown
Nuclei per Image Stats
###Code
# Segment by image area
image_area_bins = [256**2, 600**2, 1300**2]
print("Nuclei/Image")
fig, ax = plt.subplots(1, len(image_area_bins), figsize=(16, 4))
area_threshold = 0
for i, image_area in enumerate(image_area_bins):
nuclei_per_image = np.array([len(s['bbox'])
for s in stats
if area_threshold < (s['shape'][0] * s['shape'][1]) <= image_area])
area_threshold = image_area
if len(nuclei_per_image) == 0:
print("Image area <= {:4}**2: None".format(np.sqrt(image_area)))
continue
print("Image area <= {:4.0f}**2: mean: {:.1f} median: {:.1f} min: {:.1f} max: {:.1f}".format(
np.sqrt(image_area), nuclei_per_image.mean(), np.median(nuclei_per_image),
nuclei_per_image.min(), nuclei_per_image.max()))
ax[i].set_title("Image Area <= {:4}**2".format(np.sqrt(image_area)))
_ = ax[i].hist(nuclei_per_image, bins=10)
###Output
_____no_output_____
###Markdown
Nuclei Size Stats
###Code
# Nuclei size stats
fig, ax = plt.subplots(1, len(image_area_bins), figsize=(16, 4))
area_threshold = 0
for i, image_area in enumerate(image_area_bins):
nucleus_shape = np.array([
b
for s in stats if area_threshold < (s['shape'][0] * s['shape'][1]) <= image_area
for b in s['bbox']])
nucleus_area = nucleus_shape[:, 0] * nucleus_shape[:, 1]
area_threshold = image_area
print("\nImage Area <= {:.0f}**2".format(np.sqrt(image_area)))
print(" Total Nuclei: ", nucleus_shape.shape[0])
print(" Nucleus Height. mean: {:.2f} median: {:.2f} min: {:.2f} max: {:.2f}".format(
np.mean(nucleus_shape[:, 0]), np.median(nucleus_shape[:, 0]),
np.min(nucleus_shape[:, 0]), np.max(nucleus_shape[:, 0])))
print(" Nucleus Width. mean: {:.2f} median: {:.2f} min: {:.2f} max: {:.2f}".format(
np.mean(nucleus_shape[:, 1]), np.median(nucleus_shape[:, 1]),
np.min(nucleus_shape[:, 1]), np.max(nucleus_shape[:, 1])))
print(" Nucleus Area. mean: {:.2f} median: {:.2f} min: {:.2f} max: {:.2f}".format(
np.mean(nucleus_area), np.median(nucleus_area),
np.min(nucleus_area), np.max(nucleus_area)))
# Show 2D histogram
_ = ax[i].hist2d(nucleus_shape[:, 1], nucleus_shape[:, 0], bins=20, cmap="Blues")
# Nuclei height/width ratio
nucleus_aspect_ratio = nucleus_shape[:, 0] / nucleus_shape[:, 1]
print("Nucleus Aspect Ratio. mean: {:.2f} median: {:.2f} min: {:.2f} max: {:.2f}".format(
np.mean(nucleus_aspect_ratio), np.median(nucleus_aspect_ratio),
np.min(nucleus_aspect_ratio), np.max(nucleus_aspect_ratio)))
plt.figure(figsize=(15, 5))
_ = plt.hist(nucleus_aspect_ratio, bins=100, range=[0, 5])
###Output
_____no_output_____
###Markdown
Image AugmentationTest out different augmentation methods
###Code
# List of augmentations
# http://imgaug.readthedocs.io/en/latest/source/augmenters.html
augmentation = iaa.Sometimes(0.9, [
iaa.Fliplr(0.5),
iaa.Flipud(0.5),
iaa.Multiply((0.8, 1.2)),
iaa.GaussianBlur(sigma=(0.0, 5.0))
])
# Load the image multiple times to show augmentations
limit = 4
ax = get_ax(rows=2, cols=limit//2)
for i in range(limit):
image, image_meta, class_ids, bbox, mask = modellib.load_image_gt(
dataset, config, image_id, use_mini_mask=False, augment=False, augmentation=augmentation)
visualize.display_instances(image, bbox, mask, class_ids,
dataset.class_names, ax=ax[i//2, i % 2],
show_mask=False, show_bbox=False)
###Output
_____no_output_____
###Markdown
Image CropsMicroscopy images tend to be large, but nuclei are small. So it's more efficient to train on random crops from large images. This is handled by `config.IMAGE_RESIZE_MODE = "crop"`.
###Code
class RandomCropConfig(nucleus.NucleusConfig):
IMAGE_RESIZE_MODE = "crop"
IMAGE_MIN_DIM = 256
IMAGE_MAX_DIM = 256
crop_config = RandomCropConfig()
# Load the image multiple times to show augmentations
limit = 4
image_id = np.random.choice(dataset.image_ids, 1)[0]
ax = get_ax(rows=2, cols=limit//2)
for i in range(limit):
image, image_meta, class_ids, bbox, mask = modellib.load_image_gt(
dataset, crop_config, image_id, use_mini_mask=False)
visualize.display_instances(image, bbox, mask, class_ids,
dataset.class_names, ax=ax[i//2, i % 2],
show_mask=False, show_bbox=False)
###Output
_____no_output_____
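###Markdown
A minimal sketch of what `IMAGE_RESIZE_MODE = "crop"` does conceptually: pick a random window of the target size and slice both the image and its masks with it. This is an illustration under simplified assumptions (the image is at least as large as the crop), not the library's `resize_image` implementation.
###Code
import numpy as np

def random_crop(image, mask, size=256):
    """Random size x size crop of image [H, W, C] and mask [H, W, N]."""
    h, w = image.shape[:2]
    y = np.random.randint(0, h - size + 1)
    x = np.random.randint(0, w - size + 1)
    return image[y:y+size, x:x+size], mask[y:y+size, x:x+size]

img = np.zeros((512, 640, 3), dtype=np.uint8)
msk = np.zeros((512, 640, 5), dtype=bool)
crop_img, crop_msk = random_crop(img, msk)
print(crop_img.shape, crop_msk.shape)
###Output
_____no_output_____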
###Markdown
Mini MasksInstance binary masks can get large when training with high-resolution images. For example, training with a 1024x1024 image means the mask of a single instance requires 1MB of memory (NumPy uses one byte per boolean value). If an image has 100 instances, that's 100MB for the masks alone. To improve training speed, we optimize masks:* We store only the mask pixels that are inside the object bounding box, rather than a mask of the full image. Most objects are small compared to the image size, so we save space by not storing a lot of zeros around the object.* We resize the mask to a smaller size (e.g. 56x56). For objects that are larger than the selected size we lose a bit of accuracy, but most object annotations are not very accurate to begin with, so this loss is negligible for most practical purposes. The size of the mini_mask can be set in the config class. To visualize the effect of mask resizing, and to verify the code correctness, we visualize some examples.
###Code
# Load random image and mask.
image_id = np.random.choice(dataset.image_ids, 1)[0]
image = dataset.load_image(image_id)
mask, class_ids = dataset.load_mask(image_id)
original_shape = image.shape
# Resize
image, window, scale, padding, _ = utils.resize_image(
image,
min_dim=config.IMAGE_MIN_DIM,
max_dim=config.IMAGE_MAX_DIM,
mode=config.IMAGE_RESIZE_MODE)
mask = utils.resize_mask(mask, scale, padding)
# Compute Bounding box
bbox = utils.extract_bboxes(mask)
# Display image and additional stats
print("image_id: ", image_id, dataset.image_reference(image_id))
print("Original shape: ", original_shape)
log("image", image)
log("mask", mask)
log("class_ids", class_ids)
log("bbox", bbox)
# Display image and instances
visualize.display_instances(image, bbox, mask, class_ids, dataset.class_names)
image_id = np.random.choice(dataset.image_ids, 1)[0]
image, image_meta, class_ids, bbox, mask = modellib.load_image_gt(
dataset, config, image_id, use_mini_mask=False)
log("image", image)
log("image_meta", image_meta)
log("class_ids", class_ids)
log("bbox", bbox)
log("mask", mask)
display_images([image]+[mask[:,:,i] for i in range(min(mask.shape[-1], 7))])
visualize.display_instances(image, bbox, mask, class_ids, dataset.class_names)
# Add augmentation and mask resizing.
image, image_meta, class_ids, bbox, mask = modellib.load_image_gt(
dataset, config, image_id, augment=True, use_mini_mask=True)
log("mask", mask)
display_images([image]+[mask[:,:,i] for i in range(min(mask.shape[-1], 7))])
mask = utils.expand_mask(bbox, mask, image.shape)
visualize.display_instances(image, bbox, mask, class_ids, dataset.class_names)
###Output
_____no_output_____
###Markdown
AnchorsFor an FPN network, the anchors must be ordered in a way that makes it easy to match anchors to the output of the convolution layers that predict anchor scores and shifts. * Sort by pyramid level first. All anchors of the first level, then all of the second and so on. This makes it easier to separate anchors by level.* Within each level, sort anchors by feature map processing sequence. Typically, a convolution layer processes a feature map starting from top-left and moving right row by row. * For each feature map cell, pick any sorting order for the anchors of different ratios. Here we match the order of ratios passed to the function.
###Code
## Visualize anchors of one cell at the center of the feature map
# Load and display random image
image_id = np.random.choice(dataset.image_ids, 1)[0]
image, image_meta, _, _, _ = modellib.load_image_gt(dataset, crop_config, image_id)
# Generate Anchors
backbone_shapes = modellib.compute_backbone_shapes(config, image.shape)
anchors = utils.generate_pyramid_anchors(config.RPN_ANCHOR_SCALES,
config.RPN_ANCHOR_RATIOS,
backbone_shapes,
config.BACKBONE_STRIDES,
config.RPN_ANCHOR_STRIDE)
# Print summary of anchors
num_levels = len(backbone_shapes)
anchors_per_cell = len(config.RPN_ANCHOR_RATIOS)
print("Count: ", anchors.shape[0])
print("Scales: ", config.RPN_ANCHOR_SCALES)
print("ratios: ", config.RPN_ANCHOR_RATIOS)
print("Anchors per Cell: ", anchors_per_cell)
print("Levels: ", num_levels)
anchors_per_level = []
for l in range(num_levels):
num_cells = backbone_shapes[l][0] * backbone_shapes[l][1]
anchors_per_level.append(anchors_per_cell * num_cells // config.RPN_ANCHOR_STRIDE**2)
print("Anchors in Level {}: {}".format(l, anchors_per_level[l]))
# Display
fig, ax = plt.subplots(1, figsize=(10, 10))
ax.imshow(image)
levels = len(backbone_shapes)
for level in range(levels):
colors = visualize.random_colors(levels)
# Compute the index of the anchors at the center of the image
level_start = sum(anchors_per_level[:level]) # sum of anchors of previous levels
level_anchors = anchors[level_start:level_start+anchors_per_level[level]]
print("Level {}. Anchors: {:6} Feature map Shape: {}".format(level, level_anchors.shape[0],
backbone_shapes[level]))
center_cell = backbone_shapes[level] // 2
center_cell_index = (center_cell[0] * backbone_shapes[level][1] + center_cell[1])
level_center = center_cell_index * anchors_per_cell
center_anchor = anchors_per_cell * (
(center_cell[0] * backbone_shapes[level][1] / config.RPN_ANCHOR_STRIDE**2) \
+ center_cell[1] / config.RPN_ANCHOR_STRIDE)
level_center = int(center_anchor)
# Draw anchors. Brightness shows the order in the array, dark to bright.
for i, rect in enumerate(level_anchors[level_center:level_center+anchors_per_cell]):
y1, x1, y2, x2 = rect
p = patches.Rectangle((x1, y1), x2-x1, y2-y1, linewidth=2, facecolor='none',
edgecolor=(i+1)*np.array(colors[level]) / anchors_per_cell)
ax.add_patch(p)
###Output
_____no_output_____
###Markdown
Data Generator
###Code
# Create data generator
random_rois = 2000
g = modellib.data_generator(
dataset, crop_config, shuffle=True, random_rois=random_rois,
batch_size=4,
detection_targets=True)
# Uncomment to run the generator through a lot of images
# to catch rare errors
# for i in range(1000):
# print(i)
# _, _ = next(g)
# Get Next Image
if random_rois:
[normalized_images, image_meta, rpn_match, rpn_bbox, gt_class_ids, gt_boxes, gt_masks, rpn_rois, rois], \
[mrcnn_class_ids, mrcnn_bbox, mrcnn_mask] = next(g)
log("rois", rois)
log("mrcnn_class_ids", mrcnn_class_ids)
log("mrcnn_bbox", mrcnn_bbox)
log("mrcnn_mask", mrcnn_mask)
else:
[normalized_images, image_meta, rpn_match, rpn_bbox, gt_boxes, gt_masks], _ = next(g)
log("gt_class_ids", gt_class_ids)
log("gt_boxes", gt_boxes)
log("gt_masks", gt_masks)
log("rpn_match", rpn_match, )
log("rpn_bbox", rpn_bbox)
image_id = modellib.parse_image_meta(image_meta)["image_id"][0]
print("image_id: ", image_id, dataset.image_reference(image_id))
# Remove the last dim in mrcnn_class_ids. It's only added
# to satisfy Keras restriction on target shape.
mrcnn_class_ids = mrcnn_class_ids[:,:,0]
b = 0
# Restore original image (reverse normalization)
sample_image = modellib.unmold_image(normalized_images[b], config)
# Compute anchor shifts.
indices = np.where(rpn_match[b] == 1)[0]
refined_anchors = utils.apply_box_deltas(anchors[indices], rpn_bbox[b, :len(indices)] * config.RPN_BBOX_STD_DEV)
log("anchors", anchors)
log("refined_anchors", refined_anchors)
# Get list of positive anchors
positive_anchor_ids = np.where(rpn_match[b] == 1)[0]
print("Positive anchors: {}".format(len(positive_anchor_ids)))
negative_anchor_ids = np.where(rpn_match[b] == -1)[0]
print("Negative anchors: {}".format(len(negative_anchor_ids)))
neutral_anchor_ids = np.where(rpn_match[b] == 0)[0]
print("Neutral anchors: {}".format(len(neutral_anchor_ids)))
# ROI breakdown by class
for c, n in zip(dataset.class_names, np.bincount(mrcnn_class_ids[b].flatten())):
if n:
print("{:23}: {}".format(c[:20], n))
# Show positive anchors
fig, ax = plt.subplots(1, figsize=(16, 16))
visualize.draw_boxes(sample_image, boxes=anchors[positive_anchor_ids],
refined_boxes=refined_anchors, ax=ax)
# Show negative anchors
visualize.draw_boxes(sample_image, boxes=anchors[negative_anchor_ids])
# Show neutral anchors. They don't contribute to training.
visualize.draw_boxes(sample_image, boxes=anchors[np.random.choice(neutral_anchor_ids, 100)])
###Output
_____no_output_____
###Markdown
ROIsTypically, the RPN network generates region proposals (a.k.a. Regions of Interest, or ROIs). The data generator has the ability to generate proposals as well for illustration and testing purposes. These are controlled by the `random_rois` parameter.
###Code
if random_rois:
# Class aware bboxes
bbox_specific = mrcnn_bbox[b, np.arange(mrcnn_bbox.shape[1]), mrcnn_class_ids[b], :]
# Refined ROIs
refined_rois = utils.apply_box_deltas(rois[b].astype(np.float32), bbox_specific[:,:4] * config.BBOX_STD_DEV)
# Class aware masks
mask_specific = mrcnn_mask[b, np.arange(mrcnn_mask.shape[1]), :, :, mrcnn_class_ids[b]]
visualize.draw_rois(sample_image, rois[b], refined_rois, mask_specific, mrcnn_class_ids[b], dataset.class_names)
# Any repeated ROIs?
rows = np.ascontiguousarray(rois[b]).view(np.dtype((np.void, rois.dtype.itemsize * rois.shape[-1])))
_, idx = np.unique(rows, return_index=True)
print("Unique ROIs: {} out of {}".format(len(idx), rois.shape[1]))
if random_rois:
# Display ROIs and corresponding masks and bounding boxes
ids = random.sample(range(rois.shape[1]), 8)
images = []
titles = []
for i in ids:
image = visualize.draw_box(sample_image.copy(), rois[b,i,:4].astype(np.int32), [255, 0, 0])
image = visualize.draw_box(image, refined_rois[i].astype(np.int64), [0, 255, 0])
images.append(image)
titles.append("ROI {}".format(i))
images.append(mask_specific[i] * 255)
titles.append(dataset.class_names[mrcnn_class_ids[b,i]][:20])
display_images(images, titles, cols=4, cmap="Blues", interpolation="none")
# Check ratio of positive ROIs in a set of images.
if random_rois:
limit = 10
temp_g = modellib.data_generator(
dataset, crop_config, shuffle=True, random_rois=10000,
batch_size=1, detection_targets=True)
total = 0
for i in range(limit):
_, [ids, _, _] = next(temp_g)
positive_rois = np.sum(ids[0] > 0)
total += positive_rois
print("{:5} {:5.2f}".format(positive_rois, positive_rois/ids.shape[1]))
print("Average percent: {:.2f}".format(total/(limit*ids.shape[1])))
###Output
_____no_output_____ |
PMFG_diagnostics.ipynb | ###Markdown
PMFG -- testing runtime and convergence
###Code
import numpy as np
import pandas as pd
import networkx # as nx
from time import time
import timeit
#%matplotlib inline
import matplotlib.pyplot as plt
raw_asset_prices_df = pd.read_csv("IVV_historical.csv", index_col='Date')
log_returns_df = np.log(raw_asset_prices_df).diff().dropna()
# drop first row of raw prices so it has the same dimensions as the log-returns DF
raw_asset_prices_df = raw_asset_prices_df.iloc[1:]
stock_names = log_returns_df.columns
df_shape = (raw_asset_prices_df.shape)
print(f"There are {df_shape[0]} rows and {df_shape[1]} columns in the dataset.")
print(f"Data timeperiod covers: {raw_asset_prices_df.index[0]} to {raw_asset_prices_df.index[-1]}")
raw_corr = log_returns_df.corr()
shr_coef = 1e-4
#shr_target=np.ones((df_shape[1], df_shape[1]))
shr_target=np.eye(df_shape[1])
correlation_matrix = raw_corr*(1-shr_coef) + shr_target*shr_coef
print('Condition number of sample correlation matrix: %.2e' %np.linalg.cond(raw_corr))
print('Condition number of shrunk correlation matrix: %.2e' %np.linalg.cond(correlation_matrix))
G0 = networkx.from_pandas_adjacency(correlation_matrix-np.diag(np.diag(correlation_matrix)))
print(networkx.info(G0))
###Output
Condition number of sample correlation matrix: 1.13e+19
Condition number of shrunk correlation matrix: 1.49e+06
Name:
Type: Graph
Number of nodes: 504
Number of edges: 126756
Average degree: 503.0000
###Markdown
Diagnostic version of PMFG.pyTemporary version of PMFG algorithm for debugging, as well as inspecting the convergence process.
###Code
from typing import List
import planarity
class edge():
"""
Create an edge from `src` to `dst` with weight `wt`
@params
src: source node
dst: destination node
wt: weight
"""
def __init__(self, src, dst, wt):
self.src = src
self.dst = dst
self.wt = wt
class PMFG():
def __init__(self, graph: networkx.Graph, planarity_check_lib: str="default", verbose: int=0, tol_ratio: float=0.):
self.origin_graph = graph
self.sort_edges = None
self.pmfg_graph = None
self.planarity_check_lib = planarity_check_lib
self.verbose = verbose
self.tol_ratio = tol_ratio
def sort_edge(self) -> List[edge]:
sort_edges = []
for src, dst, data in sorted(self.origin_graph.edges(data=True), key=lambda x: x[2]["weight"], reverse=True):
sort_edges.append(edge(src, dst, data["weight"]))
self.sort_edges = sort_edges
return sort_edges
def compute(self) -> networkx.Graph:
if self.sort_edges is None:
self.sort_edge()
number_of_nodes = self.origin_graph.number_of_nodes()
pmfg_graph = networkx.Graph()
loop_counter = 0
cum_pct = []
timestamp = time()
for edge in self.sort_edges:
loop_counter += 1
# Adding edge and check the planarity
pmfg_graph.add_edge(edge.src, edge.dst, weight=edge.wt)
# If the graph is not planar, then remove the edge
if not self.is_planar(pmfg_graph, self.planarity_check_lib):
pmfg_graph.remove_edge(edge.src, edge.dst)
cum_pct.append(pmfg_graph.number_of_edges()/(3 * (number_of_nodes - 2)))
if self.verbose == 1:
print(f"Number of edges added = {pmfg_graph.number_of_edges()}, Number of edges to be added = {3 * (number_of_nodes - 2) - pmfg_graph.number_of_edges()}")
if self.verbose == 2 and (loop_counter%1000 == 0):
print(f"Number of edges to be added = {3 * (number_of_nodes - 2) - pmfg_graph.number_of_edges()}, time taken = {time() - timestamp}")
timestamp = time()
if pmfg_graph.number_of_edges() >= 3 * (number_of_nodes - 2) * (1-self.tol_ratio):
break
self.pmfg_graph = pmfg_graph
return pmfg_graph, np.array(cum_pct)
@staticmethod
def is_planar(graph: networkx.Graph, planarity_check_lib: str="default") -> bool:
if planarity_check_lib == "networkx":
return networkx.algorithms.planarity.check_planarity(graph)[0]
return planarity.is_planar(graph)
#timestamp = time()
G0_filtered, cum_pct = PMFG(G0, verbose=2).compute()
#print('Time taken to construct PMFG graph: %.2f s\n' %(time()-timestamp))
print(networkx.info(G0_filtered))
for i in range(95, 100):
print(i,'pct: argmin' , np.min(np.where(cum_pct>=i*.01)))
plt.figure(figsize=(12,6))
plt.axhline(.95, linestyle='--', c='r')
plt.plot(cum_pct);
tstamp=time()
G0_filtered, _ = PMFG(G0, verbose=0, tol_ratio=.03).compute()
print('Time taken: %.2f s' %(time()-tstamp))
print(networkx.info(G0_filtered))
###Output
Name:
Type: Graph
Number of nodes: 503
Number of edges: 1461
Average degree: 5.8091
###Markdown
FindingsIt takes progressively longer to find each successive edge to be added to the PMFG. The last few edges take more than twice as long to find as all the rest (total ~120s; $97\%$ of the edges are found by ~40s and $40\%$ of the iterations). We are thinking in terms of running this algorithm on years' worth of daily data. If we take the cutoff at 97% (so a planar-97%-filtered-graph instead of p-M-f-g), we cut runtime down to one-third.Still, 40s ($=\epsilon$ for the other graph operations that we have to perform) per run means a year's worth of business data would take ~3 hours to process. Relatively better and more manageable, but we can do better.Proposal: use a graph sampling method to compute centrality features of a random subgraph of the correlation network; do this multiple times to get some kind of sampled/ensembled centrality feature that hopefully captures most of the underlying structure of the correlation network, but runs much faster. (TBD) Centrality Feature AnalysisRunning the same plots to check that the distribution of centrality features of our approx. PMFG is in line with the actual PMFG.
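Below is a minimal sketch of the sampling idea (hypothetical, not yet part of the pipeline): repeatedly take a random node-induced subgraph of the correlation network, compute a centrality measure on it, and average the per-node results across samples. The helper name `sampled_centrality` and the parameters `n_samples`/`frac` are illustrative assumptions; degree centrality stands in for whichever measure we end up ensembling.
###Code
import numpy as np
import networkx

def sampled_centrality(G, n_samples=20, frac=0.3, seed=0):
    rng = np.random.default_rng(seed)
    nodes = list(G.nodes())
    k = max(2, int(frac * len(nodes)))
    acc = {v: [] for v in nodes}
    for _ in range(n_samples):
        idx = rng.choice(len(nodes), size=k, replace=False)
        sub = G.subgraph(nodes[i] for i in idx)
        for v, c in networkx.degree_centrality(sub).items():
            acc[v].append(c)
    # average over the samples in which each node appeared
    return {v: float(np.mean(cs)) if cs else np.nan for v, cs in acc.items()}
###Output
_____no_output_____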
###Code
import networkx as nx
G1 = nx.Graph()
weight_map = lambda w: 1+w
for u,v,d in G0_filtered.edges(data=True):
G1.add_edge(u,v,weight=weight_map(d['weight']))
print(nx.info(G1))
deg= pd.DataFrame.from_dict(dict(G1.degree(weight='weight')), orient='index', columns = ['D'])
EC = pd.DataFrame.from_dict(nx.eigenvector_centrality(G1), orient='index', columns = ['EC'])
PG = pd.DataFrame.from_dict(nx.pagerank(G1), orient='index', columns = ['PG'])
G1 = nx.Graph()
weight_map = lambda w: np.sqrt(2*(1-w))
for u,v,d in G0_filtered.edges(data=True):
G1.add_edge(u,v,weight=weight_map(d['weight']))
print(nx.info(G1))
ecc= pd.DataFrame.from_dict(nx.eccentricity(G1), orient='index', columns = ['E'])
clo= pd.DataFrame.from_dict(nx.closeness_centrality(G1), orient='index', columns = ['C'])
BC = pd.DataFrame.from_dict(nx.betweenness_centrality(G1), orient='index', columns = ['BC'])
#centralities_names = ['BC', 'C', 'D', 'E', 'EC']
#centralities_names = ['D', 'BC', 'E', 'C', 'EC']
centralities_names = ['D', 'BC', 'nE', 'C', 'EC', 'PG']
centralities = deg.copy()
centralities['BC'] = BC
centralities['nE'] = -ecc
centralities['C'] = clo
centralities['EC'] = EC
centralities['PG'] = PG
print(centralities.head())
centralities.corr()
import seaborn as sns
sns.clustermap(centralities.corr(), cmap="RdYlGn", center=0.)
plt.show()
corr_plot = sns.pairplot(data=centralities);
#corr_plot.map_lower(sns.kdeplot, levels=4, color=".2");
print(BC.idxmax())
BC.hist();
###Output
BC EMR
dtype: object
|
9-pyecharts_tutorial.ipynb | ###Markdown
IntroductionIn this tutorial we will introduce the pyecharts package. Echarts is a JavaScript visualization library developed by Baidu; its distinguishing feature is interactivity (hover the mouse over a chart and the graphic responds). The author has published several charts in the echarts gallery, with a cumulative 120,000 views and 300+ likes; you can browse them via [this link](https://gallery.echartsjs.com/explore.html?u=bd-167860219&type=worksort=rank~timeframe=all~author=all) Echarts is a visualization method with a fairly high barrier to entry: you have to write JavaScript, JavaScript lives inside HTML pages, and the data processing still happens in Python, so fluent use means handling JS, HTML and Python at the same time. With pyecharts you can now generate echarts figures directly from Python; most charts can be produced with Python alone, which is very convenient! Compared with the plotting methods we introduced earlier, each has its own strengths. A summary: >matplotlib: pure-Python plotting, can produce figures in batches; the drawback is that the output is a static, non-interactive image folium: mainly draws maps; the JavaScript output is interactive; the coordinate system is WGS84, so the data need no coordinate conversion echarts: can draw all kinds of charts as well as maps; the JavaScript output is interactive; but for maps the basemap is usually Baidu Maps, so coordinates must be converted So let's get started with Python's pyecharts package! The recent COVID-19 outbreak is looking quite grim. The author noticed that the epidemic dashboards of several large websites are based on echarts, so let's implement a data-map visualization with pyecharts The basic data provided: Data: none is supplied; we scrape it from the web, conjuring it out of thin air Data acquisition Many websites publish the epidemic situation and the data are public, so we can scrape them directly. Here we take the Tencent News epidemic feed as an example; by inspecting the network requests we can find the URL that serves the data OK, let's grab it in the simplest possible way and visualize the data by province
###Code
import urllib
import json
url = 'https://view.inews.qq.com/g2/getOnsInfo?name=disease_h5'
request = urllib.request.Request(url)
response = urllib.request.urlopen(request)
datajson=json.loads(response.read().decode('utf8'))
datajson=json.loads(datajson['data'])
#the data is stored in this variable
datajson
#extract the data for each province
import pandas as pd
provincedata = pd.DataFrame(datajson['areaTree'][0]['children'])
provincedata.head(5)
#tidy the data: expand the fields nested inside 'total'
data1 = pd.DataFrame(list(provincedata['total']))
data1['name'] = provincedata['name']
data1.head(5)
###Output
_____no_output_____
###Markdown
Good: at this point we have obtained the data Visualization Nationwide data visualization Official configuration docs: [pyecharts geographic charts tutorial](https://pyecharts.org//zh-cn/geography_charts) First we need to arrange the data into a format that echarts understands, namely:
###Code
data1[['name','confirm']].values
from pyecharts import options as opts
from pyecharts.charts import Map
#create the echarts object c
c = (
Map()#tell echarts this is a Map-type chart
.add("确诊", data1[['name','confirm']].values, "china")#add a series named "确诊" (confirmed); the map is echarts' built-in "china" map
.set_global_opts(#apply global settings to the chart
title_opts=opts.TitleOpts(title='疫情地图')#set the chart title
)
)
#export to an html file
c.render('疫情地图.html')
###Output
_____no_output_____
###Markdown
Open the [疫情地图.html](疫情地图.html) file in the working directory; the result is shown below. But we still want to add some styling adjustments; the parameters are documented in the [official echarts configuration manual](https://www.echartsjs.com/zh/option.htmltitle)
###Code
from pyecharts import options as opts
from pyecharts.charts import Map
c = (
Map()
.add("确诊",
data1[['name','confirm']].values,
"china",
is_roam = False,#disable mouse zoom-and-pan roaming
zoom = 1.2,#zoom ratio of the current view
is_map_symbol_show = False, # whether to show marker symbols
label_opts = opts.LabelOpts(position = 'inside'),#keep labels inside the map region where possible
)
.set_global_opts(
title_opts=opts.TitleOpts(title='疫情地图'),
visualmap_opts=opts.VisualMapOpts(is_piecewise=True,#use piecewise (segmented) colors
pieces=[{'min': 10000,'label':'10000人以上'}, #define the segment boundaries
{'min': 1000, 'max': 9999,'label':'1000-9999人'},
{'min': 500, 'max': 999,'label':'500-999人'},
{'min': 100, 'max': 499,'label':'100-499人'},
{'min': 10, 'max': 99,'label':'10-99人'},
{'min': 1, 'max': 9,'label':'1-9人'}],
range_color=["#b4e0f3","#70b4eb","#1482e5","#1c3fbf","#070093" ] #adjust the display colors
),
)
)
c.render('疫情地图.html')
###Output
_____no_output_____
###Markdown
Open the [疫情地图.html](疫情地图.html) file in the working directory; the result is shown below Visualizing a single province
###Code
import pandas as pd
#extract the data for one province
province = '广东'
guangdongdata = pd.DataFrame(provincedata[provincedata['name'] == province]['children'].iloc[0])
#tidy the data: expand the fields nested inside 'total'
data2 = pd.DataFrame(list(guangdongdata['total']))
data2['name'] = guangdongdata['name']+'市'
data2.head(5)
from pyecharts import options as opts
from pyecharts.charts import Map
c = (
Map()
.add("确诊",
data2[['name','confirm']].values,
province,
is_roam = False,#disable mouse zoom-and-pan roaming
zoom = 1.2,#zoom ratio of the current view
is_map_symbol_show = False, # whether to show marker symbols
label_opts = opts.LabelOpts(position = 'inside'),#keep labels inside the map region where possible
)
.set_global_opts(
title_opts=opts.TitleOpts(title=province+'疫情地图'),
visualmap_opts=opts.VisualMapOpts(is_piecewise=True,#use piecewise (segmented) colors
pieces=[{'min': 200, 'label':'200人以上'},#define the segment boundaries
{'min': 100, 'max': 199,'label':'100-199人'},
{'min': 50, 'max': 99,'label':'50-99人'},
{'min': 10, 'max': 49,'label':'10-49人'},
{'min': 1, 'max': 9,'label':'1-9人'}],
range_color=["#b4e0f3","#70b4eb","#1482e5","#1c3fbf","#070093" ] #adjust the display colors
),
)
)
c.render(province+'疫情地图.html')
###Output
_____no_output_____ |
data/preprocessing/.ipynb_checkpoints/TCGA-PANCAN-checkpoint.ipynb | ###Markdown
TCGA-PANCAN Data Set- **Input:** 20502 gene expression - **Output:** Classification, BRCA (300), KIRC (146), LUAD (141), PRAD (136), COAD (78). Preprocess Data
###Code
raw_data = pd.read_csv(RAW_DATA_PATH)
raw_data.drop(columns=[raw_data.columns[0]], inplace=True)
raw_data.head(2)
# List of input features
feature_col_names = list(raw_data.columns)
feature_col_names.remove(target_col_name)
# Encode target data
# class_0 = 'BRCA'
# class_1 = 'KIRC'
# class_2 ='LUAD'
# class_3 = 'PRAD'
# class_4 = 'COAD'
raw_data[target_col_name].replace({'BRCA':0, 'KIRC':1, 'LUAD':2, 'PRAD':3, 'COAD':4}, inplace=True)
# Seperate input features and target column
X = raw_data.drop(columns=[target_col_name]).values
y = raw_data[target_col_name].values
# Normalise input features, i.e. scale attributes to the 0-1 range so that larger raw values do not carry more significance in the network
from sklearn.preprocessing import MinMaxScaler
scaler = MinMaxScaler()
X = scaler.fit_transform(X)
# Store preprocessed data
data = pd.DataFrame(X, columns=feature_col_names)
data[target_col_name] = y
data.head(3)
assert data.columns[-1]==target_col_name, 'Target column must be last column in DataFrame'
###Output
_____no_output_____
###Markdown
Save Clean Data
###Code
# Initialise new empty dataset folder
from model.generation.helpers import init_dataset_dir
path_to_data_folder = '../'
init_dataset_dir.run(dataset_name=dataset_name, path_to_data_folder=path_to_data_folder)
data_path = '../' + dataset_name + '/'
# Save cleaned data
data.to_csv(data_path + 'data.csv', index=False)
###Output
_____no_output_____ |
xcorr_test.ipynb | ###Markdown
###Code
!git clone https://github.com/rsh2458/dragonfly.git
!cd dragonfly && git pull
import sys
import os
import pandas as pd
# Plot values
import plotly.graph_objects as go
from plotly.subplots import make_subplots
events=os.listdir('dragonfly/csv')
events
###Output
_____no_output_____
###Markdown
3D plotsHere are some examples of getting some data into our system, and plotting them in 3d.
###Code
def getPE(p, e):
p=str(p)
e=str(e)
data = pd.read_csv('dragonfly/csv/'+p+'-'+e+'_CSV.txt',sep=';')
df = pd.DataFrame(data)
return df
###Output
_____no_output_____
###Markdown
It's hard to create these plots without real examples; [Scatter3d Examples](https://www.programcreek.com/python/example/103209/plotly.graph_objs.Scatter3d) shows some examples.
###Code
p=str(34)
e=str(53)
df=getPE(p,e)
import plotly.graph_objects as go
from plotly.subplots import make_subplots
layout = go.Layout(
# width=1024,
# height=1024,
scene = dict(
aspectmode='data',
xaxis = dict(title='x'),
yaxis = dict(title='y'),
zaxis = dict(title='height'))
)
fig=go.Figure(layout=layout)
fig1=go.Scatter3d(x=df['xt_avg_' + p],
y=df['yt_avg_' + p],
z=df['zt_avg_' + p])
fig.add_trace(fig1)
fig.add_scatter3d(
x=df['xt_avg_' + e],
y=df['yt_avg_' + e],
z=df['zt_avg_' + e])
#fig.add_scatter3d(df,
# x='xt_' + p,
# y='yt_' + p,
# z='zt_' + p)
#fig1.show()
###Output
_____no_output_____
###Markdown
GITHUB Library integrationThis example shows how you can add external GitHub libraries into your notebook. The idea is that, besides cloning the repository, you need to append the new GitHub directory to the path where Python searches for its libraries.
###Code
!git clone https://github.com/trichter/xcorr.git foobar
sys.path.append('foobar')
sys.path
###Output
fatal: destination path 'foobar' already exists and is not an empty directory.
###Markdown
Now you can import functions from the library that you cloned above. The example below imports some functions from the `xcorr.py` file found in the `foobar` directory that we've added.
###Code
import matplotlib.pyplot as plt
import numpy as np
from xcorr import correlate_maxlag, correlate_template, get_lags
np.random.seed(26)
N = 200
maxlag = 50
a = np.random.random(N)
start = N // 4
b = a[start:-start]
cc1 = correlate_maxlag(a, b, maxlag)
cc2 = correlate_template(a, b)
grid = plt.GridSpec(2, 2, wspace=0.4, hspace=0.3)
ax1 = plt.subplot(grid[0, 0:])
ax2 = plt.subplot(grid[1, 0])
ax3 = plt.subplot(grid[1, 1], sharey=ax2)
ax1.plot(np.arange(len(a)), a, label='signal a')
ax1.plot(np.arange(len(b)) + start, b, label='signal b')
ax2.plot(get_lags(cc1), cc1)
ax3.plot(cc2)
ax1.legend(loc=3)
kw = dict(xy=(0.05, 0.95), xycoords='axes fraction', va='top')
ax2.annotate('correlate_maxlag(a, b, {})'.format(maxlag), **kw)
ax3.annotate('correlate_template(a, b)', **kw)
plt.savefig('xcorr_example.png')
plt.show()
###Output
_____no_output_____ |
match cpr and eddy info and output file_updatexarray.ipynb | ###Markdown
Put files together
###Code
file1 = filename_cpr_expanded+'aviso'+'.nc'
file2 = filename_cpr_expanded+'wnd'+'.nc'
file3 = filename_cpr_expanded+'sst'+'.nc'
ds = xr.open_dataset(file1)
ds2 = xr.open_dataset(file2)
for var in ds2:
if not var in ds:
ds[var]=ds2[var]
ds2 = xr.open_dataset(file3)
for var in ds2:
if not var in ds:
ds[var]=ds2[var]
ds.to_netcdf(filename_cpr_expanded+'.nc')
df_bird = ds.to_dataframe()
df_bird.to_csv(filename_cpr_expanded+'.csv')
ds
###Output
_____no_output_____
###Markdown
collocate with eddies
###Code
ds_npac_eddy = xr.open_dataset(filename_northpac_eddies).rename({'Longitude':'lon','Latitude':'lat'})
for var in ds_npac_eddy:
ds_npac_eddy = ds_npac_eddy.rename({var:str('cpr_eddy_data_'+var)})
ds_cpr_eddy = xr.open_dataset(filename_cpr_eddy)
for var in ds_cpr_eddy:
if var[0]=='s':
ds_cpr_eddy = ds_cpr_eddy.rename({var:str('cpr_eddy_data_'+var[10:])})
else:
ds_cpr_eddy = ds_cpr_eddy.rename({var:str('cpr_eddy_data_'+var[4:])})
ds_npac_eddy.close()
ds_cpr_eddy.close()
print(ds_npac_eddy)
print(ds_cpr_eddy)
###Output
_____no_output_____
###Markdown
make single array with all info
###Code
ilen = len(ds_cpr_eddy.cpr_eddy_data_index)
for var in ds_npac_eddy:
if not var=='cpr_eddy_data_time':
ds_cpr_eddy[var]=xr.DataArray(np.nan*np.empty(ilen, dtype=str(ds_npac_eddy[var].dtype)), dims=('z'))
ds_cpr_eddy[var].attrs=ds_npac_eddy[var].attrs
else:
ds_cpr_eddy[var]=xr.DataArray(np.empty(ilen, dtype=str(ds_npac_eddy[var].dtype)), dims=('z'))
for i in range(ilen):
ii = ds_cpr_eddy.cpr_eddy_data_index[i]
for var in ds_npac_eddy:
ds_cpr_eddy[var][i]=ds_npac_eddy[var][ii]
###Output
_____no_output_____
###Markdown
check where double crossing
###Code
#proc_cpr == 1 where distance is GREATER than radius of eddy
#proc_cpr = np.where(ds_cpr_eddy.cpr_eddy_data_distance>ds_cpr_eddy.cpr_eddy_data_radius,1,0)
#proc_cpr
ilen = len(ds_cpr_eddy.cpr_eddy_data_track)
ds_cpr_eddy['num_cross']=xr.DataArray(np.zeros(ilen, dtype='int32'), dims=('z'))
ds_cpr_eddy['num_cross'].attrs={'description':'how many times eddy crossed by cpr data'}
#calculate where cpr in eddy radius, put nan where not in eddy
subset = ds_cpr_eddy.where(ds_cpr_eddy.cpr_eddy_data_distance<ds_cpr_eddy.cpr_eddy_data_radius)
#find unique eddy track ids
u, indices = np.unique(ds_cpr_eddy.cpr_eddy_data_track, return_index=True)
#cycle through each unique eddy id to find unique years
for i in range(len(u)):
ind = np.where(subset.cpr_eddy_data_track==u[i])
ind_tem = np.where(ds_cpr_eddy.cpr_eddy_data_track==u[i])
tem = subset.cpr_eddy_data_year[ind]
u1, indices1 = np.unique(tem, return_index=True)
ds_cpr_eddy.num_cross[ind_tem]=len(u1)
ds_cpr_eddy.num_cross.plot()
ds_cpr_eddy
ds_env = xr.open_dataset(filename_cpr_expanded+'.nc')
ds_env.close()
ds_env = ds_env.rename({'index':'z'})
ds_env
for var in ds_env:
var_tem = var
if not var_tem[0:3]=='cpr':
var_tem = 'cpr_sample_'+var
ds_cpr_eddy[var_tem]=xr.DataArray(ds_env[var].data, dims=('z'))
ds_cpr_eddy[var_tem].attrs=ds_env[var].attrs
ds_cpr_eddy
filename_cpr_expanded='F:/data/project_data/NASA_biophysical/collocated_data/CPR/All CPR Sample catalogue with eddy info_2020_10_06'
ds_cpr_eddy.to_netcdf(filename_cpr_expanded+'.nc')
df_bird = ds_cpr_eddy.to_dataframe()
df_bird.to_csv(filename_cpr_expanded+'.csv')
filename_cpr_expanded_netcdf='F:/data/project_data/NASA_biophysical/collocated_data/CPR/All CPR Sample catalogue with eddy info4.nc'
ds_tem = xr.open_dataset(filename_cpr_expanded_netcdf)
ds_tem.close()
ds_tem.num_cross.plot()
print(ds_tem)
#check on subset
#subset = ds_cpr_eddy.where(ds_cpr_eddy.cpr_eddy_data_distance<ds_cpr_eddy.cpr_eddy_data_radius)
#for i in range(100):
# print(subset.cpr_eddy_data_track[i].data,ds_cpr_eddy.cpr_eddy_data_distance[i].data,ds_cpr_eddy.cpr_eddy_data_radius[i].data)
#print(filename_eddy)
#ds_unique = xr.open_dataset(filename_eddy,group='eddy_data')
#ds_unique.close()
#ds_unique
ds_cpr_eddy.cpr_eddy_data_cyclonic_type.plot()
plt.plot(ds_cpr_eddy.cpr_eddy_data_distance)
plt.plot(ds_cpr_eddy.cpr_eddy_data_radius)
for i in range(10):
print(ds_cpr_eddy.cpr_eddy_data_distance[i].data,ds_cpr_eddy.cpr_eddy_data_radius[i].data)
#cpr_eddy_data_speed_radius_deg[index]=speed_radius_eddy[index_eddy]*cos(radians(lats_eddy[index_eddy]))/111.0
#proc_cpr == 1 where distance is GREATER than radius of eddy
proc_cpr = np.where(ds_cpr_eddy.cpr_eddy_data_distance>ds_cpr_eddy.cpr_eddy_data_radius,1,0)
proc_cpr
import numpy.ma as ma
from numpy import *
#remove masked values from data
data = np.ma.filled(cpr_sample_ucur, np.nan)
data[isnan(data)] = -9999
cpr_sample_ucur2=data
data = np.ma.filled(cpr_sample_vcur, np.nan)
data[isnan(data)] = -9999
cpr_sample_vcur2=data
data = np.ma.filled(cpr_sample_ucur_clim, np.nan)
data[isnan(data)] = -9999
cpr_sample_ucur_clim2=data
data = np.ma.filled(cpr_sample_vcur_clim, np.nan)
data[isnan(data)] = -9999
cpr_sample_vcur_clim2=data
data = np.ma.filled(cpr_sample_sst, np.nan)
data[isnan(data)] = -9999
cpr_sample_sst2=data
data = np.ma.filled(cpr_sample_sst_clim, np.nan)
data[isnan(data)] = -9999
cpr_sample_sst_clim2=data
data = np.ma.filled(cpr_sample_uwnd, np.nan)
data[isnan(data)] = -9999
cpr_sample_uwnd2=data
data = np.ma.filled(cpr_sample_uwnd_clim, np.nan)
data[isnan(data)] = -9999
cpr_sample_uwnd_clim2=data
data = np.ma.filled(cpr_sample_vwnd, np.nan)
data[isnan(data)] = -9999
cpr_sample_vwnd2=data
data = np.ma.filled(cpr_sample_vwnd_clim, np.nan)
data[isnan(data)] = -9999
cpr_sample_vwnd_clim2=data
#print(shape(df))
#print(shape(cpr_sample_jday))
#df_time=[0] * (ilen_cpr)
#print(ilen_cpr)
#for index in range(0,ilen_cpr):
# df_time[index] = dt.datetime(cpr_sample_year[index],cpr_sample_month[index],cpr_sample_day[index])
#df_vars=['Sample ID','day','month','year','lat','lon','already processed?','ETOPO_depth (m) nearest neighbor','ETOPO_depth (m) interp','SST CMC 2.0','SST Climatology CMC 2.0','U_wnd CCMC m/s','V_wnd CCMC m/s','Climatology U_wnd CCMC m/s','Climatology V_wnd CCMC m/s','U_cur oscar m/s','V_cur oscar m/s','Climatology U_cur oscar m/s','Climatology V_cur oscar m/s']
#print(shape(df_time))
#print(shape(df_vars))
#print(type(df_time))
#type(df)
#print(type(df))
#print(shape(df))
##print(shape(df_time))
#print(shape(df_vars))
#df_out = xr.DataArray(df, coords=[df_time,df_vars]) #, dims=['time' 'vars'])
#df_out.to_netcdf(filename_cpr_expanded_netcdf)
#df_test=xr.open_dataset(filename_cpr_expanded_netcdf)
#df_test
print(len(cpr_sample_sst2))
print(cpr_sample_sst2[-11:-1])
#output in netcdf
#get the values for a given column
#f.close()
filename_cpr_expanded_netcdf='f:/data/eddy/collocated_data/All CPR Sample catalogue with eddy info4.nc'
print(type(cpr_sample_id))
print(len(cpr_sample_id))
print(cpr_sample_ucur_clim[9:10])
print(cpr_sample_ucur[9:10])
#f.close()
ilen_cpr=len(cpr_sample_id)
f = Dataset(filename_cpr_expanded_netcdf,'w', format='NETCDF4') #'w' stands for write
#tempgrp = f.createGroup('CPR_data')
f.createDimension('z', ilen_cpr)
cpr_sample_id_netcdf = f.createVariable('cpr_sample_id', 'str', 'z')
cpr_sample_day_netcdf = f.createVariable('cpr_sample_day', 'i4', 'z')
cpr_sample_month_netcdf = f.createVariable('cpr_sample_month', 'i4', 'z')
cpr_sample_year_netcdf =f.createVariable('cpr_sample_year', 'i4', 'z')
cpr_sample_lat_netcdf = f.createVariable('cpr_sample_lat', 'f4', 'z')
cpr_sample_lon_netcdf = f.createVariable('cpr_sample_lon', 'f4', 'z')
cpr_sample_proc_netcdf = f.createVariable('cpr_sample_proc', 'c', 'z')
eddy_dist_netcdf = f.createVariable('cpr_eddy_data_distance', 'f4', 'z')
eddy_dist_from_land_netcdf = f.createVariable('cpr_eddy_data_distance_from_land', 'f4', 'z')
eddy_rad_netcdf = f.createVariable('cpr_eddy_data_radius', 'f4', 'z')
eddy_lon_netcdf = f.createVariable('cpr_eddy_data_lons', 'f4', 'z')
eddy_lat_netcdf = f.createVariable('cpr_eddy_data_lats', 'f4', 'z')
eddy_time_netcdf = f.createVariable('cpr_eddy_data_time', 'f4', 'z')
eddy_amp_netcdf = f.createVariable('cpr_eddy_data_amplitude', 'f4', 'z')
eddy_spd_netcdf = f.createVariable('cpr_eddy_data_speed_average', 'f4', 'z')
eddy_rad2_netcdf = f.createVariable('cpr_eddy_data_speed_radius', 'f4', 'z')
eddy_cyc_netcdf = f.createVariable('cpr_eddy_data_cyclonic_type', 'i4', 'z')
eddy_id_netcdf = f.createVariable('cpr_eddy_data_track_id', 'i4', 'z')
eddy_tdy_netcdf = f.createVariable('cpr_eddy_data_total_days', 'i4', 'z')
eddy_ob_netcdf = f.createVariable('cpr_eddy_data_ob_num', 'i4', 'z')
eddy_yr_netcdf = f.createVariable('cpr_eddy_data_year', 'i4', 'z')
eddy_dy_netcdf = f.createVariable('cpr_eddy_data_idyjl', 'i4', 'z')
eddy_crossings_netcdf = f.createVariable('num_cross', 'i4', 'z')
ucur_netcdf = f.createVariable('cpr_sample_oscar_ucur', 'f4', 'z')
vcur_netcdf = f.createVariable('cpr_sample_oscar_vcur', 'f4', 'z')
ucur_clim_netcdf = f.createVariable('cpr_sample_oscar_ucur_clim', 'f4', 'z')
vcur_clim_netcdf = f.createVariable('cpr_sample_oscar_vcur_clim', 'f4', 'z')
sst_netcdf = f.createVariable('cpr_sample_cmc_sst', 'f4', 'z')
sst_clim_netcdf = f.createVariable('cpr_sample_cmc_sst_clim', 'f4', 'z')
uwnd_netcdf = f.createVariable('cpr_sample_ccmp_uwnd', 'f4', 'z')
uwnd_clim_netcdf = f.createVariable('cpr_sample_ccmp_uwnd_clim', 'f4', 'z')
vwnd_netcdf = f.createVariable('cpr_sample_ccmp_vwnd', 'f4', 'z')
vwnd_clim_netcdf = f.createVariable('cpr_sample_ccmp_vwnd_clim', 'f4', 'z')
depth_netcdf = f.createVariable('cpr_sample_ETOPO_depth', 'f4', 'z')
tem=cpr_sample_id.tolist()
print(type(tem))
print(tem[0:10])
cpr_sample_id_netcdf[:] = cpr_sample_id #tem
cpr_sample_day_netcdf[:] = cpr_sample_day
cpr_sample_month_netcdf[:] = cpr_sample_month
cpr_sample_year_netcdf[:] = cpr_sample_year
cpr_sample_lat_netcdf[:] = cpr_sample_lat
cpr_sample_lon_netcdf[:] = cpr_sample_lon
cpr_sample_proc_netcdf[:] = cpr_sample_proc
eddy_dist_netcdf[:] = cpr_eddy_data_distance
eddy_dist_from_land_netcdf[:] = cpr_eddy_data_distance_from_land
eddy_rad_netcdf[:] = cpr_eddy_data_radius
eddy_lon_netcdf[:] = cpr_eddy_data_lons
eddy_lat_netcdf[:] = cpr_eddy_data_lats
eddy_time_netcdf[:] = cpr_eddy_data_time
eddy_amp_netcdf[:] = cpr_eddy_data_amplitude
eddy_spd_netcdf[:] = cpr_eddy_data_speed_average
eddy_rad2_netcdf[:] = cpr_eddy_data_speed_radius
eddy_cyc_netcdf[:] = cpr_eddy_data_cyclonic_type
eddy_id_netcdf[:] = cpr_eddy_data_track_id
eddy_tdy_netcdf[:] = cpr_eddy_data_total_days
eddy_ob_netcdf[:] = cpr_eddy_data_ob_num
eddy_yr_netcdf[:] = cpr_eddy_data_year
eddy_dy_netcdf[:] = cpr_eddy_data_idyjl
eddy_crossings_netcdf[:] = num_cross
ucur_netcdf[:] =cpr_sample_ucur2
vcur_netcdf[:] =cpr_sample_vcur2
ucur_clim_netcdf[:] = cpr_sample_ucur_clim2
vcur_clim_netcdf[:] = cpr_sample_vcur_clim2
sst_netcdf[:] =cpr_sample_sst2
sst_clim_netcdf[:] =cpr_sample_sst_clim2
uwnd_netcdf[:] =cpr_sample_uwnd2
uwnd_clim_netcdf[:] =cpr_sample_uwnd_clim2
vwnd_netcdf[:] =cpr_sample_vwnd2
vwnd_clim_netcdf[:] =cpr_sample_vwnd_clim2
depth_netcdf[:] =cpr_sample_depth_exact
f.close()
df_test=xr.open_dataset(filename_cpr_expanded_netcdf)
df_test.cpr_sample_id
#into excel file
#from pandas import DataFrame
#tem=cpr_sample_id.tolist()
#df = DataFrame({'CPR Sample ID': tem, 'CPR sample day': cpr_sample_day})
#print(filename_cpr_expanded)
#df.to_excel('filename_cpr_expanded,', sheet_name='sheet1', index=False)
#find number of crossings
print(cpr_eddy_data_speed_radius[1],cpr_eddy_data_speed_radius_deg[1])
filename_cpr
wb = openpyxl.load_workbook(filename_cpr)
sheet=wb['2000_2016'] #sheet = wb.get_sheet_by_name('2000_2016')
for i in range(0,1):
sheet['A' + str(i + 1)].value = 'cpr_sample_id'
sheet['B' + str(i + 1)].value = 'cpr_sample_day'
sheet['C' + str(i + 1)].value = 'cpr_sample_month'
sheet['D' + str(i + 1)].value = 'cpr_sample_year'
sheet['E' + str(i + 1)].value = 'cpr_sample_lat'
sheet['F' + str(i + 1)].value = 'cpr_sample_lon'
sheet['G' + str(i + 1)].value = 'cpr_sample_proc'
sheet['H' + str(i + 1)].value = 'eddy_data_track_id'
sheet['I' + str(i + 1)].value = 'eddy_data_distance'
sheet['J' + str(i + 1)].value = 'eddy_data_distance_from_land'
sheet['K' + str(i + 1)].value = 'eddy_data_radius'
sheet['L' + str(i + 1)].value = 'eddy_data_lons'
sheet['M' + str(i + 1)].value = 'eddy_data_lats'
sheet['N' + str(i + 1)].value = 'eddy_data_time'
sheet['O' + str(i + 1)].value = 'eddy_data_amplitude'
sheet['P' + str(i + 1)].value = 'eddy_data_speed_average'
sheet['Q' + str(i + 1)].value = 'eddy_data_speed_radius'
sheet['R' + str(i + 1)].value = 'eddy_data_cyclonic_type'
sheet['S' + str(i + 1)].value = 'eddy_data_total_days'
sheet['T' + str(i + 1)].value = 'eddy_data_ob_num'
sheet['U' + str(i + 1)].value = 'eddy_data_year'
sheet['V' + str(i + 1)].value = 'eddy_data_idyjl'
sheet['W' + str(i + 1)].value = 'number_times_cpr_crosses_this_eddy'
sheet['X' + str(i + 1)].value = 'cpr_sample_oscar_ucur'
sheet['Y' + str(i + 1)].value = 'cpr_sample_oscar_vcur'
sheet['Z' + str(i + 1)].value = 'cpr_sample_oscar_ucur_clim'
sheet['AA' + str(i + 1)].value = 'cpr_sample_oscar_vcur_clim'
sheet['AB' + str(i + 1)].value = 'cpr_sample_cmc_sst'
sheet['AC' + str(i + 1)].value = 'cpr_sample_cmc_sst_clim'
sheet['AD' + str(i + 1)].value = 'cpr_sample_ccmp_uwnd'
sheet['AE' + str(i + 1)].value = 'cpr_sample_ccmp_uwnd_clim'
sheet['AF' + str(i + 1)].value = 'cpr_sample_ccmp_vwnd'
sheet['AG' + str(i + 1)].value = 'cpr_sample_ccmp_vwnd_clim'
sheet['AH' + str(i + 1)].value = 'cpr_sample_ETOPO_depth'
ilen_cpr=len(cpr_sample_id)
cpr_eddy_data_lons2=cpr_eddy_data_lons
for i in range(0,ilen):
if cpr_eddy_data_lons[i]>180.:
cpr_eddy_data_lons2[i]=cpr_eddy_data_lons[i]-360.
for i in range(0,ilen_cpr):
sheet['A' + str(i + 2)].value = cpr_sample_id[i]
sheet['B' + str(i + 2)].value = cpr_sample_day[i]
sheet['C' + str(i + 2)].value = cpr_sample_month[i]
sheet['D' + str(i + 2)].value = cpr_sample_year[i]
sheet['E' + str(i + 2)].value = cpr_sample_lat[i]
sheet['F' + str(i + 2)].value = cpr_sample_lon[i]
sheet['G' + str(i + 2)].value = cpr_sample_proc[i]
sheet['H' + str(i + 2)].value = cpr_eddy_data_track_id[i]
sheet['I' + str(i + 2)].value = cpr_eddy_data_distance[i]
sheet['J' + str(i + 2)].value = cpr_eddy_data_distance_from_land[i]
sheet['K' + str(i + 2)].value = cpr_eddy_data_radius[i]
sheet['L' + str(i + 2)].value = cpr_eddy_data_lons2[i]
sheet['M' + str(i + 2)].value = cpr_eddy_data_lats[i]
sheet['N' + str(i + 2)].value = cpr_eddy_data_time[i]
sheet['O' + str(i + 2)].value = cpr_eddy_data_amplitude[i]
sheet['P' + str(i + 2)].value = cpr_eddy_data_speed_average[i]
sheet['Q' + str(i + 2)].value = cpr_eddy_data_speed_radius[i]
sheet['R' + str(i + 2)].value = cpr_eddy_data_cyclonic_type[i]
sheet['S' + str(i + 2)].value = cpr_eddy_data_total_days[i]
sheet['T' + str(i + 2)].value = cpr_eddy_data_ob_num[i]
sheet['U' + str(i + 2)].value = cpr_eddy_data_year[i]
sheet['V' + str(i + 2)].value = cpr_eddy_data_idyjl[i]
sheet['W' + str(i + 2)].value = num_cross[i]
sheet['X' + str(i + 2)].value = cpr_sample_ucur2[i]
sheet['Y' + str(i + 2)].value = cpr_sample_vcur2[i]
sheet['Z' + str(i + 2)].value = cpr_sample_ucur_clim2[i]
sheet['AA' + str(i + 2)].value = cpr_sample_vcur_clim2[i]
sheet['AB' + str(i + 2)].value = cpr_sample_sst2[i]
sheet['AC' + str(i + 2)].value = cpr_sample_sst_clim2[i]
sheet['AD' + str(i + 2)].value = cpr_sample_uwnd2[i]
sheet['AE' + str(i + 2)].value = cpr_sample_uwnd_clim2[i]
sheet['AF' + str(i + 2)].value = cpr_sample_vwnd2[i]
sheet['AG' + str(i + 2)].value = cpr_sample_vwnd_clim2[i]
sheet['AH' + str(i + 2)].value = cpr_sample_depth_exact[i]
wb.save(filename_cpr_expanded)
f = plt.figure()
clats=[]
clons=[]
clats2=[]
clons2=[]
for i in range(0,len(cpr_sample_lat)):
tem=cpr_sample_proc[i]
if cpr_eddy_data_distance[i]<=cpr_eddy_data_radius[i] and tem=='Yes' :
clats.append(cpr_sample_lat[i])
clons.append(cpr_sample_lon[i])
elif cpr_eddy_data_distance[i]<=cpr_eddy_data_radius[i] and tem=='No' :
clats2.append(cpr_sample_lat[i])
clons2.append(cpr_sample_lon[i])
map = Basemap(projection='merc', lat_0 = 45, lon_0 = -130, resolution = 'l', area_thresh = 0.1,
llcrnrlon=-180.25, llcrnrlat=30.0,urcrnrlon=-115.25, urcrnrlat=62.75)
#map.drawcoastlines()
#map.drawcountries()
map.fillcontinents(color = 'coral')
#map.drawmapboundary()
#xx=cpr_sample_lon[i]
#map.plot(xx,yy,'ko',markersize=24)
x,y = map(clons,clats)
map.plot(x, y, 'bo', markersize=.2)
x,y = map(clons2,clats2)
map.plot(x, y, 'ro', markersize=.2)
plt.show()
f.savefig("F:/data/eddy/figures/all_collocated_cpr_data.pdf", bbox_inches='tight')
print(cpr_eddy_data_speed_radius[1],cpr_eddy_data_speed_radius_deg[1])
f = plt.figure()
clats=[]
clons=[]
clats2=[]
clons2=[]
elats=[]
elons=[]
erads=[]
erads2=[]
espokes=[]
ecross=[]
for i in range(0,len(cpr_sample_lat)):
tem=cpr_sample_proc[i]
if cpr_eddy_data_distance[i]<=cpr_eddy_data_radius[i] and tem=='Yes' :
clats.append(cpr_sample_lat[i])
clons.append(cpr_sample_lon[i])
elats.append(cpr_eddy_data_lats[i])
elons.append(cpr_eddy_data_lons2[i])
erads.append(cpr_eddy_data_speed_radius[i])
erads2.append(cpr_eddy_data_speed_radius_deg[i])
ecross.append(num_cross[i])
espokes.append(50)
elif cpr_eddy_data_distance[i]<=cpr_eddy_data_radius[i] and tem=='No' :
clats2.append(cpr_sample_lat[i])
clons2.append(cpr_sample_lon[i])
elats.append(cpr_eddy_data_lats[i])
elons.append(cpr_eddy_data_lons2[i])
erads.append(cpr_eddy_data_speed_radius[i])
erads2.append(cpr_eddy_data_speed_radius_deg[i])
ecross.append(num_cross[i])
espokes.append(50)
print(cpr_eddy_data_speed_radius[1],cpr_eddy_data_speed_radius_deg[1])
def shoot(lon, lat, azimuth, maxdist=None):
"""Shooter Function
Original javascript on http://williams.best.vwh.net/gccalc.htm
Translated to python by Thomas Lecocq
"""
glat1 = lat * np.pi / 180.
glon1 = lon * np.pi / 180.
s = maxdist / 1.852
faz = azimuth * np.pi / 180.
EPS= 0.00000000005
if ((np.abs(np.cos(glat1))<EPS) and not (np.abs(np.sin(faz))<EPS)):
alert("Only N-S courses are meaningful, starting at a pole!")
a=6378.13/1.852
f=1/298.257223563
r = 1 - f
tu = r * np.tan(glat1)
sf = np.sin(faz)
cf = np.cos(faz)
if (cf==0):
b=0.
else:
b=2. * np.arctan2 (tu, cf)
cu = 1. / np.sqrt(1 + tu * tu)
su = tu * cu
sa = cu * sf
c2a = 1 - sa * sa
x = 1. + np.sqrt(1. + c2a * (1. / (r * r) - 1.))
x = (x - 2.) / x
c = 1. - x
c = (x * x / 4. + 1.) / c
d = (0.375 * x * x - 1.) * x
tu = s / (r * a * c)
y = tu
c = y + 1
while (np.abs (y - c) > EPS):
sy = np.sin(y)
cy = np.cos(y)
cz = np.cos(b + y)
e = 2. * cz * cz - 1.
c = y
x = e * cy
y = e + e - 1.
y = (((sy * sy * 4. - 3.) * y * cz * d / 6. + x) *
d / 4. - cz) * sy * d + tu
b = cu * cy * cf - su * sy
c = r * np.sqrt(sa * sa + b * b)
d = su * cy + cu * sy * cf
glat2 = (np.arctan2(d, c) + np.pi) % (2*np.pi) - np.pi
c = cu * cy - su * sy * cf
x = np.arctan2(sy * sf, c)
c = ((-3. * c2a + 4.) * f + 4.) * c2a * f / 16.
d = ((e * cy * c + cz) * sy * c + y) * sa
glon2 = ((glon1 + x - (1. - c) * d * f + np.pi) % (2*np.pi)) - np.pi
baz = (np.arctan2(sa, b) + np.pi) % (2 * np.pi)
glon2 *= 180./np.pi
glat2 *= 180./np.pi
baz *= 180./np.pi
return (glon2, glat2, baz)
def equi(m, centerlon, centerlat, radius, *args, **kwargs):
glon1 = centerlon
glat1 = centerlat
X = []
Y = []
for azimuth in range(0, 360):
glon2, glat2, baz = shoot(glon1, glat1, azimuth, radius)
X.append(glon2)
Y.append(glat2)
X.append(X[0])
Y.append(Y[0])
#m.plot(X,Y,**kwargs) #Should work, but doesn't...
X,Y = m(X,Y)
plt.plot(X,Y,**kwargs)
fig = plt.figure(figsize=(11.7,8.3))
#Custom adjust of the subplots
plt.subplots_adjust(left=0.05,right=0.95,top=0.90,bottom=0.05,wspace=0.15,hspace=0.05)
ax = plt.subplot(111)
print(cpr_eddy_data_speed_radius[1],cpr_eddy_data_speed_radius_deg[1])
#Let's create a basemap of the world
m = Basemap(projection='merc', lat_0 = 45, lon_0 = -130, resolution = 'l', area_thresh = 0.1,
llcrnrlon=-180.25, llcrnrlat=30.0,urcrnrlon=-115.25, urcrnrlat=62.75)
m.fillcontinents(color='coral',lake_color='white')
x,y = m(clons,clats)
m.plot(x, y, 'bo', markersize=.2)
x,y = m(clons2,clats2)
m.plot(x, y, 'ro', markersize=.2)
for i in range(0,len(erads)):
centerlon = elons[i]
centerlat = elats[i]
radius = erads[i]
if abs(centerlon-erads2[i])<177:
equi(m, centerlon, centerlat, radius,lw=1.)
plt.savefig("F:/data/eddy/figures/all_collocated_cpr_data3.pdf",dpi=300)
plt.show()
#make a list of eddy id that have two visits
#make icheck have a 1 where trackid has more than two visits
icheck=[]
for i in range(0,len(cpr_sample_lat)):
if num_cross[i]>1:
itest=0
for i2 in range(0,len(icheck)):
if icheck[i2]==cpr_eddy_data_track_id[i]:
itest=1
if itest==0:
icheck.append(cpr_eddy_data_track_id[i])
print(icheck)
#now just do for eddies that have 2 visits
for i_tem in range(0,len(icheck)):
tem_id=icheck[i_tem]
#get all lat/lon for specific eddy to pring
alats=[]
alons=[]
for i in range(0,len(lons_eddy)):
if tem_id==track_eddy[i]:
alats.append(lats_eddy[i])
if lons_eddy[i]<=180:
alons.append(lons_eddy[i])
if lons_eddy[i]>180:
alons.append(lons_eddy[i]-360)
clats=[]
clons=[]
clats2=[]
clons2=[]
elats=[]
elons=[]
erads=[]
erads2=[]
for i in range(0,len(cpr_sample_lat)):
tem=cpr_sample_proc[i]
if cpr_sample_lon[i]>0:
continue
if cpr_eddy_data_distance[i]<=cpr_eddy_data_radius[i] \
and tem=='Yes' and cpr_eddy_data_track_id[i]==tem_id:
clats.append(cpr_sample_lat[i])
clons.append(cpr_sample_lon[i])
elats.append(cpr_eddy_data_lats[i])
elons.append(cpr_eddy_data_lons2[i])
erads.append(cpr_eddy_data_speed_radius[i])
erads2.append(cpr_eddy_data_speed_radius_deg[i])
elif cpr_eddy_data_distance[i]<=cpr_eddy_data_radius[i] \
and tem=='No' and cpr_eddy_data_track_id[i]==tem_id:
clats2.append(cpr_sample_lat[i])
clons2.append(cpr_sample_lon[i])
elats.append(cpr_eddy_data_lats[i])
elons.append(cpr_eddy_data_lons2[i])
erads.append(cpr_eddy_data_speed_radius[i])
erads2.append(cpr_eddy_data_speed_radius_deg[i])
if len(clons2)<1 and len(clats)<1:
continue
fig = plt.figure(figsize=(11.7,8.3))
#Custom adjust of the subplots
plt.subplots_adjust(left=0.05,right=0.95,top=0.90,bottom=0.05,wspace=0.15,hspace=0.05)
ax = plt.subplot(111)
print(cpr_eddy_data_speed_radius[1],cpr_eddy_data_speed_radius_deg[1])
#Let's create a basemap of the world
m = Basemap(projection='merc', lat_0 = 45, lon_0 = -130, resolution = 'l', area_thresh = 0.1,
llcrnrlon=-180.25, llcrnrlat=30.0,urcrnrlon=-115.25, urcrnrlat=62.75)
m.fillcontinents(color='coral',lake_color='white')
x,y = m(clons,clats)
m.plot(x, y, 'bo', markersize=.2)
x,y = m(clons2,clats2)
m.plot(x, y, 'ro', markersize=.2)
x,y = m(alons,alats)
m.plot(x, y, 'k')
for i in range(0,len(erads)):
centerlon = elons[i]
centerlat = elats[i]
radius = erads[i]
if abs(centerlon-erads2[i])<177:
equi(m, centerlon, centerlat, radius,lw=1.)
# plt.show()
fig_fname="F:/data/eddy/figures/all_collocated_cpr_data_doubles" + str(tem_id) + ".pdf"
plt.savefig(fig_fname,dpi=300)
print(fig_fname)
print(len(alons))
print(alons[1:200])
for i in range(0,len(cpr_sample_lat)):
tem=cpr_sample_proc[i]
if cpr_eddy_data_track_id[i]==tem_id:
print(i,cpr_eddy_data_distance[i],cpr_eddy_data_radius[i],tem)
print('clons2',clons2)
print('clats2',clats2)
print(len(clons))
print('clons',clons)
print('clats',clats)
filename='F:/data/eddy/collocated_data/All CPR Sample catalogue with eddy info4.nc'
ds_eddy = xr.open_dataset(filename)
ds_eddy
ds_eddy.cpr_sample_id[2].values
print(type(ds_eddy))
fig, (ax1) = plt.subplots(nrows=1, figsize=(6, 5.4))
#f = plt.figure()
#map = Basemap(projection='merc', lat_0 = 45, lon_0 = -130, resolution = 'l', area_thresh = 0.1,
# llcrnrlon=-180.25, llcrnrlat=30.0,urcrnrlon=-115.25, urcrnrlat=62.75)
#map.fillcontinents(color = 'coral')
#x,y = map(ds_eddy.cpr_sample_lon.values,ds_eddy.cpr_sample_lat.values)
d2=ds_eddy.where(ds_eddy.cpr_sample_lon<0)
print(len(d2))
print(type(d2))
ax1.scatter(d2.cpr_sample_lon.values,d2.cpr_sample_lat.values, c = cpr_sample_depth_exact,s=1)
#plt.scatter(ds_eddy.cpr_sample_lon.values,ds_eddy.cpr_sample_lat.values, c = ds_eddy.cpr_sample_ETOPO_depth.values)
#plt.plot(ds_eddy.cpr_sample_lon.values[0:1000],ds_eddy.cpr_sample_ETOPO_depth.values[0:1000],'.')
plt.show()
f.savefig('F:/data/eddy/collocated_data/depth_image.png', transparent=False, format='png')
fig, (ax1) = plt.subplots(nrows=1, figsize=(6, 5.4))
im = ax1.imshow(ds_topo.z[7000:9500,0:4500], interpolation='bilinear',vmin=-7000.0, vmax=1.0,aspect='auto',origin='lower')
plt.show()
ds_eddy.cpr_sample_ETOPO_depth.values[0:10]
ds_eddy.cpr_sample_id[0:1000]
dir_pattern_zarr = 'F:/data/sat_data/sst/cmc/zarr/'
ds_sst= xr.open_zarr(dir_pattern_zarr)
ds_sst
###Output
_____no_output_____ |
Preco_a_termo.ipynb | ###Markdown
###Code
#encoding: utf-8
#encoding: iso-8859-1
#encoding: win-1252
#Activity developed for the Derivatives and Portfolio Management course
#The University of Campinas - UNICAMP
#Developed by: José Wellington Albuquerque
from math import e
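# cost-of-carry pricing for an investment asset with no income:
#   F_0 = S_0 * e**(r*T), with the maturity T expressed in years (here T = months/12)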
S_o = input("What is the value of the forward contract on the investment asset, in R$? \n")
t_inicial = input("How many months: \n")
t_final = 12
juros_aa = input("Enter the annual interest rate: \n")
n=((float(juros_aa))*(int(t_inicial))/(int(t_final)))
F_o=(float(S_o))*(e**n)
print ('The forward price is R$',("%.2f"% F_o), '!\n\n')
if (F_o > (float(S_o))*(e**(float(juros_aa))*(int(t_final)))):
print ("Buy the asset and sell it short via forward contracts on the asset!!")
else:
print ("Short-sell the asset and take long forward contracts on it!!\n")
###Output
What is the value of the forward contract on the investment asset, in R$?
32
How many months:
3
Enter the annual interest rate:
0.05
The forward price is R$ 32.40 !
Short-sell the asset and take long forward contracts on it!!
|
Fase 2 - Manejo de datos y optimizacion/Tema 06 - Programacion de funciones/Ejercicios/Enunciados.ipynb | ###Markdown
Topic 06: Programming with functions (Exercises)*Note: These exercises are optional, meant to be done at the end of the unit, and are intended to support your learning*. **1) Write a function called area_rectangulo() that returns the area of a rectangle given a base and a height. Compute the area of a rectangle with base 15 and height 10.***Note: The area of a rectangle is obtained by multiplying the base by the height.*
###Code
# Complete the exercise here
###Output
_____no_output_____
###Markdown
**2) Write a function called area_circulo() that returns the area of a circle given a radius. Compute the area of a circle of radius 5: **Note: The area of a circle is obtained by squaring the radius and multiplying the result by the number pi. You can use the value 3.14159 as pi or import it from the math module:```pythonimport mathprint(math.pi)> 3.1415...```
###Code
# Complete the exercise here
###Output
_____no_output_____
###Markdown
**3) Write a function called relacion() that, given two numbers, behaves as follows**:* If the first number is greater than the second, it must return 1.* If the first number is less than the second, it must return -1.* If both numbers are equal, it must return 0.** Check the relation between the numbers: '5 and 10', '10 and 5' and '5 and 5'**
###Code
# Complete the exercise here
###Output
_____no_output_____
###Markdown
**4) Write a function called intermedio() that, given two numbers, returns their midpoint:***Note: The midpoint of two numbers is the sum of the two numbers divided by 2*** Check the midpoint between -12 and 24**
###Code
# Complete the exercise here
###Output
_____no_output_____
###Markdown
**5) Write a function called recortar() that takes three parameters. The first is the number to clamp, the second is the lower bound and the third is the upper bound. The function must do the following:*** Return the lower bound if the number is below it* Return the upper bound if the number is above it.* Return the number unchanged if no bound is exceeded.** Check the result of clamping 15 between the bounds 0 and 10**
###Code
# Complete the exercise here
###Output
_____no_output_____
###Markdown
**6) Write a function separar() that takes a list of integers and returns two sorted lists. The first with the even numbers, and the second with the odd numbers:**For example: ```pythonpares, impares = separar([6,5,2,1,7])print(pares) would be [2, 6]print(impares) would be [1, 5, 7]```*Note: To sort a list in place you can use the .sort() method.*
###Code
numeros = [-12, 84, 13, 20, -33, 101, 9]
# Complete the exercise here
###Output
_____no_output_____
###Markdown
Topic 06: Programming with functions (Exercises)*Note: These exercises are optional, meant to be done at the end of the unit, and are intended to support your learning*. **1) Write a function called area_rectangulo() that returns the area of a rectangle given a base and a height. Compute the area of a rectangle with base 15 and height 10.***Note: The area of a rectangle is obtained by multiplying the base by the height.*
###Code
# Complete the exercise here
def area_rectangulo(base, altura):
return base * altura
resultado = area_rectangulo(15, 10)
print("The area of a rectangle with base 15 and height 10 is:", resultado)
###Output
The area of a rectangle with base 15 and height 10 is: 150
###Markdown
**2) Write a function called area_circulo() that returns the area of a circle given a radius. Compute the area of a circle of radius 5: **Note: The area of a circle is obtained by squaring the radius and multiplying the result by the number pi. You can use the value 3.14159 as pi or import it from the math module:```pythonimport mathprint(math.pi)> 3.1415...```
###Code
# Complete the exercise here
import math
def area_circulo(radio):
return (radio**2) * math.pi
resultado = area_circulo(radio = 5)
print("The area of a circle with radius 5 is:", resultado)
###Output
The area of a circle with radius 5 is: 78.53981633974483
###Markdown
**3) Write a function called relacion() that, given two numbers, behaves as follows**:* If the first number is greater than the second, it must return 1.* If the first number is less than the second, it must return -1.* If both numbers are equal, it must return 0.** Check the relation between the numbers: '5 and 10', '10 and 5' and '5 and 5'**
###Code
# Complete the exercise here
def relacion(num1, num2):
result = 0
if num1 > num2:
result = 1
elif num2 > num1:
result = -1
return result
print("Between 5 and 10:", relacion(5, 10))
print("Between 10 and 5:", relacion(10, 5))
print("Between 5 and 5:", relacion(5, 5))
###Output
Between 5 and 10: -1
Between 10 and 5: 1
Between 5 and 5: 0
###Markdown
**4) Write a function called intermedio() that, given two numbers, returns their midpoint:***Note: The midpoint of two numbers is the sum of the two numbers divided by 2*** Check the midpoint between -12 and 24**
###Code
# Complete the exercise here
def intermedio(num1, num2):
return (num1 + num2) / 2
print("The midpoint is:", intermedio(-12, 24))
###Output
The midpoint is: 6.0
###Markdown
**5) Write a function called recortar() that takes three parameters. The first is the number to clamp, the second is the lower bound and the third is the upper bound. The function must do the following:*** Return the lower bound if the number is below it* Return the upper bound if the number is above it.* Return the number unchanged if no bound is exceeded.** Check the result of clamping 15 between the bounds 0 and 10**
###Code
# Complete the exercise here
def recortar(num, limiteInferior, limiteSuperior):
if num < limiteInferior:
return limiteInferior
elif num > limiteSuperior:
return limiteSuperior
else:
return num
print("Clamping 15 between the bounds 0 and 10: ", recortar(15, 0, 10))
###Output
Clamping 15 between the bounds 0 and 10:  10
###Markdown
**6) Write a function separar() that takes a list of integers and returns two sorted lists. The first with the even numbers, and the second with the odd numbers:**For example: ```pythonpares, impares = separar([6,5,2,1,7])print(pares) would be [2, 6]print(impares) would be [1, 5, 7]```*Note: To sort a list in place you can use the .sort() method.*
###Code
numeros = [-12, 84, 13, 20, -33, 101, 9]
# Complete the exercise here
def separar(numeros):
pares = []
impares = []
numeros.sort()
for num in numeros:
if num % 2 == 0:
pares.append(num)
else:
impares.append(num)
return pares, impares
pares, impares = separar([6,5,2,1,7])
print(pares)
print(impares)
pares, impares = separar([1, 5, 7, 3, 9, 2, 0, 1, -45, -12, 34, 98, 290])
print(pares)
print(impares)
###Output
[2, 6]
[1, 5, 7]
[-12, 0, 2, 34, 98, 290]
[-45, 1, 1, 3, 5, 7, 9]
|
nb4a_3d_structuring_elements.ipynb | ###Markdown
Structuring elements
###Code
%%capture_png $cell_normal
arrray = scipy.ndimage.generate_binary_structure(3, 1)
plot_voxels(arrray)
%%capture_png $cell_normal
arrray = scipy.ndimage.generate_binary_structure(3, 2)
plot_voxels(arrray)
%%capture_png $cell_normal
arrray = scipy.ndimage.generate_binary_structure(3, 3)
plot_voxels(arrray)
%%capture_png $cell_normal
arrray = cube(5)
plot_voxels(arrray)
%%capture_png $cell_normal
arrray = octahedron(3)
plot_voxels(arrray)
%%capture_png $cell_normal
arrray = ball(3)
plot_voxels(arrray)
%%capture_png $cell_normal
import matplotlib.pyplot as plt
import numpy as np
grids = 2
boxs = 5
voxelarray = np.zeros((boxs * grids, boxs * grids, boxs * grids))
i = 1
for xi in range(0, 2):
for yi in range(0, 2):
for zi in range(0, 2):
voxelarray[
xi * boxs : xi * boxs + boxs,
yi * boxs : yi * boxs + boxs,
zi * boxs : zi * boxs + boxs,
] = i
i += 1
voxelarray = np.uint8(voxelarray * 255 / i)
plot_voxels(voxelarray)
%%capture_png $cell_normal
voxelarray = data.binary_blobs(length=110, volume_fraction=0.6, n_dim=3, seed=9)
voxelarray = voxelarray[90:, 90:, 90:]
# plt.imshow(voxelarray[:,:,3])
plot_voxels(voxelarray, linewidth=0.1)
###Output
_____no_output_____ |
Week-1/CS6501_Lab_1_1.ipynb | ###Markdown
**Artificial Intelligence - MSc**CS6501 - MACHINE LEARNING AND APPLICATIONS Instructor: Enrique NaredoCS6501_Lab-0.1 Python Basics Markdown Examples This is **bold**. This is *italic*. This is ~strikethrough~. Mathematical Equations$\sqrt{3x-1}+(1+x)^2$$e^x=\sum_{i=0}^\infty \frac{1}{i!}x^i$ Python Comments
###Code
# This is a single line comment
'''
THIS IS A MULTILINE COMMENT
USING STRING LITERALS!
'''
"""
This is a comment
written in
more than just one line
"""
###Output
_____no_output_____
###Markdown
Python Indentation
###Code
## Good indentation
if 10 > 5:
print("Ten is greater than five!")
## Wrong indentation
# add blank spaces to get the right indentation
if 10 > 5:
print("Ten is greater than five!")
###Output
_____no_output_____
###Markdown
Python Variables
###Code
x = 10
y = "Hello, Friend!"
z = "Celtics"
# is the same as
z = 'Celtics'
a = 4
# is NOT the same as
A = 4
# Legal variable names
myvar = "Anything"
my_var = "Anything"
_my_var = "Anything"
myVar = "Anything"
MYVAR = "Anything"
myvar2 = "Anything"
# Camel Case
myVariableName = "Something"
# Pascal Case
MyVariableName = "Something"
# Snake Case
my_variable_name = "Something"
# Invent your own case? Beware: hyphens are not allowed in identifiers,
# so the following line would raise a SyntaxError if uncommented
# my_Variable-NAME = "Something"
# Many Values to Multiple Variables
alpha, beta, gamma = "Anything", "Something", "Whatever"
print(alpha)
print(beta)
print(gamma)
# One Value to Multiple Variables
alpha = beta = gamma = "Everything"
print(alpha)
print(beta)
print(gamma)
###Output
_____no_output_____ |
.ipynb_checkpoints/teste-checkpoint.ipynb | ###Markdown
New business location indicator Coursera project capstoneThis is the course capstone project for the Coursera IBM machine learning specialization About the projectHaving a good location for your business is an important factor in its prosperity. Although the hot spots in a city for a new venue are common knowledge, these places usually carry a very high price, since price is correlated with higher demand. With some data mining and machine learning, some places can be discovered as good opportunities that have not yet received enough attention from people in the business. This project uses Python and some very helpful libraries for machine learning, visualization and data collection to give some insight on where you should open your business in a city, based on how "hot" the area is and how many competitors are in the same business type, trying to maximize the first and minimize the last. For the case study, the city chosen was Piracicaba, Brazil, and the business type is a coffee shop, so the goal was to find a good place for a new coffee shop that has a good location but not many competitors. To achieve this objective we will start by ranking the places in the city by how "hot" they are for business, and then find the optimal place for a new business that does not collide with other similar businesses by location but is still in a "hot spot" of the city. The data on the number of venues by location will be collected using the Foursquare API for a given city. The app/notebook will be created in a generic way so that people using it can change the city name. A map of the city based on the number of venues will be created, and also another one for venues similar to the one the person is looking to open.A score for a given location will be created by adding value for the number of nearby venues, using the Mahalanobis distance as a weight, and then doing the opposite (decreasing the score) for nearby similar venues (a toy sketch of this weighting appears right after the imports below).In the end, points will be clustered together to yield useful locations as suggestions for the new business to be opened. Structure:This notebook has 6 parts: 1. Interest area organization. In this part, we will select the city and area of search, do some basic visualization, import libraries and set up the API/requests. 2. Data gathering Using the Foursquare API, we will get the data we need. 3. Data organization and cleanup Here we will treat the received data so we can use it for visualization and the final machine learning stage 4. Clustering We will find how many groups of coffee shops there are in the city by location, and explore the same for general venues. 5. Ranking Two scores, ranging from 0 to 100, will be created, and all positions on the city map will receive these scores: one for the number of venues (higher is better: more general venues) and one for the amount of competition (higher is worse: more coffee shops around). Let's start by importing the libraries we will need
###Code
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import folium, requests, os, time, itertools, pickle
import folium.plugins as plugins
from bs4 import BeautifulSoup
from urllib.request import urlopen
from geopy.geocoders import Nominatim
from IPython.display import Image
###Output
_____no_output_____
###Markdown
1. Interest area organization Defining the place of interest and search area
###Code
# this will be our center point in the city
lat_city_center, lng_city_center = -22.727482, -47.648811
# now, create a map with this lat and lng info
map_city = folium.Map(location=[lat_city_center, lng_city_center], zoom_start=14)
folium.CircleMarker(
[lat_city_center, lng_city_center],
radius=10,
color='green',
fill=True,
fill_color='green',
fill_opacity=0.7,
parse_html=False).add_to(map_city)
# check if it is correct
map_city
# Defining the search grid size and location
number_x_points = 10
number_y_points = 10
lat_city_center, lng_city_center = -22.727482, -47.648811
# the farthest point of interest in the map
x_max, y_max = -22.713566, -47.659758
# now we create a distance range in lat and lng distance measure
lat_delta = 2*np.abs(x_max - lat_city_center)
lng_delta = 2*np.abs(y_max - lng_city_center)
# create the matrix of points for use in the map and foursquare
lat_range = np.linspace(lat_city_center - lat_delta, lat_city_center + lat_delta, number_x_points)
lng_range = np.linspace(lng_city_center - lng_delta, lng_city_center + lng_delta, number_y_points)
lat_range
for lat in lat_range:
for lng in lng_range:
folium.CircleMarker(
[lat, lng],
radius=10,
color='red',
fill=True,
fill_color='red',
fill_opacity=0.7,
parse_html=False).add_to(map_city)
map_city
###Output
_____no_output_____
###Markdown
We can see that we have covered most of the city, with neighbouring grid points a few hundred metres apart. As a quick check of the actual spacing, here is a small sketch (assuming the standard haversine great-circle formula; the `haversine_m` helper is our own and not part of the project) that measures the grid step in metres:
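###Code
import math
def haversine_m(lat1, lng1, lat2, lng2):
    # great-circle distance between two (lat, lng) points, in metres
    R = 6371000.0  # mean Earth radius in metres
    p1, p2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlmb = math.radians(lng2 - lng1)
    a = math.sin(dphi / 2)**2 + math.cos(p1) * math.cos(p2) * math.sin(dlmb / 2)**2
    return 2 * R * math.asin(math.sqrt(a))
# spacing between two adjacent grid points along each axis
print(haversine_m(lat_range[0], lng_range[0], lat_range[1], lng_range[0]))
print(haversine_m(lat_range[0], lng_range[0], lat_range[0], lng_range[1]))
###Output
_____no_output_____
###Markdown
2. Data Gathering Requesting the data from each location with the Foursquare API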
###Code
# Client ID and client secret key should never be stored in the notebook or other scripts,
# so we read them from OS environment variables.
VERSION = '20180605' # Foursquare API version
CLIENT_ID = os.getenv('CLIENT_ID') # your Foursquare ID
CLIENT_SECRET = os.getenv('CLIENT_SECRET') # your Foursquare Secret
###Output
_____no_output_____
###Markdown
Creating some helper functions to get and process the received data
###Code
def get_info(lat, lng):
    '''
    This function calls the Foursquare API with the given latitude and longitude
    and returns the parsed response, or False if the request failed.
    '''
    url = f'https://api.foursquare.com/v2/venues/search?&client_id={CLIENT_ID}&client_secret={CLIENT_SECRET}&v={VERSION}&ll={lat},{lng}&radius={250}&limit={500}'
    results = requests.get(url).json()
    if results['meta']['code'] == 200:
        return results['response']
    return False
def count_venues(response):
    '''
    This function parses the response received from the get_info() function,
    counts the venues in the area, and returns that count together with a
    list of [lat, lng] pairs (a list of lists).
    '''
    try:
        points = []
        if response['venues']:
            for i in range(len(response['venues'])):
                try:
                    lat = response['venues'][i]['location']['lat']
                    lng = response['venues'][i]['location']['lng']
                    points.append([lat, lng])
                except:
                    pass
            return len(response['venues']), points
    except:
        pass
    return 0, None
def count_similar(response, similar=['Café', 'Cafe', 'Coffe', 'Coffee Shops']):
    '''
    This function parses the response received from the get_info() function,
    counts how many venues have a similar category (i.e. the type we are
    looking for), and returns that count together with their [lat, lng] pairs.
    '''
    total = 0 # we start with 0 matches
try:
points = []
for venue in response['venues']:
for item in venue['categories']:
if item['pluralName'] in similar or item['shortName'] in similar or item['name'] in similar:
total += 1
try:
lat = venue['location']['lat']
lng = venue['location']['lng']
points.append([lat, lng])
except:
pass
return total, points
except:
return 0, None
# Let's just check the functions created
r = get_info(lat_city_center, lng_city_center)
print(f'number of coffee shops: {count_similar(r)[0]}')
print(f'number of venues in the center of the city: {count_venues(r)[0]}')
print(r['venues'][0]['location']['lat'])
print(r['venues'][0])
###Output
_____no_output_____
###Markdown
Now, we will search all the points on the map
###Code
# the flag below avoids excess use of the API by loading the data if it already exists
new_scrap = False
filename = 'foursquare_data.pk'
if new_scrap:
full_response, points_coffe, points_venue = [], [], []
for lat in lat_range:
line = []
for lng in lng_range:
r = get_info(lat, lng)
similar, pt_temp_cofee = count_similar(r)
venues, pt_temp_venue = count_venues(r)
if pt_temp_venue:
points_venue.extend(pt_temp_venue)
if pt_temp_cofee:
points_coffe.extend(pt_temp_cofee)
line.append((similar, venues))
time.sleep(1)
full_response.append(line)
outfile = open(filename, 'wb')
pickle.dump([full_response, points_coffe, points_venue], outfile)
outfile.close()
else:
infile = open(filename,'rb')
full_response, points_coffe, points_venue = pickle.load(infile)
infile.close()
# let's check what we got:
print(type(full_response),len(full_response), full_response[0])
###Output
<class 'list'> 10 [(0, 190), (0, 177), (0, 186), (0, 191), (0, 178), (0, 131), (0, 155), (0, 161), (0, 113), (0, 104)]
###Markdown
3. Data organization and cleanup Let's just see the data format and visualize the results
###Code
# to plot a heatmap using folium.plugins.HeatMap, we will generate a data in the expected format
venue_matrix = np.zeros([number_x_points*number_y_points, 3])
coffe_matrix = np.zeros([number_x_points*number_y_points, 3])
# flatten the full response for ease of use
full_response_flat = list(itertools.chain(*full_response))
# now, populate the matrix with the info from "full_response" list from foursquare
i = 0
for lat in lat_range:
for lng in lng_range:
coffe_matrix[i] = (lat, lng, full_response_flat[i][0])
venue_matrix[i] = (lat, lng, full_response_flat[i][1])
i += 1
# and do some data treatment and formatting
max_venue = venue_matrix[:,2].max()
min_venue = venue_matrix[:,2].min()
venue_matrix_normalized = (venue_matrix[:,2] - min_venue)/(max_venue - min_venue)
###Output
_____no_output_____
###Markdown
To plot using folium heatmap, we will create a list of points in the expected format:
###Code
points_venue = []
for venue in venue_matrix:
for i in range(int((venue[2]/10))):
points_venue.append([venue[0], venue[1]])
points_coffe = []
for venue in coffe_matrix:
for i in range(int((venue[2]))):
points_coffe.append([venue[0], venue[1]])
m_venues = folium.Map(location=[lat_city_center, lng_city_center], zoom_start=14)
m_venues.add_child(folium.plugins.HeatMap(points_venue, radius=10, min_opacity=0.2, blur=8, control_scale=False))
m_venues
m_cofee = folium.Map(location=[lat_city_center, lng_city_center], zoom_start=14)
m_cofee.add_child(folium.plugins.HeatMap(points_coffe, radius=70, min_opacity=0.2, blur=35, control_scale=False))
m_cofee
###Output
_____no_output_____
###Markdown
If both graphs were not able to render properly (there are some known bugs in Mozilla for the heatmap generation *), here are the plots as jpeg images* https://github.com/python-visualization/folium/issues/812 4. Clustering We will use k-means and the Elbow Method to see the number of clusters for existing coffee shops and general venues
###Code
from sklearn.cluster import KMeans
def find_elbow_k(point):
    # compute the within-cluster sum of squared errors (SSE) for k = 1..9
    sse = []
    for k in range(1, 10):
        kmeans = KMeans(n_clusters=k)
        kmeans.fit(point)
        pred_clusters = kmeans.predict(point)
        centroids = kmeans.cluster_centers_
        curr_sse = 0
        # calculate square of Euclidean distance of each point from its cluster center and add to current SSE
        for i in range(len(point)):
            curr_center = centroids[pred_clusters[i]]
            curr_sse += (point[i][0] - curr_center[0]) ** 2 + (point[i][1] - curr_center[1]) ** 2
        sse.append(curr_sse)
    return sse
point = points_coffe
sse = find_elbow_k(point)
plt.plot(sse)
###Output
_____no_output_____
###Markdown
We can see that 4 or 5 blobs are a good start for places with coffee shops
###Code
kmeans = KMeans(n_clusters=5)
kmeans.fit(points_coffe)
pred_clusters = kmeans.predict(point)
centroids = kmeans.cluster_centers_
m_cofee2 = folium.Map(location=[lat_city_center, lng_city_center], zoom_start=14)
m_cofee2.add_child(folium.plugins.HeatMap(points_coffe, radius=70, min_opacity=0.2, blur=35, control_scale=False))
for centroid in centroids:
folium.CircleMarker(
centroid,
radius=10,
color='red',
fill=True,
fill_color='red',
fill_opacity=0.7,
parse_html=False).add_to(m_cofee2)
m_cofee2
point = points_venue
sse = find_elbow_k(point)
plt.plot(sse)
###Output
_____no_output_____
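###Markdown
The introduction promised a part 5 (Ranking) with two 0-100 scores, which the notebook does not reach above. As a minimal sketch of the idea (the `location_scores` helper and the plain Gaussian distance weight are our own simplification; the introduction mentions a Mahalanobis-distance weighting instead), the two scores could be computed like this:
###Code
def location_scores(lat, lng, venue_pts, coffee_pts, scale=0.002):
    # distance-decayed sum of nearby points: a simple Gaussian kernel in degrees
    def weighted(points):
        return sum(np.exp(-((lat - p[0]) ** 2 + (lng - p[1]) ** 2) / (2 * scale ** 2))
                   for p in points)
    return weighted(venue_pts), weighted(coffee_pts)
raw = [location_scores(lat, lng, points_venue, points_coffe)
       for lat in lat_range for lng in lng_range]
venue_raw = np.array([v for v, c in raw])
coffee_raw = np.array([c for v, c in raw])
# rescale both to the 0-100 range described in the introduction
venue_score = 100 * (venue_raw - venue_raw.min()) / (venue_raw.max() - venue_raw.min())
competition_score = 100 * (coffee_raw - coffee_raw.min()) / (coffee_raw.max() - coffee_raw.min())
###Output
_____no_output_____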
###Markdown
Tests of the steel profile verification module
###Code
from material import *
from secao import *
from perfil_de_aco import *
from perfil_i_laminado import *
###Output
_____no_output_____
###Markdown
The Material() class
###Code
# Creating a material with the properties of A572 steel
# E, poisson, fy, fu
A572 = Material(20000, 0.3, 34.5, 45)
# Printing the class attributes
print('Modulus of elasticity: ', A572.E, 'kgf/mm²')
print('Shear modulus: ', A572.G, 'kgf/mm²')
print('Poisson ratio: ', A572.poisson)
print('Yield strength: ', A572.fy, 'kgf/mm²')
print('Ultimate strength: ', A572.fu, 'kgf/mm²')
###Output
Modulus of elasticity: 20000 kgf/mm²
Shear modulus: 7692.307692307692 kgf/mm²
Poisson ratio: 0.3
Yield strength: 34.5 kgf/mm²
Ultimate strength: 45 kgf/mm²
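###Markdown
Since Material comes from a local module whose source is not shown here, below is a minimal sketch of what it might look like (an assumption inferred from the printed attributes, not the module's actual code); note that G = E / (2(1 + poisson)) reproduces the shear modulus printed above:
###Code
class MaterialSketch:
    # hypothetical stand-in for the imported Material class
    def __init__(self, E, poisson, fy, fu):
        self.E = E                        # modulus of elasticity
        self.poisson = poisson            # Poisson ratio
        self.G = E / (2 * (1 + poisson))  # shear modulus from isotropic elasticity
        self.fy = fy                      # yield strength
        self.fu = fu                      # ultimate strength
###Output
_____no_output_____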
###Markdown
The Secao(), PerfilDeAco() and PerfilILaminado() classes
###Code
# Creating an instance of the PerfilILaminado() class with the properties of the
# W530X74 profile, using the A572 steel created earlier
# profile name, material
P_W530X74 = PerfilILaminado( 'W530X74', A572 )
# Printing the profile properties
print('Geometric properties of the W530X74 profile')
print('-------------------------------------------')
print('Total height (ht): ', P_W530X74.ht, 'mm')
print('Web height (hw): ', P_W530X74.hw, 'mm')
print('Distance between the inner faces of the flanges (h):', P_W530X74.ht, 'mm')
print('Flange width (bf): ', P_W530X74.bf, 'mm')
print('Flange thickness (tf): ', P_W530X74.tf, 'mm')
print('Web thickness (tw):', P_W530X74.tw, 'mm')
print('Cross-sectional area (A):', P_W530X74.A, 'mm²')
print('Moment of inertia about x (Ix):', P_W530X74.Ix, 'mm4')
print('Moment of inertia about y (Iy):', P_W530X74.Iy, 'mm4')
print('Torsion constant (J):', P_W530X74.J,'mm4')
print('Radius of gyration about x (rx): ', P_W530X74.rx, 'mm')
print('Radius of gyration about y (ry): ', P_W530X74.ry, 'mm')
print('Elastic section modulus about x (Wx): ', P_W530X74.Wx, 'mm³')
print('Elastic section modulus about y (Wy): ', P_W530X74.Wy, 'mm³')
print('Plastic section modulus about x (Zx): ', P_W530X74.Zx, 'mm³')
print('Plastic section modulus about y (Zy): ', P_W530X74.Zy, 'mm³')
print('Warping constant (Cw):', P_W530X74.Cw, 'mm6')
print('X coordinate of the shear centre relative to Xcg (xo):', P_W530X74.xo, 'mm')
print('Y coordinate of the shear centre relative to Ycg (yo):', P_W530X74.yo, 'mm')
print('Radius of gyration about the shear centre (ro):', P_W530X74.ro, 'mm')
###Output
Geometric properties of the W530X74 profile
-------------------------------------------
Total height (ht): 528.0 mm
Web height (hw): 500.8 mm
Distance between the inner faces of the flanges (h): 528.0 mm
Flange width (bf): 166.0 mm
Flange thickness (tf): 13.6 mm
Web thickness (tw): 9.65 mm
Cross-sectional area (A): 9480.0 mm²
Moment of inertia about x (Ix): 410000000.0 mm4
Moment of inertia about y (Iy): 10400000.0 mm4
Torsion constant (J): 475000.0 mm4
Radius of gyration about x (rx): 207.96380730232684 mm
Radius of gyration about y (ry): 33.121690981924665 mm
Elastic section modulus about x (Wx): 1550000.0 mm³
Elastic section modulus about y (Wy): 125000.0 mm³
Plastic section modulus about x (Zx): 1800000.0 mm³
Plastic section modulus about y (Zy): 200000.0 mm³
Warping constant (Cw): 690000000000.0 mm6
X coordinate of the shear centre relative to Xcg (xo): 0 mm
Y coordinate of the shear centre relative to Ycg (yo): 0 mm
Radius of gyration about the shear centre (ro): 210.58487970692823 mm
###Markdown
Some mechanical properties - considering a bar with the buckling lengths * klx = 3000 mm * kly = 3000 mm * klz = 3000 mm
###Code
# Buckling lengths
klx = 3000
kly = 3000
klz = 3000
print("Critical buckling loads")
print("----------------------------")
print('Nex:', P_W530X74.Nex(klx), 'kgf')
print('Ney:', P_W530X74.Ney(kly), 'kgf')
print('Nez:', P_W530X74.Nez(klz), 'kgf')
print('Ne:', P_W530X74.Ne(klx, kly, klz), 'kgf')
# Slenderness ratios
print('Slenderness ratios of the bar')
print('----------------------------')
print('Slenderness ratio for buckling about the X axis:', P_W530X74.indice_esbeltez_X(100))
print('Slenderness ratio for buckling about the Y axis:', P_W530X74.indice_esbeltez_Y(100))
###Output
Slenderness ratios of the bar
----------------------------
Slenderness ratio for buckling about the X axis: 3.0191695241215943
Slenderness ratio for buckling about the Y axis: 0.48085290078684345
###Markdown
Capacity verification methods Tension - yielding of the gross section
###Code
resb = P_W530X74.resist_esc_secao_bruta_NBR8800()
print('Gross-section yielding resistance =', resb, 'kgf')
###Output
Gross-section yielding resistance = 297327.2727272727 kgf
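###Markdown
As a sanity check (assuming the NBR 8800 resistance factor $\gamma_{a1} = 1.10$): $N_{t,Rd} = A_g f_y / \gamma_{a1} = 9480 \times 34.5 / 1.10 \approx 297327$ kgf, which matches the value above.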
###Markdown
Compression - considering the buckling lengths indicated earlier
###Code
print('Ncrd = ', P_W530X74.Ncrd_NBR8800(klx, kly, klz), 'kgf')
print('\n Calculation parameters:')
print('-------------------------')
ier = P_W530X74.ind_esbeltez_reduzido(klx, kly, klz)
frc = P_W530X74.fator_reducao_compressao(ier)
print('Reduced slenderness ratio:', ier)
print('Chi factor:', frc)
print('Q factor:', P_W530X74.fator_Q(frc))
###Output
Ncrd =  131445.0336998553 kgf
 Calculation parameters:
-------------------------
Reduced slenderness ratio: 1.3964851803036198
Chi factor: 0.4420887209375675
Q factor: 1.0
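###Markdown
Again as a sanity check (assuming $\gamma_{a1} = 1.10$): $N_{c,Rd} = \chi Q A_g f_y / \gamma_{a1} = 0.44209 \times 1.0 \times 9480 \times 34.5 / 1.10 \approx 131445$ kgf, matching the value above.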
###Markdown
Shear In Y - larger inertia
###Code
print('Design shear resistance')
print('-----------------------')
print('Vrd_y: ', P_W530X74.Vrdy_NBR8800(), 'kgf')
print('\n Calculation parameters:')
print('-------------------------')
print('Awy: ', P_W530X74.Awy, ' mm²')
print('Vpl: ', P_W530X74.Vpl(P_W530X74.Awy), ' kgf')
print('kv: ', P_W530X74.kv_Vrdy())
print('Lambda_p: ', P_W530X74.par_esbeltez_limites_Vrd(P_W530X74.kv_Vrdy())[0])
print('Lambda_r: ', P_W530X74.par_esbeltez_limites_Vrd(P_W530X74.kv_Vrdy())[1])
###Output
Design shear resistance
-----------------------
Vrd_y: 95882.4 kgf
 Calculation parameters:
-------------------------
Awy: 5095.2 mm²
Vpl: 105470.64 kgf
kv: 5
Lambda_p: 59.222009226398214
Lambda_r: 73.75832058196869
###Markdown
In X - smaller inertia
###Code
print('Design shear resistance')
print('-----------------------')
print('Vrd_x: ', P_W530X74.Vrdx_NBR8800(), 'kgf')
print('\n Calculation parameters:')
print('-------------------------')
print('Aw: ', P_W530X74.Awx, ' mm²')
print('Vpl: ', P_W530X74.Vpl(P_W530X74.Awx), ' kgf')
print('kv: ', P_W530X74.kv_Vrdx())
print('Lambda_p: ', P_W530X74.par_esbeltez_limites_Vrd(P_W530X74.kv_Vrdx())[0])
print('Lambda_r: ', P_W530X74.par_esbeltez_limites_Vrd(P_W530X74.kv_Vrdx())[1])
###Output
Design shear resistance
-----------------------
Vrd_x: 84967.85454545454 kgf
 Calculation parameters:
-------------------------
Aw: 4515.2 mm²
Vpl: 93464.64 kgf
kv: 1.2
Lambda_p: 29.01274082941463
Lambda_r: 36.13404994208913
###Markdown
Bending moment - considering the bar laterally restrained along its whole length - coefficient Cb = 1 In X - larger inertia axis
###Code
print('Design bending resistance')
print('-----------------------------')
print('Mrd_x:', P_W530X74.Mrdx_NBR8800(0), 'kgf.mm')
print('\n Calculation parameters:')
print('-------------------------\n')
print('Mpl: ', P_W530X74.Mplx, 'kgf.mm \n')
print('ULS - Lateral-torsional buckling')
print('-----------------------------------')
print('Lambda_p: ', P_W530X74.par_esbeltez_limite_Mrdx_FLT()[0])
print('Lambda_r: ', P_W530X74.par_esbeltez_limite_Mrdx_FLT()[1])
print('Mr:', P_W530X74.Mrx_FLT(), 'kgf.mm')
print('Mcr:', P_W530X74.Mcrx_FLT(1, 1), 'kgf.mm')
print('Mn:', P_W530X74.Mnx_FLT(1, 1), 'kgf.mm')
print('\n')
print('ULS - Flange local buckling')
print('-----------------------------------')
print('Lambda_p: ', P_W530X74.par_esbeltez_limite_Mrdx_FLM()[0])
print('Lambda_r: ', P_W530X74.par_esbeltez_limite_Mrdx_FLM()[1])
print('Mr:', P_W530X74.Mrx_FLM(), 'kgf.mm')
print('Mcr:', P_W530X74.Mcrx_FLM(), 'kgf.mm')
print('Mn:', P_W530X74.Mnx_FLM(), 'kgf.mm')
print('\n')
print('ULS - Web local buckling')
print('-----------------------------------')
print('Lambda_p: ', P_W530X74.par_esbeltez_limite_Mrdx_FLA()[0])
print('Lambda_r: ', P_W530X74.par_esbeltez_limite_Mrdx_FLA()[1])
print('Mr:', P_W530X74.Mrx_FLA(), 'kgf.mm')
print('Mn:', P_W530X74.Mnx_FLA(), 'kgf.mm')
print('\n')
###Output
Design bending resistance
-----------------------------
Mrd_x: 56454545.45454545 kgf.mm
 Calculation parameters:
-------------------------
Mpl:  62100000.0 kgf.mm 
ULS - Lateral-torsional buckling
-----------------------------------
Lambda_p:  42.37582028619076
Lambda_r:  124.85369974978447
Mr: 37432500.0 kgf.mm
Mcr: 528775058423461.1 kgf.mm
Mn: 62100000.0 kgf.mm
ULS - Flange local buckling
-----------------------------------
Lambda_p:  9.14932483451846
Lambda_r:  23.885510217361595
Mr: 37432500.0 kgf.mm
Mcr: 574291537.2332702 kgf.mm
Mn: 62100000.0 kgf.mm
ULS - Web local buckling
-----------------------------------
Lambda_p:  90.53016152049844
Lambda_r:  137.2398725177769
Mr: 53475000.0 kgf.mm
Mn: 62100000.0 kgf.mm
###Markdown
In Y - smaller inertia axis
###Code
print('Design bending resistance')
print('-----------------------------')
print('Mrd_y:', P_W530X74.Mrdy_NBR8800(0), 'kgf.mm')
print('\n Calculation parameters:')
print('-------------------------\n')
print('Mpl: ', P_W530X74.Mply, 'kgf.mm \n')
print('ULS - Flange local buckling')
print('-----------------------------------')
print('Lambda_p: ', P_W530X74.par_esbeltez_limite_Mrdy_FLM()[0])
print('Lambda_r: ', P_W530X74.par_esbeltez_limite_Mrdy_FLM()[1])
print('Mr:', P_W530X74.Mry_FLM(), 'kgf.mm')
print('Mcr:', P_W530X74.Mcry_FLM(), 'kgf.mm')
print('Mn:', P_W530X74.Mny_FLM(), 'kgf.mm')
print('\n')
###Output
Design bending resistance
-----------------------------
Mrd_y: 6272727.2727272725 kgf.mm
 Calculation parameters:
-------------------------
Mpl:  6900000.0 kgf.mm 
ULS - Flange local buckling
-----------------------------------
Lambda_p:  9.14932483451846
Lambda_r:  23.885510217361595
Mr: 3018750.0 kgf.mm
Mcr: 46313833.647844374 kgf.mm
Mn: 6900000.0 kgf.mm
|
arl-python/examples/arl/imaging-mfs.ipynb | ###Markdown
MFS demonstration This script makes a fake data set and then deconvolves it. Finally, the full and residual visibilities are plotted.
###Code
%matplotlib inline
import os
import sys
import multiprocessing
sys.path.append(os.path.join('..', '..'))
results_dir = './results'
os.makedirs(results_dir, exist_ok=True)
from matplotlib import pylab
pylab.rcParams['figure.figsize'] = (10.0, 10.0)
pylab.rcParams['image.cmap'] = 'rainbow'
import numpy
from astropy.coordinates import SkyCoord
from astropy import units as u
from astropy import constants as const
from astropy.wcs.utils import pixel_to_skycoord
from matplotlib import pyplot as plt
from arl.data.polarisation import PolarisationFrame
from arl.visibility.base import create_visibility
from arl.skycomponent.operations import create_skycomponent
from arl.image.operations import show_image, export_image_to_fits, smooth_image, \
calculate_image_frequency_moments, calculate_image_from_frequency_moments
from arl.image.deconvolution import deconvolve_cube, restore_cube
from arl.image.iterators import image_raster_iter
from arl.image.solvers import solve_image
from arl.visibility.iterators import vis_timeslice_iter
from arl.util.testing_support import create_named_configuration, \
create_low_test_image_from_gleam, create_low_test_beam
from arl.imaging import *
from arl.imaging.weighting import weight_visibility
import logging
log = logging.getLogger()
log.setLevel(logging.DEBUG)
log.addHandler(logging.StreamHandler(sys.stdout))
###Output
_____no_output_____
###Markdown
Construct LOW configuration We create the visibility. This just makes the uvw, time, antenna1, antenna2, weight columns in a table
###Code
config = 'full'
if config == 'full':
low = create_named_configuration('LOWBD2')
b = 8e4
cellsize = 0.00001
npixel=5 * 2048
padding = 1
invert = invert_2d
predict = predict_2d
else:
low = create_named_configuration('LOWBD2-CORE')
b = 4e3
cellsize = 0.001
npixel=512
padding = 2
invert = invert_2d
predict = predict_2d
oversampling = 32
nchan = 7
frequency = numpy.linspace(0.8e8, 1.2e8, nchan)
centre_frequency = numpy.array([numpy.average(frequency)])
channel_bandwidth=numpy.array(nchan * [frequency[1]-frequency[0]])
total_bandwidth = numpy.array([numpy.sum(channel_bandwidth)])
times = numpy.linspace(-3, +3, 5) * numpy.pi / 12.0
log.info('Observing times %s' % (times))
log.info("Observing frequencies %s Hz" % (frequency))
log.info("Channel bandwidths %s Hz" % (channel_bandwidth))
log.info("Centre frequency %s Hz" % (centre_frequency))
log.info("Cellsize = %.6f radians" % (cellsize))
phasecentre = SkyCoord(ra=+15.0 * u.deg, dec=-35.0 * u.deg, frame='icrs', equinox='J2000')
vt = create_visibility(low, times, frequency, channel_bandwidth=channel_bandwidth,
weight=1.0, phasecentre=phasecentre,
polarisation_frame=PolarisationFrame('stokesI'))
###Output
create_visibility: 4578560 rows, 0.478 GB
create_visibility: 4578560 rows, 0.478 GB
###Markdown
Plot the synthesized uv coverage
###Code
plt.clf()
plt.plot(vt.uvw[:,0], vt.uvw[:,1], '.', color='b')
plt.plot(-vt.uvw[:,0], -vt.uvw[:,1], '.', color='b')
plt.xlabel("U (wavelengths)")
plt.ylabel("V (wavelengths)")
plt.show()
###Output
_____no_output_____
###Markdown
Make a test image
###Code
model_centrechannel = create_low_test_image_from_gleam(npixel=npixel,
frequency=centre_frequency,
channel_bandwidth=total_bandwidth,
cellsize=cellsize,
phasecentre=phasecentre)
export_image_to_fits(model_centrechannel, '%s/imaging-mfs-model_centre_channel.fits' %
(results_dir))
model_multichannel = create_low_test_image_from_gleam(npixel=npixel, frequency=frequency,
channel_bandwidth=channel_bandwidth,
cellsize=cellsize,
phasecentre=phasecentre)
import time
start = time.time()
beam=create_low_test_beam(model_multichannel)
model_multichannel.data*=beam.data
print("Model * beam has %.3f Jy" % (numpy.sum(model_multichannel.data[0,0,:,:])))
cmodel = smooth_image(model_multichannel)
show_image(cmodel)
plt.title("Smoothed model image")
plt.show()
export_image_to_fits(cmodel, '%s/imaging-mfs-cmodel.fits' % (results_dir))
beam = None
cmodel = None
stop = time.time()
print('beam time:', stop - start)
export_image_to_fits(model_multichannel, '%s/imaging-mfs-multi_channel.fits' % (results_dir))
moment_cube = calculate_image_frequency_moments(model_multichannel,nmoments=3)
export_image_to_fits(moment_cube, '%s/imaging-mfs-moment_cube.fits' % (results_dir))
reconstructed_cube = calculate_image_from_frequency_moments(model_multichannel, moment_cube)
export_image_to_fits(reconstructed_cube, '%s/imaging-mfs-reconstructed_cube.fits' %
(results_dir))
vt.data['vis'] *= 0.0
vt = predict(vt, model_multichannel)
# To check that we got the prediction right, plot the amplitude of the visibility.
uvdist=numpy.sqrt(vt.data['uvw'][:,0]**2+vt.data['uvw'][:,1]**2)
plt.clf()
plt.plot(uvdist, numpy.abs(vt.data['vis']), '.')
plt.xlabel('uvdist')
plt.ylabel('Amp Visibility')
plt.show()
###Output
_____no_output_____
###Markdown
Weight the data
###Code
vt, density, densitygrid = weight_visibility(vt, model_centrechannel)
plt.clf()
plt.semilogy(uvdist, density, '.')
plt.xlabel('uvdist')
plt.ylabel('Sample density')
plt.show()
density = None
densitygrid = None
###Output
_____no_output_____
###Markdown
Make the dirty image and point spread function
###Code
dirty, sumwt = invert(vt, model_multichannel, padding=1)
show_image(dirty)
psf, sumwt = invert(vt, model_multichannel, dopsf=True, padding=1)
print("Max, min in dirty image = %.6f, %.6f, sumwt = %s" %
(dirty.data.max(), dirty.data.min(), sumwt))
print("Max, min in PSF = %.6f, %.6f, sumwt = %s" %
(psf.data.max(), psf.data.min(), sumwt))
export_image_to_fits(dirty, '%s/imaging-mfs-dirty.fits' % (results_dir))
export_image_to_fits(psf, '%s/imaging-mfs-psf.fits' % (results_dir))
comp, residual = deconvolve_cube(dirty, psf, niter=1000, gain=0.7, algorithm='msmfsclean',
scales=[0, 3, 10, 30], threshold=0.01, fractional_threshold=0.001, nmoments=3)
export_image_to_fits(comp, '%s/imaging-mfs-comp.fits' % (results_dir))
clean = restore_cube(model=comp, psf=psf, residual=residual)
export_image_to_fits(residual, '%s/imaging-mfs-residual.fits' % (results_dir))
export_image_to_fits(clean, '%s/imaging-mfs-clean.fits' % (results_dir))
show_image(clean)
plt.show()
###Output
_____no_output_____
###Markdown
Predict the visibility of the model
###Code
vtmodel = create_visibility(low, times, frequency, channel_bandwidth=channel_bandwidth,
weight=1.0, phasecentre=phasecentre,
polarisation_frame=PolarisationFrame('stokesI'))
vtmodel=predict(vtmodel, comp)
###Output
_____no_output_____
###Markdown
Now we will plot the original visibility and the residual visibility.
###Code
uvdist=numpy.sqrt(vt.data['uvw'][:,0]**2+vt.data['uvw'][:,1]**2)
plt.clf()
plt.plot(uvdist, numpy.abs(vt.data['vis']), '.', color='b', label='Original')
plt.plot(uvdist, numpy.abs(vt.data['vis']-vtmodel.data['vis']), '.', color='r',
label='Residual')
plt.xlabel('uvdist')
plt.ylabel('Amp Visibility')
plt.legend()
plt.show()
###Output
_____no_output_____ |
2-Working-With-Data/07-python/notebook-covidspread.ipynb | ###Markdown
Estimation of COVID-19 Pandemic
Loading Data
We will use data on COVID-19 infected individuals, provided by the [Center for Systems Science and Engineering](https://systems.jhu.edu/) (CSSE) at [Johns Hopkins University](https://jhu.edu/). Dataset is available in [this GitHub Repository](https://github.com/CSSEGISandData/COVID-19).
###Code
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
plt.rcParams["figure.figsize"] = (10,3) # make figures larger
###Output
_____no_output_____
###Markdown
We can load the most recent data directly from GitHub using `pd.read_csv`. If for some reason the data is not available, you can always use the copy available locally in the `data` folder - just uncomment the line below that defines `base_url`:
###Code
base_url = "https://raw.githubusercontent.com/CSSEGISandData/COVID-19/master/csse_covid_19_data/csse_covid_19_time_series/" # loading from Internet
# base_url = "../../data/COVID/" # loading from disk
infected_dataset_url = base_url + "time_series_covid19_confirmed_global.csv"
recovered_dataset_url = base_url + "time_series_covid19_recovered_global.csv"
deaths_dataset_url = base_url + "time_series_covid19_deaths_global.csv"
countries_dataset_url = base_url + "../UID_ISO_FIPS_LookUp_Table.csv"
###Output
_____no_output_____
###Markdown
Let's now load the data for infected individuals and see what the data looks like:
###Code
infected = pd.read_csv(infected_dataset_url)
infected.head()
###Output
_____no_output_____
###Markdown
We can see that each row of the table defines the number of infected individuals for each country and/or province, and columns correspond to dates. Similar tables can be loaded for other data, such as number of recovered and number of deaths.
###Code
recovered = pd.read_csv(recovered_dataset_url)
deaths = pd.read_csv(deaths_dataset_url)
###Output
_____no_output_____
###Markdown
Making Sense of the Data
From the table above, the role of the province column is not clear. Let's see the different values present in the `Province/State` column:
###Code
infected['Province/State'].value_counts()
###Output
_____no_output_____
###Markdown
From the names we can deduce that countries like Australia and China have more detailed breakdown by provinces. Let's look for information on China to see the example:
###Code
infected[infected['Country/Region']=='China']
###Output
_____no_output_____
###Markdown
Pre-processing the Data
We are not interested in breaking countries down to further territories, thus we would first get rid of this breakdown and add information on all territories together, to get info for the whole country. This can be done using `groupby`:
###Code
infected = infected.groupby('Country/Region').sum()
recovered = recovered.groupby('Country/Region').sum()
deaths = deaths.groupby('Country/Region').sum()
infected.head()
###Output
_____no_output_____
###Markdown
You can see that due to using `groupby` all DataFrames are now indexed by Country/Region. We can thus access the data for a specific country by using `.loc`:
###Code
infected.loc['US'][2:].plot()
recovered.loc['US'][2:].plot()
plt.show()
###Output
_____no_output_____
###Markdown
> **Note** how we use `[2:]` to remove first two elements of a sequence that contain geolocation of a country. We can also drop those two columns altogether:
###Code
infected.drop(columns=['Lat','Long'],inplace=True)
recovered.drop(columns=['Lat','Long'],inplace=True)
deaths.drop(columns=['Lat','Long'],inplace=True)
###Output
_____no_output_____
###Markdown
Investigating the Data
Let's now switch to investigating a specific country. Let's create a frame that contains the data on infections indexed by date:
###Code
def mkframe(country):
df = pd.DataFrame({ 'infected' : infected.loc[country] ,
'recovered' : recovered.loc[country],
'deaths' : deaths.loc[country]})
df.index = pd.to_datetime(df.index)
return df
df = mkframe('US')
df
df.plot()
plt.show()
###Output
_____no_output_____
###Markdown
Now let's compute the number of new infected people each day. This will allow us to see the speed at which the pandemic progresses. The easiest way to do it is to use `diff`:
###Code
df['ninfected'] = df['infected'].diff()
df['ninfected'].plot()
plt.show()
###Output
_____no_output_____
###Markdown
We can see high fluctuations in data. Let's look closer at one of the months:
###Code
df[(df.index.year==2020) & (df.index.month==7)]['ninfected'].plot()
plt.show()
###Output
_____no_output_____
###Markdown
It clearly looks like there are weekly fluctuations in data. Because we want to be able to see the trends, it makes sense to smooth out the curve by computing running average (i.e. for each day we will compute the average value of the previous several days):
###Code
df['ninfav'] = df['ninfected'].rolling(window=7).mean()
df['ninfav'].plot()
plt.show()
###Output
_____no_output_____
###Markdown
In order to be able to compare several countries, we might want to take the country's population into account, and compare the percentage of infected individuals with respect to country's population. In order to get country's population, let's load the dataset of countries:
###Code
countries = pd.read_csv(countries_dataset_url)
countries
###Output
_____no_output_____
###Markdown
Because this dataset contains information on both countries and provinces, to get the population of the whole country we need to be a little bit clever:
###Code
countries[(countries['Country_Region']=='US') & countries['Province_State'].isna()]
pop = countries[(countries['Country_Region']=='US') & countries['Province_State'].isna()]['Population'].iloc[0]
df['pinfected'] = df['infected']*100 / pop
df['pinfected'].plot(figsize=(10,3))
plt.show()
###Output
_____no_output_____
###Markdown
Computing $R_t$
To see how infectious the disease is, we look at the **basic reproduction number** $R_0$, which indicates the number of people that an infected person would further infect. When $R_0$ is more than 1, the epidemic is likely to spread.
$R_0$ is a property of the disease itself, and does not take into account some protective measures that people may take to slow down the pandemic. During the pandemic progression, we can estimate the reproduction number $R_t$ at any given time $t$. It has been shown that this number can be roughly estimated by taking a window of 8 days, and computing $$R_t=\frac{I_{t-3}+I_{t-2}+I_{t-1}+I_t}{I_{t-7}+I_{t-6}+I_{t-5}+I_{t-4}}$$
where $I_t$ is the number of newly infected individuals on day $t$.
Let's compute $R_t$ for our pandemic data. To do this, we will take a rolling window of 8 `ninfected` values, and apply the function to compute the ratio above:
###Code
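# ratio of the last 4 days of new cases to the previous 4 days, per the formula above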
df['Rt'] = df['ninfected'].rolling(8).apply(lambda x: x[4:].sum()/x[:4].sum())
df['Rt'].plot()
plt.show()
###Output
_____no_output_____
###Markdown
You can see that there are some gaps in the graph. Those can be caused by either `NaN` or `inf` values being present in the dataset. `inf` may be caused by division by 0, and `NaN` can indicate missing data, or no data available to compute the result (like in the very beginning of our frame, where a rolling window of width 8 is not yet available). To make the graph nicer, we need to fill those values using the `replace` and `fillna` functions.
Let's further look at the beginning of the pandemic. We will also limit the y-axis values to show only values below 6, in order to see better, and draw horizontal line at 1.
###Code
ax = df[df.index<"2020-05-01"]['Rt'].replace(np.inf,np.nan).fillna(method='pad').plot(figsize=(10,3))
ax.set_ylim([0,6])
ax.axhline(1,linestyle='--',color='red')
plt.show()
###Output
_____no_output_____
###Markdown
Another interesting indicator of the pandemic is the **derivative**, or **daily difference** in new cases. It allows us to see clearly when pandemic is increasing or declining.
###Code
df['ninfected'].diff().plot()
plt.show()
###Output
_____no_output_____
###Markdown
Given the fact that there are a lot of fluctuations in data caused by reporting, it makes sense to smooth the curve by running rolling average to get the overall picture. Let's again focus on the first months of the pandemic:
###Code
ax=df[df.index<"2020-06-01"]['ninfected'].diff().rolling(7).mean().plot()
ax.axhline(0,linestyle='-.',color='red')
plt.show()
###Output
_____no_output_____
###Markdown
Challenge
Now it is time for you to play more with the code and data! Here are a few suggestions you can experiment with:
* See the spread of the pandemic in different countries.
* Plot $R_t$ graphs for several countries on one plot for comparison, or make several plots side-by-side
* See how the number of deaths and recoveries correlates with the number of infected cases.
* Try to find out how long a typical disease lasts by visually correlating infection rate and death rate and looking for some anomalies. You may need to look at different countries to find that out.
* Calculate the fatality rate and how it changes over time. You may want to take into account the length of the disease in days to shift one time series before doing calculations
###Code
china = mkframe('China')
france = mkframe('France')
# store daily new cases, then plot both countries on the same axes
china['new_infected'] = china['infected'].diff()
france['new_infected'] = france['infected'].diff()
china['new_infected'].plot()
france['new_infected'].plot(color='red')
plt.show()
def add_Rt(df):
df['ninfected'] = df['infected'].diff()
df['Rt'] = df['ninfected'].rolling(8).apply(lambda x: x[4:].sum()/x[:4].sum())
china = mkframe('China')
france = mkframe('France')
US = mkframe('US')
UK = mkframe('United Kingdom')
countries = [china,france,US,UK]
color = ['red','blue','green','yellow']
for i,country in enumerate(countries):
add_Rt(country)
country['Rt'].plot(color = color[i])
plt.show()
###Output
_____no_output_____
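###Markdown
As one more sketch for the challenge list (a naive estimate; the 14-day shift is an assumed typical disease duration, not a value from the source), the fatality rate over time could be approximated like this:
###Code
# naive case fatality rate: deaths today relative to infections ~14 days earlier
df['cfr'] = 100 * df['deaths'] / df['infected'].shift(14)
df['cfr'].rolling(7).mean().plot()
plt.show()
###Output
_____no_output_____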
###Markdown
Estimation of COVID-19 Pandemic Loading DataWe will use data on COVID-19 infected individuals, provided by the [Center for Systems Science and Engineering](https://systems.jhu.edu/) (CSSE) at [Johns Hopkins University](https://jhu.edu/). Dataset is available in [this GitHub Repository](https://github.com/CSSEGISandData/COVID-19).
###Code
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
plt.rcParams["figure.figsize"] = (10,3) # make figures larger
###Output
_____no_output_____
###Markdown
We can load the most recent data directly from GitHub using `pd.read_csv`. If for some reason the data is not available, you can always use the copy available locally in the `data` folder - just uncomment the line below that defines `base_url`:
###Code
base_url = "https://raw.githubusercontent.com/CSSEGISandData/COVID-19/master/csse_covid_19_data/csse_covid_19_time_series/" # loading from Internet
# base_url = "../../data/COVID/" # loading from disk
infected_dataset_url = base_url + "time_series_covid19_confirmed_global.csv"
recovered_dataset_url = base_url + "time_series_covid19_recovered_global.csv"
deaths_dataset_url = base_url + "time_series_covid19_deaths_global.csv"
countries_dataset_url = base_url + "../UID_ISO_FIPS_LookUp_Table.csv"
###Output
_____no_output_____
###Markdown
Let's now load the data for infected individuals and see how the data looks like:
###Code
infected = pd.read_csv(infected_dataset_url)
infected.head()
###Output
_____no_output_____
###Markdown
We can see that each row of the table defines the number of infected individuals for each country and/or province, and columns correspond to dates. Similar tables can be loaded for other data, such as number of recovered and number of deaths.
###Code
recovered = pd.read_csv(recovered_dataset_url)
deaths = pd.read_csv(deaths_dataset_url)
###Output
_____no_output_____
###Markdown
Making Sense of the DataFrom the table above the role of province column is not clear. Let's see the different values that are present in `Province/State` column:
###Code
infected['Province/State'].value_counts()
###Output
_____no_output_____
###Markdown
From the names we can deduce that countries like Australia and China have more detailed breakdown by provinces. Let's look for information on China to see the example:
###Code
infected[infected['Country/Region']=='China']
###Output
_____no_output_____
###Markdown
Pre-processing the Data We are not interested in breaking countries down to further territories, thus we would first get rid of this breakdown and add information on all territories together, to get info for the whole country. This can be done using `groupby`:
###Code
infected = infected.groupby('Country/Region').sum()
recovered = recovered.groupby('Country/Region').sum()
deaths = deaths.groupby('Country/Region').sum()
infected.head()
###Output
_____no_output_____
###Markdown
You can see that due to using `groupby` all DataFrames are now indexed by Country/Region. We can thus access the data for a specific country by using `.loc`:|
###Code
infected.loc['US'][2:].plot()
recovered.loc['US'][2:].plot()
plt.show()
###Output
_____no_output_____
###Markdown
> **Note** how we use `[2:]` to remove first two elements of a sequence that contain geolocation of a country. We can also drop those two columns altogether:
###Code
infected.drop(columns=['Lat','Long'],inplace=True)
recovered.drop(columns=['Lat','Long'],inplace=True)
deaths.drop(columns=['Lat','Long'],inplace=True)
###Output
_____no_output_____
###Markdown
Investigating the DataLet's now switch to investigating a specific country. Let's create a frame that contains the data on infections indexed by date:
###Code
def mkframe(country):
df = pd.DataFrame({ 'infected' : infected.loc[country] ,
'recovered' : recovered.loc[country],
'deaths' : deaths.loc[country]})
df.index = pd.to_datetime(df.index)
return df
df = mkframe('US')
df
df.plot()
plt.show()
###Output
_____no_output_____
###Markdown
Now let's compute the number of new infected people each day. This will allow us to see the speed at which pandemic progresses. The easiest day to do it is to use `diff`:
###Code
df['ninfected'] = df['infected'].diff()
df['ninfected'].plot()
plt.show()
###Output
_____no_output_____
###Markdown
We can see high fluctuations in data. Let's look closer at one of the months:
###Code
df[(df.index.year==2020) & (df.index.month==7)]['ninfected'].plot()
plt.show()
###Output
_____no_output_____
###Markdown
It clearly looks like there are weekly fluctuations in data. Because we want to be able to see the trends, it makes sense to smooth out the curve by computing running average (i.e. for each day we will compute the average value of the previous several days):
###Code
df['ninfav'] = df['ninfected'].rolling(window=7).mean()
df['ninfav'].plot()
plt.show()
###Output
_____no_output_____
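###Markdown
To see what the smoothing does (an illustrative extra, not part of the original flow), we can overlay the raw and averaged series for the same month we examined above:
###Code
# Raw daily counts vs. the 7-day rolling mean, July 2020
july = df[(df.index.year==2020) & (df.index.month==7)]
july['ninfected'].plot(label='raw')
july['ninfav'].plot(label='7-day mean')
plt.legend()
plt.show()
###Output
_____no_output_____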
###Markdown
To compare several countries, we should take each country's population into account and look at the percentage of the population that has been infected. To get the populations, let's load the dataset of countries:
###Code
countries = pd.read_csv(countries_dataset_url)
countries
###Output
_____no_output_____
###Markdown
Because this dataset contains rows for both whole countries and their provinces, to get the population of a whole country we need to select the row where `Province_State` is empty:
###Code
countries[(countries['Country_Region']=='US') & countries['Province_State'].isna()]
pop = countries[(countries['Country_Region']=='US') & countries['Province_State'].isna()]['Population'].iloc[0]
df['pinfected'] = df['infected']*100 / pop
df['pinfected'].plot(figsize=(10,3))
plt.show()
###Output
_____no_output_____
###Markdown
Computing $R_t$
To see how infectious the disease is, we look at the **basic reproduction number** $R_0$, which indicates the number of people that an infected person will go on to infect. When $R_0$ is greater than 1, the epidemic is likely to spread.
$R_0$ is a property of the disease itself and does not take into account the protective measures people may take to slow down the pandemic. During the pandemic's progression we can instead estimate the reproduction number $R_t$ at any given time $t$. It has been shown that this number can be roughly estimated by taking a window of 8 days and computing $$R_t=\frac{I_{t-3}+I_{t-2}+I_{t-1}+I_t}{I_{t-7}+I_{t-6}+I_{t-5}+I_{t-4}}$$ where $I_t$ is the number of newly infected individuals on day $t$; the four most recent days go in the numerator, so $R_t>1$ means new infections are growing.
Let's compute $R_t$ for our pandemic data. To do this, we will take a rolling window of 8 `ninfected` values and apply a function that computes the ratio above:
###Code
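# Within each 8-day rolling window x, x[:4] covers days t-7..t-4 and x[4:] covers
# days t-3..t, so the ratio below is (last 4 days) / (previous 4 days) of new cases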
df['Rt'] = df['ninfected'].rolling(8).apply(lambda x: x[4:].sum()/x[:4].sum())
df['Rt'].plot()
plt.show()
###Output
_____no_output_____
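###Markdown
As a quick check of the formula on made-up numbers (a toy example, not data from the frame): if daily infections double from one 4-day block to the next, $R_t$ should come out as 2.
###Code
toy = pd.Series([100, 100, 100, 100, 200, 200, 200, 200])
print(toy[4:].sum() / toy[:4].sum())  # 2.0
###Output
_____no_output_____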
###Markdown
You can see that there are some gaps in the graph. These can be caused by either `NaN` or `inf` values in the data: `inf` comes from division by 0, while `NaN` indicates missing data or that there is not yet enough data to compute the result (as at the very beginning of our frame, where a rolling window of width 8 is not yet available). To make the graph nicer, we replace `inf` values with `NaN` and forward-fill the gaps.
Let's look more closely at the beginning of the pandemic. We will also limit the y-axis to values below 6, to see the curve better, and draw a horizontal line at 1.
###Code
ax = df[df.index<"2020-05-01"]['Rt'].replace(np.inf,np.nan).ffill().plot(figsize=(10,3))
ax.set_ylim([0,6])
ax.axhline(1,linestyle='--',color='red')
plt.show()
###Output
_____no_output_____
###Markdown
Another interesting indicator of the pandemic is the **derivative**, or **daily difference**, in new cases. It lets us see clearly when the pandemic is accelerating or slowing down.
###Code
df['ninfected'].diff().plot()
plt.show()
###Output
_____no_output_____
###Markdown
Given that reporting causes a lot of fluctuation in the data, it makes sense to smooth the curve with a rolling average to get the overall picture. Let's again focus on the first months of the pandemic:
###Code
ax=df[df.index<"2020-06-01"]['ninfected'].diff().rolling(7).mean().plot()
ax.axhline(0,linestyle='-.',color='red')
plt.show()
###Output
_____no_output_____ |
IL_LTC_Data_Analysis-11-12.ipynb | ###Markdown
1 - Pull JSON File from Website
###Code
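# NOTE: `getResponse` is used below but never defined in the cells shown here, and
# the notebook relies on `json` and `pd` without importing them. The following is a
# minimal sketch of the assumed helper (fetch a URL, parse the JSON body) using the
# `requests` library -- an assumption, not the original author's implementation.
import json
import pandas as pd
import requests
def getResponse(url):
    # Assumed behavior: GET the URL and return the parsed JSON payload
    r = requests.get(url)
    r.raise_for_status()
    return r.json()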
def pull_IL_json_from_web():
ltc_data = getResponse('http://www.dph.illinois.gov/sitefiles/COVIDLTC.json')
# Extract Reporting Data
reporting_date = '%d-%02d-%02d' %(ltc_data['LastUpdateDate']['year'], ltc_data['LastUpdateDate']['month'], ltc_data['LastUpdateDate']['day'])
#Saving a copy of source data
ltc_data_json = json.dumps(ltc_data)
file = "Source_data/IL_" + reporting_date + "_LTC_data_Source.json"
    with open(file, "w") as f:
        f.write(ltc_data_json)
return file
json_file = pull_IL_json_from_web()
with open(json_file) as f:
ltc_data = json.load(f)
# Extract Reporting Data
reporting_date = '%d-%02d-%02d' % (ltc_data['LastUpdateDate']['year'], ltc_data['LastUpdateDate']['month'], ltc_data['LastUpdateDate']['day'])
###Output
_____no_output_____
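###Markdown
For reference, the fields accessed in this notebook suggest the feed looks roughly like this (a sketch inferred from the code, not the official IDPH schema; the numbers come from the summary printed below):
###Code
# Approximate shape of the LTC feed, inferred from the fields used in this notebook:
# {
#   "LastUpdateDate": {"year": 2020, "month": 11, "day": 6},
#   "LTC_Reported_Cases": {"confirmed_cases": 36683, "deaths": 5253},
#   "FacilityValues": [
#     {"County": "...", "FacilityName": "...", "confirmed_cases": ..., "deaths": ...},
#     ...
#   ]
# }
###Output
_____no_output_____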
###Markdown
2 - Put Outbreak Data in a DataFrame and Augment It
Data is at the outbreak level. A facility can have one to many outbreaks (whether zero is possible is unclear).
###Code
def outbreak_df_from_file(filename):
    with open(filename) as f:
        ltc_data = json.load(f)
    # Extract reporting date
    reporting_date = '%d-%02d-%02d' % (ltc_data['LastUpdateDate']['year'], ltc_data['LastUpdateDate']['month'], ltc_data['LastUpdateDate']['day'])
    df = pd.DataFrame(ltc_data['FacilityValues'])
    df['reporting_date'] = reporting_date
    df['CFR'] = df['deaths'] / df['confirmed_cases']
    df['outbreaks'] = 1  # to allow counting the number of outbreaks by facility
    return df  # the original lacked a return, but later cells rely on this frame

df = outbreak_df_from_file(json_file)

# Save outbreak data to a file
outbreak_file = 'Reporting_data/IL_' + reporting_date + '_Outbreaks_LTC_data_v2.csv'
df.to_csv(outbreak_file, index=False)
df.sort_values(by='deaths', ascending=False).head(5)
###Output
_____no_output_____
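###Markdown
One caveat (an optional refinement, not part of the original notebook): `deaths / confirmed_cases` yields `inf` for any row that reports deaths with zero confirmed cases. A minimal guard:
###Code
import numpy as np
# Turn infinite CFR values (division by zero) into NaN so they don't skew sorting
df['CFR'] = df['CFR'].replace([np.inf, -np.inf], np.nan)
###Output
_____no_output_____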
###Markdown
3 - Print Summary Data
###Code
# Get summary data from feed - Note this may not match totals - ST-TODO: Check if summary data and totals from raw data match
deaths = ltc_data['LTC_Reported_Cases']['deaths']
confirmed_cases = ltc_data['LTC_Reported_Cases']['confirmed_cases']
print('Date: %s' % reporting_date)
print('Cases: %d' % confirmed_cases)
print('Deaths: %d' % deaths)
print('Outbreaks: %d' % len(df))  # one row per outbreak
print('Facilities: %d' % df.groupby(['County', 'FacilityName']).ngroups)
###Output
Date: 2020-11-06
Cases: 36683
Deaths: 5253
Outbreaks: 1309
Facilities: 1116
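###Markdown
The `ST-TODO` in the cell above can be checked directly (an extra step, not in the original notebook) by comparing the feed's summary figures against sums over the outbreak-level rows:
###Code
# Do the feed-level summary numbers match totals computed from the outbreak rows?
print('Cases match: %s' % (confirmed_cases == df['confirmed_cases'].sum()))
print('Deaths match: %s' % (deaths == df['deaths'].sum()))
###Output
_____no_output_____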
###Markdown
4 - Get Facility-Level Data, Augment, and Save
Facilities can have multiple outbreaks; we need to sum these to get counts at the facility level.
###Code
df_facilities = df.groupby(['County', 'FacilityName']).sum()
df_facilities['CFR'] = df_facilities['deaths'] / df_facilities['confirmed_cases']
df_facilities.sort_values(by='confirmed_cases', ascending=False).to_csv('Reporting_data/IL_' + reporting_date + '_Facilities_LTC_data_v2.csv')
df_facilities.sort_values(by='confirmed_cases', ascending=False).head(10)
###Output
_____no_output_____
###Markdown
5 - County-Level Data & Charts
###Code
# County Level Data
df_county = df.groupby(by=['County']).sum()
df_county['CFR'] = (df_county['deaths'] / df_county['confirmed_cases'])
df_county.sort_values('deaths', ascending=False).to_csv('Reporting_data/IL_' + reporting_date + '_County_LTC_stats_v2.csv')
df_county.sort_values('deaths', ascending=False).head(10)
# import altair as alt
# df1=df_county.sort_values(by=['deaths'], ascending=False).reset_index()
# cols = ['Deaths Non LTC', 'LTC Deaths']
# cols = ['LTC Deaths', 'Deaths Non LTC']
# chart1 = alt.Chart(df_county.sort_values(by=['deaths'], ascending=False).reset_index()).mark_bar().encode(
# x='deaths:Q',
# y=alt.Y('County:O', sort='-x'),
# tooltip=['County', 'deaths', 'confirmed_cases', 'CFR']
# )
# chart2=chart1.encode(x=alt.X('CFR', axis=alt.Axis(format='%')))
# #chart2=chart1.encode(x=alt.X('CFR'))
# chart1 | chart2
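# A working matplotlib approximation of the commented-out Altair chart above
# (an added sketch, not part of the original notebook):
import matplotlib.pyplot as plt
top = df_county.sort_values('deaths', ascending=False).head(15)
top['deaths'].plot(kind='barh', figsize=(8, 5))
plt.gca().invert_yaxis()  # largest counts at the top
plt.xlabel('LTC deaths')
plt.show()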
###Output
_____no_output_____ |