10_error_handling.ipynb | ###Markdown
Error Handling Exceptions are useful for more than just signalling errors. They can also be used to help you handle the error, and potentially even fix the problem (a true self-healing program!). Consider this cut-down version of the `.setHeight` function from the last session...
###Code
def setHeight(height):
    if height < 0 or height > 2.5:
        raise ValueError("Invalid height: %s. This should be between 0 and 2.5 m" % height)
    print("setting the height to %s" % height)
###Output
_____no_output_____
###Markdown
The code currently correctly detects if the user supplies a height that is below 0 or above 2.5. However, what about when the user tries to set the height to something that is not a number?
###Code
setHeight("cat")
###Output
_____no_output_____
###Markdown
We get a confusing error message that says we have a `TypeError`, because a string and an integer cannot be compared with `<`. One way to address this is to convert `height` to a `float`, using `height = float(height)`
###Code
def setHeight(height):
    height = float(height)
    if height < 0 or height > 2.5:
        raise ValueError("Invalid height: %s. This should be between 0 and 2.5 m" % height)
    print("setting the height to %s" % height)
###Output
_____no_output_____
###Markdown
However, this hasn't made the error any easier to understand, as we now get a `ValueError` raised...
###Code
setHeight("cat")
###Output
_____no_output_____
###Markdown
The solution is for us to handle the exception, using a `try...except` block
###Code
def setHeight(height):
    try:
        height = float(height)
    except:
        raise TypeError("Invalid height: '%s'. You can only set the height to a numeric value" % height)
    if height < 0 or height > 2.5:
        raise ValueError("Invalid height: %s. This should be between 0 and 2.5 m" % height)
    print("setting the height to %s" % height)

setHeight("cat")
###Output
_____no_output_____
###Markdown
What's happened here? The `try:` line starts a try-block. The code that is in the try-block is run. If any of this code raises an exception, then execution stops in the try-block, and switches instead to the code in the except-block (everything within the `except:` block). In our case, `float(height)` raised an exception, so execution jumped to the except-block, in which we ran the `raise TypeError(...)` code. Now the error is much more informative, allowing the user to better understand what has gone wrong. However, exception handling can do more than this. It can allow you to fix the problem. Consider this example...
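As a small aside that goes beyond the original lesson: a bare `except:` catches every exception, including unrelated ones such as `KeyboardInterrupt` or a typo inside the try-block. A slightly more defensive sketch catches only the exceptions that `float()` can actually raise (`ValueError` for strings like "cat", `TypeError` for things like `None`):
###Code
# A variation on the lesson's function (not part of the original notebook):
# catch only the exceptions that float() can raise, rather than using a bare except
def setHeight_strict(height):
    try:
        height = float(height)
    except (ValueError, TypeError):
        raise TypeError("Invalid height: '%s'. You can only set the height to a numeric value" % height)
    if height < 0 or height > 2.5:
        raise ValueError("Invalid height: %s. This should be between 0 and 2.5 m" % height)
    print("setting the height to %s" % height)
###Output
_____no_output_____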
###Code
setHeight("1.8 m")
###Output
_____no_output_____
###Markdown
We as humans can see that this could be an acceptable input. However, the computer needs help to understand. We can add code to the except-block that can try to resolve the problem. For example, imagine we had a function that could interpret heights from strings...
###Code
def string_to_height(height):
    """This function tries to interpret the passed argument as a height
    in meters. The format should be 'X m', 'X meter' or 'X meters',
    where 'X' is a number
    """
    # convert height to a string - this always works
    height = str(height)
    words = height.split(" ")
    if len(words) == 2:
        if words[1] == "m" or words[1] == "meter" or words[1] == "meters":
            try:
                return float(words[0])
            except:
                pass
    # Getting here means that we haven't been able to extract a valid height
    raise TypeError("Cannot extract a valid height from '%s'" % height)
###Output
_____no_output_____
###Markdown
We can now call this function from within the except-block of `setHeight`
###Code
def setHeight(height):
    try:
        height = float(height)
    except:
        height = string_to_height(height)
    if height < 0 or height > 2.5:
        raise ValueError("Invalid height: %s. This should be between 0 and 2.5 m" % height)
    print("setting the height to %s" % height)

setHeight("1.8 m")
###Output
_____no_output_____
###Markdown
Exercise Exercise 1 Here is a copy of the `Person` class from the last session. Edit the `setHeight` function so that it uses exception handling and the `string_to_height` function to correctly interpret heights such as "1.8 m", and so that it gives a useful error message if it is given something weird. Check that the function correctly responds to a range of valid and invalid inputs.
###Code
class Person:
    """Class that holds a person's height"""
    def __init__(self, height=0, weight=0):
        """Construct a person with the specified name, height and weight"""
        self.setHeight(height)
        self.setWeight(weight)
    def setHeight(self, height):
        """Set the person's height in meters"""
        if height < 0 or height > 2.5:
            raise ValueError("Invalid height: %s. This should be between 0 and 2.5 meters" % height)
        self._height = height
    def setWeight(self, weight):
        """Set the person's weight in kilograms"""
        if weight < 0 or weight > 500:
            raise ValueError("Invalid weight: %s. This should be between 0 and 500 kilograms" % weight)
        self._weight = weight
    def getHeight(self):
        """Return the person's height in meters"""
        return self._height
    def getWeight(self):
        """Return the person's weight in kilograms"""
        return self._weight
    def bmi(self):
        """Return the person's body mass index (bmi)"""
        # NullPersonError is assumed to be defined in an earlier cell of the course
        if (self.getHeight() == 0 or self.getWeight() == 0):
            raise NullPersonError("Cannot calculate the BMI of a person with zero "
                                  "height or weight (%s,%s)" % (self.getHeight(), self.getWeight()))
        return self.getWeight() / self.getHeight()**2
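
# --- A possible approach to Exercise 1 (a sketch, not the official solution) ---
# Wrap the float() conversion in try/except and fall back to the
# string_to_height() helper defined earlier, mirroring the standalone
# setHeight() function above. The method name and the monkey-patching below
# are illustrative choices, not part of the original exercise.
def _setHeight_with_parsing(self, height):
    """Set the person's height in meters, also accepting strings such as '1.8 m'"""
    try:
        height = float(height)
    except (ValueError, TypeError):
        height = string_to_height(height)
    if height < 0 or height > 2.5:
        raise ValueError("Invalid height: %s. This should be between 0 and 2.5 meters" % height)
    self._height = height

# Attach the sketch to the class so it can be tested, e.g. Person("1.8 m").getHeight() -> 1.8
Person.setHeight = _setHeight_with_parsing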
###Output
_____no_output_____ |
C4_Convolutional Neural Network/Week_1 A2/Convolution_model_Application.ipynb | ###Markdown
Convolutional Neural Networks: Application Welcome to Course 4's second assignment! In this notebook, you will:- Create a mood classifier using the TF Keras Sequential API- Build a ConvNet to identify sign language digits using the TF Keras Functional API**After this assignment you will be able to:**- Build and train a ConvNet in TensorFlow for a __binary__ classification problem- Build and train a ConvNet in TensorFlow for a __multiclass__ classification problem- Explain different use cases for the Sequential and Functional APIs To complete this assignment, you should already be familiar with TensorFlow. If you are not, please refer back to the **TensorFlow Tutorial** of the third week of Course 2 ("**Improving deep neural networks**"). Table of Contents- [1 - Packages](1) - [1.1 - Load the Data and Split the Data into Train/Test Sets](1-1)- [2 - Layers in TF Keras](2)- [3 - The Sequential API](3) - [3.1 - Create the Sequential Model](3-1) - [Exercise 1 - happyModel](ex-1) - [3.2 - Train and Evaluate the Model](3-2)- [4 - The Functional API](4) - [4.1 - Load the SIGNS Dataset](4-1) - [4.2 - Split the Data into Train/Test Sets](4-2) - [4.3 - Forward Propagation](4-3) - [Exercise 2 - convolutional_model](ex-2) - [4.4 - Train the Model](4-4)- [5 - History Object](5)- [6 - Bibliography](6) 1 - Packages As usual, begin by loading in the packages.
###Code
import math
import numpy as np
import h5py
import matplotlib.pyplot as plt
from matplotlib.pyplot import imread
import scipy
from PIL import Image
import pandas as pd
import tensorflow as tf
import tensorflow.keras.layers as tfl
from tensorflow.python.framework import ops
from cnn_utils import *
from test_utils import summary, comparator
%matplotlib inline
np.random.seed(1)
###Output
_____no_output_____
###Markdown
1.1 - Load the Data and Split the Data into Train/Test SetsYou'll be using the Happy House dataset for this part of the assignment, which contains images of peoples' faces. Your task will be to build a ConvNet that determines whether the people in the images are smiling or not -- because they only get to enter the house if they're smiling!
###Code
X_train_orig, Y_train_orig, X_test_orig, Y_test_orig, classes = load_happy_dataset()
# Normalize image vectors
X_train = X_train_orig/255.
X_test = X_test_orig/255.
# Reshape
Y_train = Y_train_orig.T
Y_test = Y_test_orig.T
print ("number of training examples = " + str(X_train.shape[0]))
print ("number of test examples = " + str(X_test.shape[0]))
print ("X_train shape: " + str(X_train.shape))
print ("Y_train shape: " + str(Y_train.shape))
print ("X_test shape: " + str(X_test.shape))
print ("Y_test shape: " + str(Y_test.shape))
###Output
number of training examples = 600
number of test examples = 150
X_train shape: (600, 64, 64, 3)
Y_train shape: (600, 1)
X_test shape: (150, 64, 64, 3)
Y_test shape: (150, 1)
###Markdown
You can display the images contained in the dataset. Images are **64x64** pixels in RGB format (3 channels).
###Code
index = 124
plt.imshow(X_train_orig[index]) #display sample training image
plt.show()
###Output
_____no_output_____
###Markdown
2 - Layers in TF Keras In the previous assignment, you created layers manually in numpy. In TF Keras, you don't have to write code directly to create layers. Rather, TF Keras has pre-defined layers you can use. When you create a layer in TF Keras, you are creating a function that takes some input and transforms it into an output you can reuse later. Nice and easy! 3 - The Sequential APIIn the previous assignment, you built helper functions using `numpy` to understand the mechanics behind convolutional neural networks. Most practical applications of deep learning today are built using programming frameworks, which have many built-in functions you can simply call. Keras is a high-level abstraction built on top of TensorFlow, which allows for even more simplified and optimized model creation and training. For the first part of this assignment, you'll create a model using TF Keras' Sequential API, which allows you to build layer by layer, and is ideal for building models where each layer has **exactly one** input tensor and **one** output tensor. As you'll see, using the Sequential API is simple and straightforward, but is only appropriate for simpler, more straightforward tasks. Later in this notebook you'll spend some time building with a more flexible, powerful alternative: the Functional API. 3.1 - Create the Sequential ModelAs mentioned earlier, the TensorFlow Keras Sequential API can be used to build simple models with layer operations that proceed in a sequential order. You can also add layers incrementally to a Sequential model with the `.add()` method, or remove them using the `.pop()` method, much like you would in a regular Python list.Actually, you can think of a Sequential model as behaving like a list of layers. Like Python lists, Sequential layers are ordered, and the order in which they are specified matters. If your model is non-linear or contains layers with multiple inputs or outputs, a Sequential model wouldn't be the right choice!For any layer construction in Keras, you'll need to specify the input shape in advance. This is because in Keras, the shape of the weights is based on the shape of the inputs. The weights are only created when the model first sees some input data. Sequential models can be created by passing a list of layers to the Sequential constructor, like you will do in the next assignment. Exercise 1 - happyModelImplement the `happyModel` function below to build the following model: `ZEROPAD2D -> CONV2D -> BATCHNORM -> RELU -> MAXPOOL -> FLATTEN -> DENSE`. Take help from [tf.keras.layers](https://www.tensorflow.org/api_docs/python/tf/keras/layers) Also, plug in the following parameters for all the steps: - [ZeroPadding2D](https://www.tensorflow.org/api_docs/python/tf/keras/layers/ZeroPadding2D): padding 3, input shape 64 x 64 x 3 - [Conv2D](https://www.tensorflow.org/api_docs/python/tf/keras/layers/Conv2D): Use 32 7x7 filters, stride 1 - [BatchNormalization](https://www.tensorflow.org/api_docs/python/tf/keras/layers/BatchNormalization): for axis 3 - [ReLU](https://www.tensorflow.org/api_docs/python/tf/keras/layers/ReLU) - [MaxPool2D](https://www.tensorflow.org/api_docs/python/tf/keras/layers/MaxPool2D): Using default parameters - [Flatten](https://www.tensorflow.org/api_docs/python/tf/keras/layers/Flatten) the previous output. - Fully-connected ([Dense](https://www.tensorflow.org/api_docs/python/tf/keras/layers/Dense)) layer: Apply a fully connected layer with 1 neuron and a sigmoid activation. **Hint:** Use **tfl** as shorthand for **tensorflow.keras.layers**
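Before the graded function, here is a tiny, non-graded illustration of the list-like `.add()` / `.pop()` behaviour described above (the layer sizes are arbitrary choices for the demo, not part of the assignment):
###Code
# Non-graded demo of Sequential's list-like behaviour (arbitrary layer sizes)
demo_model = tf.keras.Sequential([tf.keras.Input(shape=(4,)), tfl.Dense(8, activation='relu')])
demo_model.add(tfl.Dense(1, activation='sigmoid'))  # append a layer, like list.append
print(len(demo_model.layers))                       # -> 2
demo_model.pop()                                    # remove the last layer, like list.pop
print(len(demo_model.layers))                       # -> 1
###Output
_____no_output_____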
###Code
# GRADED FUNCTION: happyModel
def happyModel():
    """
    Implements the forward propagation for the binary classification model:
    ZEROPAD2D -> CONV2D -> BATCHNORM -> RELU -> MAXPOOL -> FLATTEN -> DENSE

    Note that for simplicity and grading purposes, you'll hard-code all the values
    such as the stride and kernel (filter) sizes.
    Normally, functions should take these values as function parameters.

    Arguments:
    None

    Returns:
    model -- TF Keras model (object containing the information for the entire training process)
    """
    model = tf.keras.Sequential([
        ## ZeroPadding2D with padding 3, input shape of 64 x 64 x 3
        ## Conv2D with 32 7x7 filters and stride of 1
        ## BatchNormalization for axis 3
        ## ReLU
        ## Max Pooling 2D with default parameters
        ## Flatten layer
        ## Dense layer with 1 unit for output & 'sigmoid' activation
        # YOUR CODE STARTS HERE
        tf.keras.Input(shape=(64, 64, 3)),
        tfl.ZeroPadding2D(padding=3),
        tfl.Conv2D(32, 7),
        tfl.BatchNormalization(axis=3),
        tfl.ReLU(),
        tfl.MaxPool2D(),
        tfl.Flatten(),
        tfl.Dense(units=1, activation="sigmoid")
        # YOUR CODE ENDS HERE
    ])

    return model
happy_model = happyModel()
# Print a summary for each layer
for layer in summary(happy_model):
print(layer)
output = [['ZeroPadding2D', (None, 70, 70, 3), 0, ((3, 3), (3, 3))],
['Conv2D', (None, 64, 64, 32), 4736, 'valid', 'linear', 'GlorotUniform'],
['BatchNormalization', (None, 64, 64, 32), 128],
['ReLU', (None, 64, 64, 32), 0],
['MaxPooling2D', (None, 32, 32, 32), 0, (2, 2), (2, 2), 'valid'],
['Flatten', (None, 32768), 0],
['Dense', (None, 1), 32769, 'sigmoid']]
comparator(summary(happy_model), output)
###Output
['ZeroPadding2D', (None, 70, 70, 3), 0, ((3, 3), (3, 3))]
['Conv2D', (None, 64, 64, 32), 4736, 'valid', 'linear', 'GlorotUniform']
['BatchNormalization', (None, 64, 64, 32), 128]
['ReLU', (None, 64, 64, 32), 0]
['MaxPooling2D', (None, 32, 32, 32), 0, (2, 2), (2, 2), 'valid']
['Flatten', (None, 32768), 0]
['Dense', (None, 1), 32769, 'sigmoid']
All tests passed.
###Code
happy_model.compile(optimizer='adam',
loss='binary_crossentropy',
metrics=['accuracy'])
###Output
_____no_output_____
###Markdown
It's time to check your model's parameters with the `.summary()` method. This will display the types of layers you have, the shape of the outputs, and how many parameters are in each layer.
###Code
happy_model.summary()
###Output
Model: "sequential_2"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
zero_padding2d_6 (ZeroPaddin (None, 70, 70, 3) 0
_________________________________________________________________
conv2d_6 (Conv2D) (None, 64, 64, 32) 4736
_________________________________________________________________
batch_normalization_6 (Batch (None, 64, 64, 32) 128
_________________________________________________________________
re_lu_4 (ReLU) (None, 64, 64, 32) 0
_________________________________________________________________
max_pooling2d_4 (MaxPooling2 (None, 32, 32, 32) 0
_________________________________________________________________
flatten_4 (Flatten) (None, 32768) 0
_________________________________________________________________
dense_2 (Dense) (None, 1) 32769
=================================================================
Total params: 37,633
Trainable params: 37,569
Non-trainable params: 64
_________________________________________________________________
###Markdown
3.2 - Train and Evaluate the ModelAfter creating the model, compiling it with your choice of optimizer and loss function, and doing a sanity check on its contents, you are now ready to build! Simply call `.fit()` to train. That's it! No need for mini-batching, saving, or complex backpropagation computations. That's all been done for you, as you're using a TensorFlow dataset with the batches specified already. You do have the option to specify epoch number or minibatch size if you like (for example, in the case of an un-batched dataset).
###Code
happy_model.fit(X_train, Y_train, epochs=10, batch_size=16)
###Output
Epoch 1/10
38/38 [==============================] - 4s 103ms/step - loss: 1.0915 - accuracy: 0.7017
Epoch 2/10
38/38 [==============================] - 4s 100ms/step - loss: 0.3747 - accuracy: 0.8500
Epoch 3/10
38/38 [==============================] - 4s 95ms/step - loss: 0.1534 - accuracy: 0.9350
Epoch 4/10
38/38 [==============================] - 4s 97ms/step - loss: 0.1568 - accuracy: 0.9417
Epoch 5/10
38/38 [==============================] - 4s 95ms/step - loss: 0.1148 - accuracy: 0.9550
Epoch 6/10
38/38 [==============================] - 4s 97ms/step - loss: 0.0631 - accuracy: 0.9817
Epoch 7/10
38/38 [==============================] - 4s 97ms/step - loss: 0.0847 - accuracy: 0.9667
Epoch 8/10
38/38 [==============================] - 4s 100ms/step - loss: 0.1155 - accuracy: 0.9567
Epoch 9/10
38/38 [==============================] - 4s 100ms/step - loss: 0.0635 - accuracy: 0.9783
Epoch 10/10
38/38 [==============================] - 4s 98ms/step - loss: 0.0489 - accuracy: 0.9817
###Markdown
After that completes, just use `.evaluate()` to evaluate against your test set. This function will print the value of the loss function and the performance metrics specified during the compilation of the model. In this case, the `binary_crossentropy` and the `accuracy` respectively.
###Code
happy_model.evaluate(X_test, Y_test)
###Output
5/5 [==============================] - 0s 30ms/step - loss: 0.1615 - accuracy: 0.9200
###Markdown
Easy, right? But what if you need to build a model with shared layers, branches, or multiple inputs and outputs? This is where Sequential, with its beautifully simple yet limited functionality, won't be able to help you. Next up: Enter the Functional API, your slightly more complex, highly flexible friend. 4 - The Functional API Welcome to the second half of the assignment, where you'll use Keras' flexible [Functional API](https://www.tensorflow.org/guide/keras/functional) to build a ConvNet that can differentiate between 6 sign language digits. The Functional API can handle models with non-linear topology, shared layers, as well as layers with multiple inputs or outputs. Imagine that, where the Sequential API requires the model to move in a linear fashion through its layers, the Functional API allows much more flexibility. Where Sequential is a straight line, a Functional model is a graph, where the nodes of the layers can connect in many more ways than one. In the visual example below, the single possible direction of movement through a Sequential model is shown in contrast to a skip connection, which is just one of the many ways a Functional model can be constructed. A skip connection, as you might have guessed, skips some layer in the network and feeds the output to a later layer in the network. Don't worry, you'll be spending more time with skip connections very soon! 4.1 - Load the SIGNS Dataset As a reminder, the SIGNS dataset is a collection of 6 signs representing numbers from 0 to 5.
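Before loading the data, and to make the skip-connection idea above concrete, here is a minimal, hypothetical sketch of a skip connection written with the Functional API (the layer sizes are illustrative only and are not part of the assignment):
###Code
# Minimal illustrative sketch of a skip connection with the Functional API
# (hypothetical layer sizes; not part of the graded assignment)
skip_in = tf.keras.Input(shape=(64, 64, 3))
x = tfl.Conv2D(3, 3, padding='same', activation='relu')(skip_in)
x = tfl.Add()([x, skip_in])  # the "skip": the input re-enters the graph further down
skip_out = tfl.Flatten()(x)
skip_demo = tf.keras.Model(inputs=skip_in, outputs=skip_out)
###Output
_____no_output_____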
###Code
# Loading the data (signs)
X_train_orig, Y_train_orig, X_test_orig, Y_test_orig, classes = load_signs_dataset()
###Output
_____no_output_____
###Markdown
The next cell will show you an example of a labelled image in the dataset. Feel free to change the value of `index` below and re-run to see different examples.
###Code
# Example of an image from the dataset
index = 9
plt.imshow(X_train_orig[index])
print ("y = " + str(np.squeeze(Y_train_orig[:, index])))
###Output
y = 4
###Markdown
4.2 - Split the Data into Train/Test SetsIn Course 2, you built a fully-connected network for this dataset. But since this is an image dataset, it is more natural to apply a ConvNet to it.To get started, let's examine the shapes of your data.
###Code
X_train = X_train_orig/255.
X_test = X_test_orig/255.
Y_train = convert_to_one_hot(Y_train_orig, 6).T
Y_test = convert_to_one_hot(Y_test_orig, 6).T
print ("number of training examples = " + str(X_train.shape[0]))
print ("number of test examples = " + str(X_test.shape[0]))
print ("X_train shape: " + str(X_train.shape))
print ("Y_train shape: " + str(Y_train.shape))
print ("X_test shape: " + str(X_test.shape))
print ("Y_test shape: " + str(Y_test.shape))
###Output
number of training examples = 1080
number of test examples = 120
X_train shape: (1080, 64, 64, 3)
Y_train shape: (1080, 6)
X_test shape: (120, 64, 64, 3)
Y_test shape: (120, 6)
###Markdown
4.3 - Forward PropagationIn TensorFlow, there are built-in functions that implement the convolution steps for you. By now, you should be familiar with how TensorFlow builds computational graphs. In the [Functional API](https://www.tensorflow.org/guide/keras/functional), you create a graph of layers. This is what allows such great flexibility.However, the following model could also be defined using the Sequential API since the information flow is on a single line. But don't deviate. What we want you to learn is to use the functional API.Begin building your graph of layers by creating an input node that functions as a callable object:- **input_img = tf.keras.Input(shape=input_shape):** Then, create a new node in the graph of layers by calling a layer on the `input_img` object: - **tf.keras.layers.Conv2D(filters= ... , kernel_size= ... , padding='same')(input_img):** Read the full documentation on [Conv2D](https://www.tensorflow.org/api_docs/python/tf/keras/layers/Conv2D).- **tf.keras.layers.MaxPool2D(pool_size=(f, f), strides=(s, s), padding='same'):** `MaxPool2D()` downsamples your input using a window of size (f, f) and strides of size (s, s) to carry out max pooling over each window. For max pooling, you usually operate on a single example at a time and a single channel at a time. Read the full documentation on [MaxPool2D](https://www.tensorflow.org/api_docs/python/tf/keras/layers/MaxPool2D).- **tf.keras.layers.ReLU():** computes the elementwise ReLU of Z (which can be any shape). You can read the full documentation on [ReLU](https://www.tensorflow.org/api_docs/python/tf/keras/layers/ReLU).- **tf.keras.layers.Flatten()**: given a tensor "P", this function takes each training (or test) example in the batch and flattens it into a 1D vector. * If a tensor P has the shape (batch_size,h,w,c), it returns a flattened tensor with shape (batch_size, k), where $k=h \times w \times c$. "k" equals the product of all the dimension sizes other than the first dimension. * For example, given a tensor with dimensions [100, 2, 3, 4], it flattens the tensor to be of shape [100, 24], where 24 = 2 * 3 * 4. You can read the full documentation on [Flatten](https://www.tensorflow.org/api_docs/python/tf/keras/layers/Flatten).- **tf.keras.layers.Dense(units= ... , activation='softmax')(F):** given the flattened input F, it returns the output computed using a fully connected layer. You can read the full documentation on [Dense](https://www.tensorflow.org/api_docs/python/tf/keras/layers/Dense).In the last function above (`tf.keras.layers.Dense()`), the fully connected layer automatically initializes weights in the graph and keeps on training them as you train the model. Hence, you did not need to initialize those weights when initializing the parameters.Lastly, before creating the model, you'll need to define the output using the last of the function's compositions (in this example, a Dense layer): - **outputs = tf.keras.layers.Dense(units=6, activation='softmax')(F)** Window, kernel, filter, poolThe words "kernel" and "filter" are used to refer to the same thing. The word "filter" accounts for the amount of "kernels" that will be used in a single convolution layer. "Pool" is the name of the operation that takes the max or average value of the kernels. This is why the parameter `pool_size` refers to `kernel_size`, and you use `(f,f)` to refer to the filter size. Pool size and kernel size refer to the same thing in different objects - They refer to the shape of the window where the operation takes place. 
Exercise 2 - convolutional_modelImplement the `convolutional_model` function below to build the following model: `CONV2D -> RELU -> MAXPOOL -> CONV2D -> RELU -> MAXPOOL -> FLATTEN -> DENSE`. Use the functions above! Also, plug in the following parameters for all the steps: - [Conv2D](https://www.tensorflow.org/api_docs/python/tf/keras/layers/Conv2D): Use 8 4 by 4 filters, stride 1, padding is "SAME" - [ReLU](https://www.tensorflow.org/api_docs/python/tf/keras/layers/ReLU) - [MaxPool2D](https://www.tensorflow.org/api_docs/python/tf/keras/layers/MaxPool2D): Use an 8 by 8 filter size and an 8 by 8 stride, padding is "SAME" - **Conv2D**: Use 16 2 by 2 filters, stride 1, padding is "SAME" - **ReLU** - **MaxPool2D**: Use a 4 by 4 filter size and a 4 by 4 stride, padding is "SAME" - [Flatten](https://www.tensorflow.org/api_docs/python/tf/keras/layers/Flatten) the previous output. - Fully-connected ([Dense](https://www.tensorflow.org/api_docs/python/tf/keras/layers/Dense)) layer: Apply a fully connected layer with 6 neurons and a softmax activation.
###Code
# GRADED FUNCTION: convolutional_model
def convolutional_model(input_shape):
    """
    Implements the forward propagation for the model:
    CONV2D -> RELU -> MAXPOOL -> CONV2D -> RELU -> MAXPOOL -> FLATTEN -> DENSE

    Note that for simplicity and grading purposes, you'll hard-code some values
    such as the stride and kernel (filter) sizes.
    Normally, functions should take these values as function parameters.

    Arguments:
    input_shape -- shape of the input images, e.g. (64, 64, 3)

    Returns:
    model -- TF Keras model (object containing the information for the entire training process)
    """
    input_img = tf.keras.Input(shape=input_shape)
    ## CONV2D: 8 filters 4x4, stride of 1, padding 'SAME'
    # Z1 = None
    ## RELU
    # A1 = None
    ## MAXPOOL: window 8x8, stride 8, padding 'SAME'
    # P1 = None
    ## CONV2D: 16 filters 2x2, stride 1, padding 'SAME'
    # Z2 = None
    ## RELU
    # A2 = None
    ## MAXPOOL: window 4x4, stride 4, padding 'SAME'
    # P2 = None
    ## FLATTEN
    # F = None
    ## Dense layer
    ## 6 neurons in output layer. Hint: one of the arguments should be "activation='softmax'"
    # outputs = None
    # YOUR CODE STARTS HERE
    Z1 = tfl.Conv2D(filters=8, kernel_size=4, strides=(1, 1), padding='same')(input_img)
    A1 = tfl.ReLU()(Z1)
    P1 = tfl.MaxPool2D(pool_size=(8, 8), strides=8, padding='same')(A1)
    Z2 = tfl.Conv2D(filters=16, kernel_size=2, strides=(1, 1), padding='same')(P1)
    A2 = tfl.ReLU()(Z2)
    P2 = tfl.MaxPool2D(pool_size=(4, 4), strides=4, padding='same')(A2)
    F = tfl.Flatten()(P2)
    outputs = tfl.Dense(units=6, activation="softmax")(F)
    # YOUR CODE ENDS HERE
    model = tf.keras.Model(inputs=input_img, outputs=outputs)
    return model
conv_model = convolutional_model((64, 64, 3))
conv_model.compile(optimizer='adam',
loss='categorical_crossentropy',
metrics=['accuracy'])
conv_model.summary()
output = [['InputLayer', [(None, 64, 64, 3)], 0],
['Conv2D', (None, 64, 64, 8), 392, 'same', 'linear', 'GlorotUniform'],
['ReLU', (None, 64, 64, 8), 0],
['MaxPooling2D', (None, 8, 8, 8), 0, (8, 8), (8, 8), 'same'],
['Conv2D', (None, 8, 8, 16), 528, 'same', 'linear', 'GlorotUniform'],
['ReLU', (None, 8, 8, 16), 0],
['MaxPooling2D', (None, 2, 2, 16), 0, (4, 4), (4, 4), 'same'],
['Flatten', (None, 64), 0],
['Dense', (None, 6), 390, 'softmax']]
comparator(summary(conv_model), output)
###Output
Model: "functional_3"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
input_7 (InputLayer) [(None, 64, 64, 3)] 0
_________________________________________________________________
conv2d_11 (Conv2D) (None, 64, 64, 8) 392
_________________________________________________________________
re_lu_9 (ReLU) (None, 64, 64, 8) 0
_________________________________________________________________
max_pooling2d_9 (MaxPooling2 (None, 8, 8, 8) 0
_________________________________________________________________
conv2d_12 (Conv2D) (None, 8, 8, 16) 528
_________________________________________________________________
re_lu_10 (ReLU) (None, 8, 8, 16) 0
_________________________________________________________________
max_pooling2d_10 (MaxPooling (None, 2, 2, 16) 0
_________________________________________________________________
flatten_7 (Flatten) (None, 64) 0
_________________________________________________________________
dense_5 (Dense) (None, 6) 390
=================================================================
Total params: 1,310
Trainable params: 1,310
Non-trainable params: 0
_________________________________________________________________
All tests passed!
###Markdown
Both the Sequential and Functional APIs return a TF Keras model object. The only difference is how inputs are handled inside the object model! 4.4 - Train the Model
###Code
train_dataset = tf.data.Dataset.from_tensor_slices((X_train, Y_train)).batch(64)
test_dataset = tf.data.Dataset.from_tensor_slices((X_test, Y_test)).batch(64)
history = conv_model.fit(train_dataset, epochs=100, validation_data=test_dataset)
###Output
Epoch 1/100
17/17 [==============================] - 2s 107ms/step - loss: 1.8173 - accuracy: 0.1574 - val_loss: 1.7957 - val_accuracy: 0.1750
Epoch 2/100
17/17 [==============================] - 2s 106ms/step - loss: 1.7911 - accuracy: 0.1722 - val_loss: 1.7902 - val_accuracy: 0.1750
Epoch 3/100
17/17 [==============================] - 2s 106ms/step - loss: 1.7868 - accuracy: 0.2454 - val_loss: 1.7864 - val_accuracy: 0.2667
Epoch 4/100
17/17 [==============================] - 2s 111ms/step - loss: 1.7825 - accuracy: 0.2815 - val_loss: 1.7825 - val_accuracy: 0.2833
Epoch 5/100
17/17 [==============================] - 2s 106ms/step - loss: 1.7779 - accuracy: 0.2870 - val_loss: 1.7780 - val_accuracy: 0.3083
Epoch 6/100
17/17 [==============================] - 2s 106ms/step - loss: 1.7713 - accuracy: 0.3231 - val_loss: 1.7733 - val_accuracy: 0.3167
Epoch 7/100
17/17 [==============================] - 2s 106ms/step - loss: 1.7618 - accuracy: 0.3528 - val_loss: 1.7644 - val_accuracy: 0.3417
Epoch 8/100
17/17 [==============================] - 2s 106ms/step - loss: 1.7494 - accuracy: 0.3731 - val_loss: 1.7546 - val_accuracy: 0.4000
Epoch 9/100
17/17 [==============================] - 2s 106ms/step - loss: 1.7342 - accuracy: 0.4037 - val_loss: 1.7406 - val_accuracy: 0.4167
Epoch 10/100
17/17 [==============================] - 2s 106ms/step - loss: 1.7138 - accuracy: 0.4130 - val_loss: 1.7245 - val_accuracy: 0.4083
Epoch 11/100
17/17 [==============================] - 2s 111ms/step - loss: 1.6869 - accuracy: 0.4157 - val_loss: 1.7000 - val_accuracy: 0.4250
Epoch 12/100
17/17 [==============================] - 2s 111ms/step - loss: 1.6509 - accuracy: 0.4407 - val_loss: 1.6688 - val_accuracy: 0.4083
Epoch 13/100
17/17 [==============================] - 2s 106ms/step - loss: 1.6073 - accuracy: 0.4713 - val_loss: 1.6309 - val_accuracy: 0.4500
Epoch 14/100
17/17 [==============================] - 2s 106ms/step - loss: 1.5605 - accuracy: 0.4685 - val_loss: 1.5906 - val_accuracy: 0.4250
Epoch 15/100
17/17 [==============================] - 2s 106ms/step - loss: 1.5109 - accuracy: 0.4843 - val_loss: 1.5477 - val_accuracy: 0.4500
Epoch 16/100
17/17 [==============================] - 2s 106ms/step - loss: 1.4619 - accuracy: 0.4981 - val_loss: 1.5063 - val_accuracy: 0.4417
Epoch 17/100
17/17 [==============================] - 2s 106ms/step - loss: 1.4160 - accuracy: 0.5037 - val_loss: 1.4626 - val_accuracy: 0.4417
Epoch 18/100
17/17 [==============================] - 2s 111ms/step - loss: 1.3732 - accuracy: 0.5111 - val_loss: 1.4205 - val_accuracy: 0.4417
Epoch 19/100
17/17 [==============================] - 2s 111ms/step - loss: 1.3339 - accuracy: 0.5296 - val_loss: 1.3820 - val_accuracy: 0.4833
Epoch 20/100
17/17 [==============================] - 2s 112ms/step - loss: 1.2966 - accuracy: 0.5509 - val_loss: 1.3449 - val_accuracy: 0.5000
Epoch 21/100
17/17 [==============================] - 2s 112ms/step - loss: 1.2620 - accuracy: 0.5509 - val_loss: 1.3088 - val_accuracy: 0.5333
Epoch 22/100
17/17 [==============================] - 2s 106ms/step - loss: 1.2303 - accuracy: 0.5630 - val_loss: 1.2764 - val_accuracy: 0.5583
Epoch 23/100
17/17 [==============================] - 2s 111ms/step - loss: 1.2006 - accuracy: 0.5778 - val_loss: 1.2452 - val_accuracy: 0.5833
Epoch 24/100
17/17 [==============================] - 2s 106ms/step - loss: 1.1716 - accuracy: 0.5843 - val_loss: 1.2128 - val_accuracy: 0.6000
Epoch 25/100
17/17 [==============================] - 2s 106ms/step - loss: 1.1447 - accuracy: 0.5981 - val_loss: 1.1853 - val_accuracy: 0.6000
Epoch 26/100
17/17 [==============================] - 2s 107ms/step - loss: 1.1193 - accuracy: 0.6102 - val_loss: 1.1570 - val_accuracy: 0.6250
Epoch 27/100
17/17 [==============================] - 2s 106ms/step - loss: 1.0970 - accuracy: 0.6167 - val_loss: 1.1344 - val_accuracy: 0.6333
Epoch 28/100
17/17 [==============================] - 2s 107ms/step - loss: 1.0739 - accuracy: 0.6204 - val_loss: 1.1119 - val_accuracy: 0.6333
Epoch 29/100
17/17 [==============================] - 2s 107ms/step - loss: 1.0550 - accuracy: 0.6296 - val_loss: 1.0939 - val_accuracy: 0.6500
Epoch 30/100
17/17 [==============================] - 2s 106ms/step - loss: 1.0326 - accuracy: 0.6380 - val_loss: 1.0715 - val_accuracy: 0.6583
Epoch 31/100
17/17 [==============================] - 2s 107ms/step - loss: 1.0135 - accuracy: 0.6500 - val_loss: 1.0524 - val_accuracy: 0.6583
Epoch 32/100
17/17 [==============================] - 2s 106ms/step - loss: 0.9952 - accuracy: 0.6602 - val_loss: 1.0345 - val_accuracy: 0.6500
Epoch 33/100
17/17 [==============================] - 2s 107ms/step - loss: 0.9765 - accuracy: 0.6667 - val_loss: 1.0159 - val_accuracy: 0.6667
Epoch 34/100
17/17 [==============================] - 2s 112ms/step - loss: 0.9605 - accuracy: 0.6704 - val_loss: 1.0003 - val_accuracy: 0.6667
Epoch 35/100
17/17 [==============================] - 2s 106ms/step - loss: 0.9440 - accuracy: 0.6806 - val_loss: 0.9843 - val_accuracy: 0.6667
Epoch 36/100
17/17 [==============================] - 2s 106ms/step - loss: 0.9282 - accuracy: 0.6880 - val_loss: 0.9698 - val_accuracy: 0.6667
Epoch 37/100
17/17 [==============================] - 2s 106ms/step - loss: 0.9128 - accuracy: 0.6944 - val_loss: 0.9553 - val_accuracy: 0.6750
Epoch 38/100
17/17 [==============================] - 2s 108ms/step - loss: 0.8981 - accuracy: 0.7019 - val_loss: 0.9420 - val_accuracy: 0.6833
Epoch 39/100
17/17 [==============================] - 2s 106ms/step - loss: 0.8836 - accuracy: 0.7065 - val_loss: 0.9283 - val_accuracy: 0.6833
Epoch 40/100
17/17 [==============================] - 2s 106ms/step - loss: 0.8705 - accuracy: 0.7102 - val_loss: 0.9160 - val_accuracy: 0.7000
Epoch 41/100
17/17 [==============================] - 2s 111ms/step - loss: 0.8572 - accuracy: 0.7167 - val_loss: 0.9036 - val_accuracy: 0.6917
Epoch 42/100
17/17 [==============================] - 2s 112ms/step - loss: 0.8449 - accuracy: 0.7241 - val_loss: 0.8923 - val_accuracy: 0.7000
Epoch 43/100
17/17 [==============================] - 2s 111ms/step - loss: 0.8324 - accuracy: 0.7315 - val_loss: 0.8806 - val_accuracy: 0.7083
Epoch 44/100
17/17 [==============================] - 2s 112ms/step - loss: 0.8203 - accuracy: 0.7370 - val_loss: 0.8693 - val_accuracy: 0.7167
Epoch 45/100
17/17 [==============================] - 2s 111ms/step - loss: 0.8090 - accuracy: 0.7380 - val_loss: 0.8578 - val_accuracy: 0.7167
Epoch 46/100
17/17 [==============================] - 2s 111ms/step - loss: 0.7969 - accuracy: 0.7435 - val_loss: 0.8470 - val_accuracy: 0.7250
Epoch 47/100
17/17 [==============================] - 2s 106ms/step - loss: 0.7873 - accuracy: 0.7491 - val_loss: 0.8364 - val_accuracy: 0.7333
Epoch 48/100
17/17 [==============================] - 2s 106ms/step - loss: 0.7757 - accuracy: 0.7500 - val_loss: 0.8264 - val_accuracy: 0.7333
Epoch 49/100
17/17 [==============================] - 2s 111ms/step - loss: 0.7666 - accuracy: 0.7528 - val_loss: 0.8165 - val_accuracy: 0.7333
Epoch 50/100
17/17 [==============================] - 2s 106ms/step - loss: 0.7557 - accuracy: 0.7546 - val_loss: 0.8069 - val_accuracy: 0.7333
Epoch 51/100
17/17 [==============================] - 2s 107ms/step - loss: 0.7466 - accuracy: 0.7583 - val_loss: 0.7975 - val_accuracy: 0.7333
Epoch 52/100
17/17 [==============================] - 2s 107ms/step - loss: 0.7368 - accuracy: 0.7602 - val_loss: 0.7881 - val_accuracy: 0.7333
Epoch 53/100
17/17 [==============================] - 2s 106ms/step - loss: 0.7283 - accuracy: 0.7630 - val_loss: 0.7787 - val_accuracy: 0.7333
Epoch 54/100
17/17 [==============================] - 2s 106ms/step - loss: 0.7184 - accuracy: 0.7694 - val_loss: 0.7706 - val_accuracy: 0.7333
Epoch 55/100
17/17 [==============================] - 2s 106ms/step - loss: 0.7100 - accuracy: 0.7704 - val_loss: 0.7601 - val_accuracy: 0.7333
Epoch 56/100
17/17 [==============================] - 2s 106ms/step - loss: 0.7007 - accuracy: 0.7759 - val_loss: 0.7522 - val_accuracy: 0.7417
Epoch 57/100
17/17 [==============================] - 2s 106ms/step - loss: 0.6932 - accuracy: 0.7759 - val_loss: 0.7432 - val_accuracy: 0.7583
###Markdown
5 - History Object The history object is an output of the `.fit()` operation, and provides a record of all the loss and metric values in memory. It's stored as a dictionary that you can retrieve at `history.history`:
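For a quick orientation before dumping the whole dictionary, you can list which metrics were recorded (given the `compile`/`fit` calls above, these should be the training and validation loss and accuracy):
###Code
# Peek at the recorded metrics; each key maps to a list with one value per epoch
print(list(history.history.keys()))
###Output
_____no_output_____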
###Code
history.history
###Output
_____no_output_____
###Markdown
Now visualize the loss over time using `history.history`:
###Code
# Each entry in history.history (e.g. history.history["loss"]) is a list with one
# value per epoch that the model was trained on.
df_loss_acc = pd.DataFrame(history.history)
# .copy() avoids pandas' SettingWithCopyWarning when renaming in place
df_loss = df_loss_acc[['loss','val_loss']].copy()
df_loss.rename(columns={'loss':'train','val_loss':'validation'},inplace=True)
df_acc = df_loss_acc[['accuracy','val_accuracy']].copy()
df_acc.rename(columns={'accuracy':'train','val_accuracy':'validation'},inplace=True)
df_loss.plot(title='Model loss',figsize=(12,8)).set(xlabel='Epoch',ylabel='Loss')
df_acc.plot(title='Model Accuracy',figsize=(12,8)).set(xlabel='Epoch',ylabel='Accuracy')
###Output
_____no_output_____ |
Heat eqn/Heat eqn 2.ipynb | ###Markdown
Imports
###Code
import tensorflow as tf
print(tf.__version__)
print(tf.test.is_built_with_cuda())
print(tf.test.is_gpu_available(cuda_only=False, min_cuda_compute_capability=None))
import matplotlib.pyplot as plt
from mpl_toolkits.mplot3d import Axes3D
# from plotting import newfig, savefig
import matplotlib.gridspec as gridspec
from mpl_toolkits.axes_grid1 import make_axes_locatable
from matplotlib.animation import FuncAnimation, PillowWriter
import numpy as np
import scipy.io
from scipy.interpolate import griddata
import time
from pyDOE import lhs
import pickle as pkl
%matplotlib widget
###Output
_____no_output_____
###Markdown
Equation
###Code
k = 0.061644
###Output
_____no_output_____
###Markdown
$$\frac{\partial u}{\partial t} = k \frac{\partial^2 u}{\partial x^2}$$ Load Data Model
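The network below is trained in the physics-informed style: a single composite loss penalises the mismatch with the initial condition, the zero-flux boundary conditions and the PDE residual at collocation points. As a sketch of the objective that the `sol_loss` tensor defined later appears to implement (the equal weighting of the terms is simply read off that code): $$\mathcal{L}(\theta) = \sum_{i}\left|u_\theta(t_0^i, x_0^i) - u_0^i\right|^2 + \sum_{j}\left(\left|\partial_x u_\theta(t_b^j, x_{lb})\right|^2 + \left|\partial_x u_\theta(t_b^j, x_{ub})\right|^2\right) + \sum_{m}\left|\partial_t u_\theta(t_f^m, x_f^m) - k\,\partial_{xx} u_\theta(t_f^m, x_f^m)\right|^2$$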
###Code
# Layers
u_layers = [2, 50, 50, 50, 50, 1]
pde_layers = [3, 100, 100, 1]
layers = [2, 50, 50, 50, 50, 1]
# tf placeholders for Identification
t_tf = tf.placeholder(tf.float32, shape=[None, 1])
x_tf = tf.placeholder(tf.float32, shape=[None, 1])
u_tf = tf.placeholder(tf.float32, shape=[None, 1])
t_tf, x_tf, u_tf
def initialize_NN(layers):
weights = []
biases = []
num_layers = len(layers)
for l in range(0, num_layers - 1):
W = xavier_init(size=[layers[l], layers[l + 1]])
b = tf.Variable(tf.zeros([1, layers[l + 1]], dtype=tf.float32),
dtype=tf.float32)
weights.append(W)
biases.append(b)
return weights, biases
def xavier_init(size):
in_dim = size[0]
out_dim = size[1]
xavier_stddev = np.sqrt(2 / (in_dim + out_dim))
return tf.Variable(tf.truncated_normal([in_dim, out_dim],
stddev=xavier_stddev,
dtype=tf.float32),
dtype=tf.float32)
def neural_net(X, weights, biases):
num_layers = len(weights) + 1
H = X
for l in range(0, num_layers - 2):
W = weights[l]
b = biases[l]
H = tf.sin(tf.add(tf.matmul(H, W), b))
W = weights[-1]
b = biases[-1]
Y = tf.add(tf.matmul(H, W), b)
return Y
weights, biases = initialize_NN(layers)
# weights, biases
# load weights and biases
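# NOTE: __NAME is assumed to be defined in an earlier (not shown) cell as the
# directory holding the previously saved weights/biases for this run.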
with open(__NAME + '/weights.pkl', 'rb') as db_file:
W_pkl = pkl.load(db_file)
with open(__NAME + '/biases.pkl', 'rb') as db_file:
B_pkl = pkl.load(db_file)
W = []
B = []
for w, b in zip(W_pkl, B_pkl):
W.append(tf.Variable(w))
B.append(tf.Variable(b))
weights = W
biases = B
lb_tf = tf.placeholder(tf.float32, shape=[2])
ub_tf = tf.placeholder(tf.float32, shape=[2])
# tf placeholders for Solution
t0_tf = tf.placeholder(tf.float32, shape=[None, 1])
x0_tf = tf.placeholder(tf.float32, shape=[None, 1])
u0_tf = tf.placeholder(tf.float32, shape=[None, 1])
t_lb_tf = tf.placeholder(tf.float32, shape=[None, 1])
x_lb_tf = tf.placeholder(tf.float32, shape=[None, 1])
t_ub_tf = tf.placeholder(tf.float32, shape=[None, 1])
x_ub_tf = tf.placeholder(tf.float32, shape=[None, 1])
u_x_ub_tf = tf.placeholder(tf.float32, shape=[None, 1])
u_x_lb_tf = tf.placeholder(tf.float32, shape=[None, 1])
t_f_tf = tf.placeholder(tf.float32, shape=[None, 1])
x_f_tf = tf.placeholder(tf.float32, shape=[None, 1])
def sol_net_u(t, x):
X = tf.concat([t, x], 1)
H = 2.0 * (X - lb_tf) / (ub_tf - lb_tf) - 1.0
u = neural_net(H, weights, biases)
u_x = tf.gradients(u, x)[0]
return u, u_x
def sol_net_f(t, x):
u, u_x = sol_net_u(t, x)
u_t = tf.gradients(u, t)[0]
u_xx = tf.gradients(u_x, x)[0]
f = u_t - k * u_xx
return f
# tf graphs for Solution
u0_pred, u_x0_pred = sol_net_u(t0_tf, x0_tf)
u_lb_pred, u_x_lb_pred = sol_net_u(t_lb_tf, x_lb_tf)
u_ub_pred, u_x_ub_pred = sol_net_u(t_ub_tf, x_ub_tf)
sol_f_pred = sol_net_f(t_f_tf, x_f_tf)
# loss for Solution
sol_loss = tf.reduce_sum(tf.square(u0_tf - u0_pred)) + \
tf.reduce_sum(tf.square(u_x_lb_tf - u_x_lb_pred)) + \
tf.reduce_sum(tf.square(u_x_ub_tf - u_x_ub_pred)) + \
tf.reduce_sum(tf.square(sol_f_pred))
# Optimizer for Solution
sol_optimizer = tf.contrib.opt.ScipyOptimizerInterface(
sol_loss,
var_list = weights + biases,
method='L-BFGS-B',
options={
'maxiter': 50000,
'maxfun': 50000,
'maxcor': 50,
'maxls': 50,
'ftol': 1.0 * np.finfo(float).eps
})
adam_optimizer = tf.train.AdamOptimizer()
sol_train_op_Adam = adam_optimizer.minimize(
sol_loss,
var_list= weights + biases)
# tf session
sess = tf.Session(config=tf.ConfigProto(
allow_soft_placement=True, log_device_placement=True))
init = tf.global_variables_initializer()
sess.run(init)
###Output
Device mapping:
/job:localhost/replica:0/task:0/device:GPU:0 -> device: 0, name: NVIDIA GeForce GTX 1650 Ti, pci bus id: 0000:01:00.0, compute capability: 7.5
###Markdown
Training Prepare data
###Code
lb = np.array([0.0, 0.0])
ub = np.array([2.0, 1.0])
N = 10
fig = plt.figure()
ax = fig.gca()
ax.set_xlim(lb[0], ub[0])
ax.set_ylim(lb[1], ub[1])
ax.set_xticks(np.arange(lb[0],ub[0],(ub[0] - lb[0])/N))
ax.set_yticks(np.arange(lb[1],ub[1],(ub[1] - lb[1])/N))
plt.grid()
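# lhs(2, N) draws N Latin-hypercube samples in the unit square [0,1]^2;
# scaling by (ub - lb) and shifting by lb maps them onto the (t, x) domain [0,2] x [0,1]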
l = lb + (ub - lb) * lhs(2, N)
plt.scatter(l[:, 0], l[:, 1], color="r", label="lhs")
plt.title("Latin Hypercube Sampling\nN=10")
ax.set_xlabel('$t$')
ax.set_ylabel('$x$')
fig.set_figheight(3.8)
fig.set_figwidth(6)
plt.tight_layout()
N0 = 200
N_b = 300
N_f = 20000
(N0, N_b, N_f)
t_data = np.linspace(lb[0], ub[0], N_b)[:, None]
x_data = np.linspace(lb[1], ub[1], N0)[:, None]
T_sol, X_sol = np.meshgrid(t_data, x_data)
# U_sol = u_data
X_sol_star = np.hstack(
(T_sol.flatten()[:, None],
X_sol.flatten()[:, None])
)
# U_sol_star = U_sol.flatten()[:, None]
print(X_sol_star.shape, X_sol_star[0:10], sep='\n')
L = 1
u_data = np.sin(np.pi * x_data / L)
X0 = np.hstack((T_sol[:, 0:1], X_sol[:, 0:1])) # left boundary
X_lb = np.hstack((T_sol[0:1, :].T, X_sol[0:1, :].T)) # lower boundary
X_ub = np.hstack((T_sol[0:1, :].T, np.repeat(ub[1], t_data.shape[0])[:, None])) # upper boundary
# shuffled initial boundary data (left boundary)
idx_x = np.random.choice(x_data.shape[0], N0, replace=False)
X0_train = X0[idx_x, :]
u0_train = u_data[idx_x, 0:1]
# shuffle time data
idx_t = np.random.choice(t_data.shape[0], N_b, replace=False)
tb_train = t_data[idx_t, :]
X_f_train = lb + (ub - lb) * lhs(2, N_f)
fig, ax = plt.subplots()
ax.set_xlim(lb[0] -0.1, ub[0])
ax.set_ylim(lb[1] - 0.4, ub[1] + 0.4)
fig.set_figheight(3.2)
fig.set_figwidth(6)
ax.scatter(X0_train[:, 0], X0_train[:, 1], s=4, marker='.')
ax.scatter(tb_train[:, 0], np.repeat(lb[1], N_b), s=4, marker='.')
ax.scatter(tb_train[:, 0], np.repeat(ub[1], N_b), s=4, marker='.')
ax.scatter(X_f_train[:, 0], X_f_train[:, 1], s=4, marker='.', edgecolors='none')
# ax.imshow(u0_train, extent=(t_data[0, 0], t_data[1, 0], x0_train.max(), x0_train.min()), aspect='auto')
plt.tight_layout()
fig, ax = plt.subplots()
ax.set_xlabel('$x$')
ax.set_ylabel('$u$')
ax.plot(x_data[:, 0], u_data[:, 0:1])
fig.set_figheight(3.2)
fig.set_figwidth(6)
plt.tight_layout()
def callback(loss):
print('Loss: %e' % (loss))
tf_dict = {
lb_tf: lb,
ub_tf: ub,
t0_tf: X0_train[:, 0:1],
x0_tf: X0_train[:, 1:2],
u0_tf: u0_train,
t_lb_tf: X_lb[:, 0:1],
x_lb_tf: X_lb[:, 1:2],
t_ub_tf: X_ub[:, 0:1],
x_ub_tf: X_ub[:, 1:2],
u_x_lb_tf: np.repeat(0, N_b)[:, None],
u_x_ub_tf: np.repeat(0, N_b)[:, None],
t_f_tf: X_f_train[:, 0:1],
x_f_tf: X_f_train[:, 1:2]
}
fig, ax = plt.subplots()
ax.set_xlabel('$x$')
ax.set_ylabel('$u$')
ax.scatter(X0_train[:, 1:2], u0_train)
fig.set_figheight(3.2)
fig.set_figwidth(6)
plt.tight_layout()
start_time = time.time()
it = 0
end = False
while not(end):
sess.run(sol_train_op_Adam, tf_dict)
# Print
if it % 10 == 0:
elapsed = time.time() - start_time
loss_value = sess.run(sol_loss, tf_dict)
print('It: %d, Loss: %.3e, Time: %.2f' %
(it, loss_value, elapsed))
start_time = time.time()
if loss_value < 5 * 10**(-1):
end = True
it = it + 1
sol_optimizer.minimize(sess,
feed_dict=tf_dict,
fetches=[sol_loss],
loss_callback=callback)
sess.run(sol_loss, feed_dict=tf_dict)
with open(__NAME + '/weights.pkl', 'wb') as db_file:
pkl.dump(obj=sess.run(weights), file=db_file)
with open(__NAME + '/biases.pkl', 'wb') as db_file:
pkl.dump(obj=sess.run(biases), file=db_file)
u_pred = sess.run(u0_pred, {
lb_tf: lb,
ub_tf: ub,
t0_tf: X_sol_star[:, 0:1],
x0_tf: X_sol_star[:, 1:2]
})
fig = plt.figure(figsize=(4*1.75,4), dpi=200)
ax = fig.gca()
ax.set_xlim(lb[0], ub[0])
ax.set_ylim(lb[1], ub[1])
# plt.subplots_adjust(bottom=0.17)
# plt.subplots_adjust(left=0.17)
plt.title('T')
ax.set_xlabel('$t$')
ax.set_ylabel('$x$')
plt.pcolormesh(np.reshape(X_sol_star[:, 0], (N0, -1)),
np.reshape(X_sol_star[:, 1], (N0, -1)),
np.reshape(u_pred[:, 0], (N0, -1)),
shading='gouraud', cmap='jet')
plt.colorbar()
plt.tight_layout()
# plt.legend()
fig.savefig('Figures\\Heat 2.png')
t = np.reshape(X_sol_star[:, 0], (N0, -1))
x = np.reshape(X_sol_star[:, 1], (N0, -1))
u = np.reshape(u_pred[:, 0], (N0, -1))
x_init = x[:, 0]
u_init = u[:, 0]
fig = plt.figure(figsize=(4*1.75,4), dpi=200)
ax = fig.gca()
ax.set_xlim(lb[1], ub[1])
ax.yaxis.grid(color='gainsboro', linestyle='dotted', linewidth=1.5)
ax.xaxis.grid(color='gainsboro', linestyle='dotted', linewidth=0.8)
ax.axhline(0,linestyle='dotted', color='grey')
ax.spines['right'].set_color('none')
ax.spines['top'].set_color('none')
plt.title('t = 0')
ax.set_xlabel('$x$')
ax.set_ylabel('$T$')
plt.tight_layout()
ln, = ax.plot(x_init, u_init)
def update(frame):
plt.title('t = {time:.2f}'.format(time = t[0, frame]))
ln.set_data(x[:, frame], u[:, frame])
ani = FuncAnimation(fig, update, list(range(0, N_b)))
ani.event_source.stop()
# anim.event_source.stop()
writer = PillowWriter(fps=25)
ani.save("Figures\\Heat 2.gif", writer=writer)
# ani.event_source.stop()
###Output
_____no_output_____ |
src/jupyter/detectedPuncta_csv2bild.ipynb | ###Markdown
Master Frame: Write a *.bild file to display the detected puncta in ChimeraX
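For context: a ChimeraX BILD file is a plain-text graphics format in which each line is a drawing command such as `.color` or `.sphere`. The conversion below is done by `icx.pandasData2bildFile`; the following is only a hedged, minimal sketch of what such a writer could look like (the fixed radius and colour, and the function name, are assumptions, not the library's actual implementation):
###Code
# Hypothetical sketch of writing detected puncta as BILD spheres (not the icx implementation)
def puncta_to_bild_sketch(df, out_path, radius=2.0):
    with open(out_path, 'w') as f:
        f.write(".color 1 0 0\n")  # draw subsequent shapes in red
        for _, row in df.iterrows():
            f.write(".sphere {} {} {} {}\n".format(row['x'], row['y'], row['z'], radius))
###Output
_____no_output_____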
###Code
filepath = path+'/'+outputDataFolder+'/'+master_outputDataFolder+'/puncta_01.csv'
detection_data = pd.read_csv(filepath,header=0)
detection_data.columns = ["x","y","z","A"]
print(len(detection_data))
detection_data[0:5]
print(filepath)
icx.pandasData2bildFile(detection_data,filepath+".bild")
###Output
168
/Users/johannesschoeneberg/Dropbox/pylattice_testData/imaging_data//./output/output_560/puncta_01.csv
###Markdown
Movie: Write a *.bild file to display the detected puncta in ChimeraX
###Code
for i in range(1,movieLength+1):
filename = 'puncta_'+"{:02}".format(i) +'.csv'
filepath = path+'/'+outputDataFolder+'/'+master_outputDataFolder+'/'+filename
detection_data = pd.read_csv(filepath,header=0)
detection_data.columns = ["x","y","z","A"]
print("number of detections for {}: {}".format(filename,len(detection_data)))
icx.pandasData2bildFile(detection_data,filepath+".bild")
print(filepath+".bild")
###Output
number of detections for puncta_01.csv: 168
/Users/johannesschoeneberg/Dropbox/pylattice_testData/imaging_data//./output/output_560/puncta_01.csv.bild
number of detections for puncta_02.csv: 169
/Users/johannesschoeneberg/Dropbox/pylattice_testData/imaging_data//./output/output_560/puncta_02.csv.bild
number of detections for puncta_03.csv: 164
/Users/johannesschoeneberg/Dropbox/pylattice_testData/imaging_data//./output/output_560/puncta_03.csv.bild
number of detections for puncta_04.csv: 143
/Users/johannesschoeneberg/Dropbox/pylattice_testData/imaging_data//./output/output_560/puncta_04.csv.bild
number of detections for puncta_05.csv: 153
/Users/johannesschoeneberg/Dropbox/pylattice_testData/imaging_data//./output/output_560/puncta_05.csv.bild
number of detections for puncta_06.csv: 151
/Users/johannesschoeneberg/Dropbox/pylattice_testData/imaging_data//./output/output_560/puncta_06.csv.bild
number of detections for puncta_07.csv: 153
/Users/johannesschoeneberg/Dropbox/pylattice_testData/imaging_data//./output/output_560/puncta_07.csv.bild
number of detections for puncta_08.csv: 148
/Users/johannesschoeneberg/Dropbox/pylattice_testData/imaging_data//./output/output_560/puncta_08.csv.bild
number of detections for puncta_09.csv: 156
/Users/johannesschoeneberg/Dropbox/pylattice_testData/imaging_data//./output/output_560/puncta_09.csv.bild
number of detections for puncta_10.csv: 160
/Users/johannesschoeneberg/Dropbox/pylattice_testData/imaging_data//./output/output_560/puncta_10.csv.bild
###Markdown
Slave Movie: Write a *.bild file to display the detected puncta in ChimeraX
###Code
for i in range(1,movieLength+1):
filename = 'puncta_'+"{:02}".format(i) +'.csv'
filepath = path+'/'+outputDataFolder+'/'+slave_outputDataFolder+'/'+filename
detection_data = pd.read_csv(filepath,header=0)
detection_data.columns = ["x","y","z","A"]
print("number of detections for {}: {}".format(filename,len(detection_data)))
icx.pandasData2bildFile(detection_data,filepath+".bild")
print(filepath+".bild")
###Output
number of detections for puncta_01.csv: 168
/Users/johannesschoeneberg/Dropbox/pylattice_testData/imaging_data//./output/output_488/puncta_01.csv.bild
number of detections for puncta_02.csv: 169
/Users/johannesschoeneberg/Dropbox/pylattice_testData/imaging_data//./output/output_488/puncta_02.csv.bild
number of detections for puncta_03.csv: 164
/Users/johannesschoeneberg/Dropbox/pylattice_testData/imaging_data//./output/output_488/puncta_03.csv.bild
number of detections for puncta_04.csv: 143
/Users/johannesschoeneberg/Dropbox/pylattice_testData/imaging_data//./output/output_488/puncta_04.csv.bild
number of detections for puncta_05.csv: 153
/Users/johannesschoeneberg/Dropbox/pylattice_testData/imaging_data//./output/output_488/puncta_05.csv.bild
number of detections for puncta_06.csv: 151
/Users/johannesschoeneberg/Dropbox/pylattice_testData/imaging_data//./output/output_488/puncta_06.csv.bild
number of detections for puncta_07.csv: 153
/Users/johannesschoeneberg/Dropbox/pylattice_testData/imaging_data//./output/output_488/puncta_07.csv.bild
number of detections for puncta_08.csv: 148
/Users/johannesschoeneberg/Dropbox/pylattice_testData/imaging_data//./output/output_488/puncta_08.csv.bild
number of detections for puncta_09.csv: 156
/Users/johannesschoeneberg/Dropbox/pylattice_testData/imaging_data//./output/output_488/puncta_09.csv.bild
number of detections for puncta_10.csv: 160
/Users/johannesschoeneberg/Dropbox/pylattice_testData/imaging_data//./output/output_488/puncta_10.csv.bild
|
gui/esempi-bqplot/Applications/Visualizing the US Elections.ipynb | ###Markdown
Visualizing the 2016 General Election Polls
###Code
from __future__ import print_function
import pandas as pd
import numpy as np
from ipywidgets import VBox, HBox
import os
codes = pd.read_csv(os.path.abspath('../data_files/state_codes.csv'))
try:
from pollster import Pollster
except ImportError:
print('Pollster not found. Installing Pollster..')
try:
import subprocess
subprocess.check_call(['pip', 'install', 'pollster==0.1.6'])
except:
print("The pip installation failed. Please manually install Pollster and re-run this notebook.")
def get_candidate_data(question):
clinton, trump, undecided, other = 0., 0., 0., 0.
for candidate in question['subpopulations'][0]['responses']:
if candidate['last_name'] == 'Clinton':
clinton = candidate['value']
elif candidate['last_name'] == 'Trump':
trump = candidate['value']
elif candidate['choice'] == 'Undecided':
undecided = candidate['value']
else:
other = candidate['value']
return clinton, trump, other, undecided
def get_row(question, partisan='Nonpartisan', end_date='2016-06-21'):
# if question['topic'] != '2016-president':
if ('2016' in question['topic']) and ('Presidential' in question['topic']):
hillary, donald, other, undecided = get_candidate_data(question)
return [{'Name': question['name'], 'Partisan': partisan, 'State': question['state'],
'Date': np.datetime64(end_date), 'Trump': donald, 'Clinton': hillary, 'Other': other,
'Undecided': undecided}]
else:
return
def analyze_polls(polls):
global data
for poll in polls:
for question in poll.questions:
resp = get_row(question, partisan=poll.partisan, end_date=poll.end_date)
if resp is not None:
data = data.append(resp)
return
try:
from pollster import Pollster
pollster = Pollster()
# Getting data from Pollster. This might take a second.
raw_data = pollster.charts(topic='2016-president')
data = pd.DataFrame(columns=['Name', 'Partisan', 'State', 'Date', 'Trump', 'Clinton', 'Other',
'Undecided'])
for i in raw_data:
analyze_polls(i.polls())
except:
raise ValueError('Please install Pollster and run the functions above')
def get_state_party(code):
state = codes[codes['FIPS']==code]['USPS'].values[0]
if data[data['State']==state].shape[0] == 0:
return None
polls = data[(data['State']==state) & (data['Trump'] > 0.) & (data['Clinton'] > 0.)].sort_values(by='Date')
if polls.shape[0] == 0:
return None
if (polls.tail(1)['Trump'] > polls.tail(1)['Clinton']).values[0]:
return 'Republican'
else:
return 'Democrat'
def get_color_data():
color_data = {}
for i in codes['FIPS']:
color_data[i] = get_state_party(i)
return color_data
def get_state_data(code):
state = codes[codes['FIPS']==code]['USPS'].values[0]
if data[data['State']==state].shape[0] == 0:
return None
polls = data[(data['State']==state) & (data['Trump'] > 0.) & (data['Clinton'] > 0.)].sort_values(by='Date')
return polls
from bqplot import *
from ipywidgets import Layout
dt_x = DateScale()
sc_y = LinearScale()
time_series = Lines(scales={'x': dt_x, 'y': sc_y}, colors=['#E91D0E', '#2aa1ec'], marker='circle')
ax_x = Axis(scale=dt_x, label='Date')
ax_y = Axis(scale=sc_y, orientation='vertical', label='Percentage')
ts_fig = Figure(marks=[time_series], axes=[ax_x, ax_y], title='General Election - State Polls',
layout=Layout(min_width='650px', min_height='400px'))
sc_geo = AlbersUSA()
sc_c1 = OrdinalColorScale(domain=['Democrat', 'Republican'], colors=['#2aa1ec', '#E91D0E'])
color_data = get_color_data()
map_styles = {'color': color_data,
'scales': {'projection': sc_geo, 'color': sc_c1}, 'colors': {'default_color': 'Grey'}}
axis = ColorAxis(scale=sc_c1)
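# The time-series figure is attached as the map's tooltip; the hover callback
# defined below repopulates it with the hovered state's polls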
states_map = Map(map_data=topo_load('map_data/USStatesMap.json'), tooltip=ts_fig, **map_styles)
map_fig = Figure(marks=[states_map], axes=[axis],title='General Election Polls - State Wise')
def hover_callback(name, value):
polls = get_state_data(value['data']['id'])
if polls is None or polls.shape[0] == 0:
time_series.y = [0.]
return
time_series.x, time_series.y = polls['Date'].values.astype(np.datetime64), [polls['Trump'].values, polls['Clinton'].values]
ts_fig.title = str(codes[codes['FIPS']==value['data']['id']]['Name'].values[0]) + ' Polls - Presidential Election'
states_map.on_hover(hover_callback)
national = data[(data['State']=='US') & (data['Trump'] > 0.) & (data['Clinton'] > 0.)].sort_values(by='Date')
dt_x = DateScale()
sc_y = LinearScale()
clinton_scatter = Scatter(x=national['Date'].values.astype(np.datetime64), y=national['Clinton'],
scales={'x': dt_x, 'y': sc_y},
colors=['#2aa1ec'])
trump_scatter = Scatter(x=national['Date'].values.astype(np.datetime64), y=national['Trump'],
scales={'x': dt_x, 'y': sc_y},
colors=['#E91D0E'])
ax_x = Axis(scale=dt_x, label='Date', tick_format='%b-%Y', num_ticks=8)
ax_y = Axis(scale=sc_y, orientation='vertical', label='Percentage')
scat_fig = Figure(marks=[clinton_scatter, trump_scatter], axes=[ax_x, ax_y], title='General Election - National Polls')
###Output
_____no_output_____
###Markdown
Hover on the map to visualize the poll data for that state.
###Code
VBox([map_fig, scat_fig])
###Output
_____no_output_____
###Markdown
Visualizing the County Results of the 2008 Elections
###Code
county_data = pd.read_csv(os.path.abspath('../data_files/2008-election-results.csv'))
winner = np.array(['McCain'] * county_data.shape[0])
winner[(county_data['Obama'] > county_data['McCain']).values] = 'Obama'
sc_geo_county = AlbersUSA()
sc_c1_county = OrdinalColorScale(domain=['McCain', 'Obama'], colors=['Red', 'DeepSkyBlue'])
color_data_county = dict(zip(county_data['FIPS'].values.astype(int), list(winner)))
map_styles_county = {'color': color_data_county,
'scales': {'projection': sc_geo_county, 'color': sc_c1_county}, 'colors': {'default_color': 'Grey'}}
axis_county = ColorAxis(scale=sc_c1_county)
county_map = Map(map_data=topo_load('map_data/USCountiesMap.json'), **map_styles_county)
county_fig = Figure(marks=[county_map], axes=[axis_county],title='US Elections 2008 - Example',
layout=Layout(min_width='800px', min_height='550px'))
names_sc = OrdinalScale(domain=['Obama', 'McCain'])
vote_sc_y = LinearScale(min=0, max=100.)
names_ax = Axis(scale=names_sc, label='Candidate')
vote_ax = Axis(scale=vote_sc_y, orientation='vertical', label='Percentage')
vote_bars = Bars(scales={'x': names_sc, 'y': vote_sc_y}, colors=['#2aa1ec', '#E91D0E'])
bar_fig = Figure(marks=[vote_bars], axes=[names_ax, vote_ax], title='Vote Margin',
layout=Layout(min_width='600px', min_height='400px'))
def county_hover(name, value):
if (county_data['FIPS'] == value['data']['id']).sum() == 0:
bar_fig.title = ''
vote_bars.y = [0., 0.]
return
votes = county_data[county_data['FIPS'] == value['data']['id']]
dem_vote = float(votes['Obama %'].values[0])
rep_vote = float(votes['McCain %'].values[0])
vote_bars.x, vote_bars.y = ['Obama', 'McCain'], [dem_vote, rep_vote]
bar_fig.title = 'Vote % - ' + value['data']['name']
county_map.on_hover(county_hover)
county_map.tooltip = bar_fig
###Output
_____no_output_____
###Markdown
Hover on the map to visualize the voting percentage for each candidate in that county
###Code
county_fig
###Output
_____no_output_____ |
practiceInstances/003MnistHandwritingRecognition/practice_003_mnist_handwriting_recognition.ipynb | ###Markdown
Bring in modules
###Code
import tensorflow as tf
print("tensorflow version", tf.__version__)
import sklearn
print("sklearn version", sklearn.__version__)
import numpy as np
print("numpy version", np.__version__)
###Output
tensorflow version 2.7.0
sklearn version 1.0.1
numpy version 1.20.1
###Markdown
Configs
###Code
# With numpy, when a value is printed display more values per line
# https://stackoverflow.com/questions/21971449/how-do-i-increase-the-cell-width-of-the-jupyter-ipython-notebook-in-my-browser
np.set_printoptions(linewidth=5000)
from IPython.core.display import display, HTML
display(HTML("<style>.container { width:100% !important; }</style>"))
# In Pandas, display more rows and columns
# https://stackoverflow.com/a/11711637/4375369
# pd.set_option('display.max_rows', 100)
# pd.set_option('display.max_columns', 100)
RANDOM_SEED_FOR_REPRODUCIBILITY = 777
###Output
_____no_output_____
###Markdown
Get raw data
###Code
(X_train_raw, y_train_raw), (X_test_raw, y_test_raw) = tf.keras.datasets.mnist.load_data()
# X_train_raw, y_train_raw, X_test_raw, y_test_raw
###Output
_____no_output_____
###Markdown
Shuffle raw data
###Code
# https://scikit-learn.org/stable/modules/generated/sklearn.utils.shuffle.html
X_train_raw, y_train_raw = sklearn.utils.shuffle(X_train_raw, y_train_raw, random_state=RANDOM_SEED_FOR_REPRODUCIBILITY)
X_test_raw, y_test_raw = sklearn.utils.shuffle(X_test_raw, y_test_raw, random_state=RANDOM_SEED_FOR_REPRODUCIBILITY)
###Output
_____no_output_____
###Markdown
Normalize example data
###Code
# https://www.tensorflow.org/api_docs/python/tf/math/reduce_max
maximum_value = tf.math.reduce_max(X_train_raw)
assert maximum_value == 255, "Maximum value is expected to be 255 but got {}".format(maximum_value)
X_train_normalized = X_train_raw / maximum_value
X_test_normalized = X_test_raw / maximum_value
###Output
_____no_output_____
###Markdown
One-hot encode label data
###Code
# https://www.tensorflow.org/api_docs/python/tf/keras/utils/to_categorical
y_train_one_hot_encoded = tf.keras.utils.to_categorical(y_train_raw)
y_test_one_hot_encoded = tf.keras.utils.to_categorical(y_test_raw)
###Output
_____no_output_____
###Markdown
Compare Raw to Modified
###Code
X_train_raw[0], X_train_normalized[0]
X_test_raw[0], X_test_normalized[0]
y_train_raw[0], y_train_one_hot_encoded[0]
y_test_raw[0], y_test_one_hot_encoded[0]
###Output
_____no_output_____
###Markdown
Expand dimensions of example data to be used by convolutional network
###Code
# https://www.tensorflow.org/api_docs/python/tf/expand_dims
X_train_expand_dims = tf.expand_dims(X_train_normalized, axis=-1)
X_test_expand_dims = tf.expand_dims(X_test_normalized, axis=-1)
X_train_normalized.ndim, X_train_expand_dims.ndim, X_test_normalized.ndim, X_test_expand_dims.ndim
###Output
_____no_output_____
###Markdown
Accept the data for use
###Code
X_train = X_train_expand_dims
y_train = y_train_one_hot_encoded
X_test = X_test_expand_dims
y_test = y_test_one_hot_encoded
###Output
_____no_output_____
###Markdown
Create the model architecture and train
###Code
tf.random.set_seed(RANDOM_SEED_FOR_REPRODUCIBILITY)
model_001 = tf.keras.Sequential([
tf.keras.layers.Conv2D(100, (3, 3), padding="same", activation=tf.keras.activations.relu),
tf.keras.layers.MaxPooling2D((2, 2)),
tf.keras.layers.Conv2D(100, (3, 3), padding="same", activation=tf.keras.activations.relu),
tf.keras.layers.MaxPooling2D((2, 2)),
tf.keras.layers.Conv2D(100, (3, 3), padding="same", activation=tf.keras.activations.relu),
tf.keras.layers.MaxPooling2D((2, 2)),
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(100, activation=tf.keras.activations.relu),
tf.keras.layers.Dense(100, activation=tf.keras.activations.relu),
tf.keras.layers.Dense(100, activation=tf.keras.activations.relu),
tf.keras.layers.Dense(10, activation=tf.keras.activations.softmax)
])
model_001.compile(
loss=tf.keras.losses.CategoricalCrossentropy(),
optimizer=tf.keras.optimizers.Adam(learning_rate=0.001),
metrics=[ "accuracy" ]
)
def learning_rate_schedule(epoch, current_learning_rate):
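    # pass-through schedule used with LearningRateScheduler: returns the learning rate unchanged each epoch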
return current_learning_rate
model_001.fit(
X_train,
y_train,
epochs=20,
validation_data=(X_test, y_test),
callbacks=[
tf.keras.callbacks.LearningRateScheduler(learning_rate_schedule),
tf.keras.callbacks.EarlyStopping('val_accuracy', patience=5, restore_best_weights=True)
]
)
model_001.evaluate(X_test, y_test)
###Output
313/313 [==============================] - 3s 8ms/step - loss: 0.0220 - accuracy: 0.9945
|
movie_revenue.ipynb | ###Markdown
Analysis of movie revenue dataset
Importing data and initial insights:
###Code
#import libraries
import matplotlib.pyplot as plt
import seaborn as sns
import pandas as pd
import numpy as np
from scipy import stats
import statsmodels.api as sm
from sklearn.linear_model import LinearRegression
from sklearn.tree import DecisionTreeRegressor
from sklearn.ensemble import RandomForestRegressor
from sklearn.model_selection import train_test_split
from sklearn.metrics import mean_absolute_error
import random
plt.style.use("ggplot")
#read in data
df = pd.read_csv("tmdb_5000_movies.csv")
#removing columns which are either useless or too vague
df.drop(["overview", "popularity", "homepage", "status", "original_title", "tagline"], axis=1, inplace = True)
#adding ratio column, removing inf and nan values
df["bdgt_2_rev"] = df["revenue"] / df["budget"]
df = df.replace([np.inf, -np.inf], np.nan).dropna()
#films making over 100 times their budget are rare outliers that may skew the data, so we remove them
df = df[df["bdgt_2_rev"] <= 100]
df.head()
#inital visualizations to look into data
fig = plt.figure()
#plotting budget against revenue
ax = fig.add_subplot(2, 2, 1)
plt.plot(df["budget"], df["revenue"], marker = ".", linestyle = "")
plt.xlabel("Film Budget")
plt.ylabel("Film Revenue")
#plotting average film score against revenue
ax = fig.add_subplot(2, 2, 2)
plt.plot(df["vote_average"], df["revenue"], marker = ".", linestyle = "")
plt.xlabel("Average Film Score")
plt.ylabel("Film Revenue")
#plotting runtime against revenue
ax = fig.add_subplot(2, 2, 3)
plt.plot(df["runtime"], df["revenue"], marker = ".", linestyle = "")
plt.xlabel("Runtime")
plt.ylabel("Film Revenue")
#plotting profit ratio against average film score
ax = fig.add_subplot(2, 2, 4)
plt.plot(df["bdgt_2_rev"], df["vote_average"], marker = ".", linestyle = "")
plt.xlabel("Budget to Revenue ratio")
plt.ylabel("Average Film Score")
plt.tight_layout()
plt.show()
###Output
_____no_output_____
###Markdown
Question: Can we predict movie revenue based on budget, score and runtime?
To predict this, we will take a closer look at the approximate relationships between movie revenue and some yet-to-be-determined numerical predictors.
Looking at useful predictors for our models
###Code
#look at all possible correlations in our dataframe
df.corr()
###Output
_____no_output_____
###Markdown
From this, it would appear that it is worth at least considering all variables except id (which could never have any meaningful, non-coincidental correlation with revenue) and bdgt_2_rev (which is derived from the budget and revenue columns themselves).
###Code
"""
Function takes in a list and a number of bins and returns binned_means
Bin means is the mean value in a bin of width (list length/number of bins)
When plotted, this gives us a better idea of the shape of the relationship between variables
(difficult to tell with this many points on a graph so this gives us an idea of point density)
"""
def bin_points(exp_list, bin_nums):
#get length of list and length of bins
length = len(exp_list)
bin_len = length // bin_nums
#initialize empty list to return means
binned_means = []
for i in range(bin_nums):
#make (/reset) empty list of values in each bin
tmp_lst = []
for j in range(bin_len):
#index elements of input list and add to temp list
index = i * bin_len + j
tmp_lst.append(exp_list[index])
#get mean of temp list and add to main list
mean = sum(tmp_lst) / len(tmp_lst)
binned_means.append(mean)
return binned_means
#plotting binned predictor values against binned revenue values
rev_binned = bin_points(df["revenue"].tolist(), 40)
predictors = ["budget", "runtime", "vote_average", "vote_count"]
binned_pred = []
for lst in predictors:
binned = bin_points(df[lst].tolist(), 40)
binned_pred.append(binned)
fig = plt.figure()
#plotting budget against revenue
ax = fig.add_subplot(2, 2, 1)
plt.plot(binned_pred[0], rev_binned, marker = ".", linestyle = "")
plt.ylabel("Revenue")
plt.xlabel("Budget")
#plotting runtime score against revenue
ax = fig.add_subplot(2, 2, 2)
plt.plot(binned_pred[1], rev_binned, marker = ".", linestyle = "")
plt.ylabel("Revenue")
plt.xlabel("Runtime")
#plotting average vote against revenue
ax = fig.add_subplot(2, 2, 3)
plt.plot(binned_pred[2], rev_binned, marker = ".", linestyle = "")
plt.ylabel("Revenue")
plt.xlabel("Average Vote")
#plotting number of votes vote against revenue
ax = fig.add_subplot(2, 2, 4)
plt.plot(binned_pred[3], rev_binned, marker = ".", linestyle = "")
plt.ylabel("Revenue")
plt.xlabel("Numberof votes")
plt.tight_layout()
plt.show()
###Output
_____no_output_____
###Markdown
Method 1: Regression
We have looked at possible predictors for revenue, so we now start to fit a regression model.
Looking at the relationships, we can see that budget and number of votes appear to have approximately linear relationships with revenue, runtime possibly has some sort of polynomial relationship, and the average vote has no clear pattern, so we will no longer use it as a predictor.
Therefore, we now transform runtime to see if we can obtain an approximately linear relationship.
###Code
#enlarged view of revenue vs runtime
plt.plot(binned_pred[1], rev_binned, marker = "o", linestyle = "")
plt.ylabel("Revenue")
plt.xlabel("Runtime")
###Output
_____no_output_____
###Markdown
To try and find an appropriate polynomial with which to modify our data, we will fit polynomials of increasing degrees using the numpy.polyfit function. We will then assess suitability based on correlation values.
###Code
#empty list for correlation under modification
corrvals = []
for i in range(1, 10):
#empty list to store modified values based off regressed polynomial
f_runtime = []
#find best fitting polynomial of degree n and substitute in all runtime values, storing vals in list.
coeffs = np.polyfit(df["runtime"], df["revenue"], deg = i)
for j in df["runtime"]:
x = np.polyval(coeffs, j)
f_runtime.append(x)
#make new dataframe to store
df2 = pd.DataFrame(df["revenue"])
df2["runtime"] = df["runtime"]
df2["mod_runtime"] = f_runtime
#calculating and storing revenue vs modified runtime correlation
correlations = df2.corr()
correlations = correlations.iloc[0, 2]
corrvals.append(correlations)
print(corrvals)
plt.plot(corrvals, marker = "o", linestyle = "")
plt.ylabel("R-squared value")
labels = np.linspace(1, 10, 10, dtype = "str")
plt.xticks(np.linspace(0, 9, 10), labels)
plt.xlabel("Polynomial Degree")
###Output
_____no_output_____
###Markdown
Therefore, we can see that the largest improvement (by far) happens between polynomials of degrees 4 and 5. This is also ideal, as the polynomial we are using is not too heavily fitted (a degree-5 polynomial for approximately 3700 data points). As a result, we will plug all runtime values into this function and use the resultant values for our final model to give greater confidence. From now on, we will use the column f_runtime (f(runtime)) instead of runtime in our model.
###Code
#looking at our runtime function
coeffs = np.polyfit(df["runtime"], df["revenue"], deg = 5)
print(coeffs)
#making dataframe column for f(runtime)
f_runtime = []
for i in df["runtime"]:
x = np.polyval(coeffs, i)
f_runtime.append(x)
#adding f_runtime column to dataframe
df["f_runtime"] = f_runtime
#checking fixed runtime graph
binned_f_runtime = bin_points(df["f_runtime"].tolist(), 40)
plt.plot(binned_f_runtime, rev_binned, marker = "o", linestyle = "")
#making columns into lists for regression
budget = df["budget"].tolist()
vote_count = df["vote_count"].tolist()
revenue = df["revenue"].tolist()
#modifying lists for regression
y = revenue
x = [budget, vote_count, f_runtime]
#defining regression function and viewing model
def multi_lin_reg(y, x):
ones = np.ones(len(x[0]))
X = sm.add_constant(np.column_stack((x[0], ones)))
for i in x[1:]:
X = sm.add_constant(np.column_stack((i, X)))
results = sm.OLS(y, X).fit()
return results
multi_lin_reg(y, x).summary()
###Output
_____no_output_____
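###Markdown
As a rough sketch (not part of the original analysis), we can also look at the in-sample mean absolute error of this regression, which gives a crude baseline to compare against the decision tree fitted below. Note it is computed on the same data the model was fit on, so it is optimistic.
###Code
# in-sample MAE of the linear regression fit above
ols_results = multi_lin_reg(y, x)
print(np.abs(ols_results.resid).mean())
###Output
_____no_output_____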
###Markdown
Method 2: Decision tree
###Code
#selecting x and y vaues for model
y = df.revenue
movie_features = ["budget", "runtime", "vote_average", "vote_count"]
X = df[movie_features]
#splitting data into training and testing
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
#making model without improving parameters
movie_model_dt = DecisionTreeRegressor(random_state = 1)
movie_model_dt.fit(X_train, y_train)
pred_rev = movie_model_dt.predict(X_test)
standard_mae = mean_absolute_error(y_test, pred_rev)
print(standard_mae)
###Output
65745523.47486631
###Markdown
We can see this model could do with some improvement, so we try to optimize the number of leaf nodes as well as the depth of the tree.
###Code
#improving model
#define function to return mae for some number of leaf nodes and tree depth
def calc_mae(tree_depth, leaf_nodes, X_train, X_test, y_train, y_test):
temp_model = DecisionTreeRegressor(max_depth=tree_depth, max_leaf_nodes=leaf_nodes, random_state=0)
temp_model.fit(X_train, y_train)
temp_predictions = temp_model.predict(X_test)
mae = mean_absolute_error(y_test, temp_predictions)
return mae
#finding optimal values for the number of leaf nodes and the depth of the tree
max_leaf_nodes_vals = np.linspace(5, 200, 195, dtype=int).tolist()
max_tree_depth_vals = np.linspace(2, 12, 10, dtype=int).tolist()
mae_vals = []
node_vals = []
depth_vals = []
for leaf_node in max_leaf_nodes_vals:
for depth in max_tree_depth_vals:
node_vals.append(leaf_node)
depth_vals.append(depth)
mae = calc_mae(depth, leaf_node, X_train, X_test, y_train, y_test)
mae_vals.append(mae)
#find min mae and corresponding model parameters which produce it
min_mae = min(mae_vals)
index = mae_vals.index(min_mae)
optimal_leaves = node_vals[index]
optimal_depth = depth_vals[index]
print(f"An optimal mean absolute error of {min_mae} is produced when: \n- There are no more than {optimal_leaves} leaf nodes \n- The tree has depth of no more than {optimal_depth}")
#final model
movie_model_dt = DecisionTreeRegressor(max_depth = optimal_depth, random_state = 1, max_leaf_nodes = optimal_leaves)
movie_model_dt.fit(X_train, y_train)
pred_rev = movie_model_dt.predict(X_test)
mad = mean_absolute_error(y_test, pred_rev)
print(mad)
###Output
51137260.06882424
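###Markdown
RandomForestRegressor is imported above but never used; as an optional follow-up sketch (not part of the original analysis), an ensemble of trees can be fitted on the same train/test split for comparison with the single tuned tree.
###Code
# optional comparison: random forest on the same split (n_estimators is an arbitrary choice)
movie_model_rf = RandomForestRegressor(n_estimators=200, random_state=1)
movie_model_rf.fit(X_train, y_train)
print(mean_absolute_error(y_test, movie_model_rf.predict(X_test)))
###Output
_____no_output_____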
|
Modulo1/.ipynb_checkpoints/3. Tipo de Datos, operadores y variables-checkpoint.ipynb | ###Markdown
VARIABLES, SIMPLE DATA TYPES AND OPERATORS
In this section you will learn to use the different data types available in Python, as well as how to declare variables and perform operations.
1. Variables
A variable is an identifier that represents a space in memory. A value can be assigned to this space so that it can be used later as if it were a literal value; it can even be combined with other variables and reassigned another value at any time.
**A variable lets us store a value (number, text, etc.) in the computer's memory.**
1.1 Declaring variables in other languages
1. The variable is declared with its data type
2. A value is assigned to the variable
1.2 Declaring variables in Python
1. A value is assigned to a variable (no need to declare a data type)
2. Python infers the data type from the value it receives
###Code
## 1. Declaremos una variable para mejorar que nos ayude a enviar un mensaje al usuario
print('Hola Mundo')
## 2. Declarando una variable, la cual contiene un texto
mensaje = "Hola Mundo"
print(mensaje)
###Output
Hola Mundo
###Markdown
Exercise
Create a variable called msg that stores a message. Then print that message.
###Code
msg = 'hola a todos!!'
print(msg)
###Output
hola a todos!!
###Markdown
2. Data Types
In programming everything comes down to data that represents information. In simple terms, that information can take forms such as:
- Text strings
- Numbers
- Dates
- Images
- Sounds
- Videos
- Etc.
2.2 The Text Data Type
Right after numbers we should take a look at text strings; after all, they are how we communicate in writing. Letters or characters are, in short, writing symbols and another essential data type. They are always defined between single or double quotes:
###Code
# Usando comillas simples
'Esta es una cadena de texto'
# Cadena usando comillas dobles
"Esta es otra cadena de texto"
# Cadena de más de una linea
"""
Esta cadena tiene más de una línea
por lo que se usa
3 comillas dobles
dasdas
"""
print('hola\na todos')
###Output
hola
a todos
###Markdown
- Storing strings in variables
###Code
# Recuerda que podemos utilizar variables para almacenar una cadena de texto
texto = "Este es un texto"
###Output
_____no_output_____
###Markdown
Some Basic String Operations
Now that we know what a string is, let's look at some basic operations on text strings.
###Code
cad1 = "Hola"
cad2 = "Mundo"
## Concadenar dos cadenas de texto
print("Un divertido "+"programa "+"de "+ "radio")
## Usando variables
# Esto es igual a : "HolaMundo"
cad1 = "Un divertido "
cad2 = "programa"
print(cad1 + cad2)
## Multiplicar una cadena
print('Hola')
print('Hola'*3)
## Conocer largo de una cadena
len('Hola') # cadena de 4 caracteres
len(cad1)
## Identificando el tipo de dato
y = 'h'
type(y)
## Convirtiendo a string
str(3)
###Output
_____no_output_____
###Markdown
2.2. Numeric Data Types
They represent numeric values.
- Integers (int): represent whole numbers
- Decimals (float): represent values with a decimal point
###Code
# Int -> Valores enteros
3
x = 5
print(x)
# Float -> Representan valores con coma decimal
3.14
y = 8.17
print(y)
###Output
8.17
###Markdown
Identifying the data types
###Code
type(8)
p = 12.34234
type(p)
###Output
_____no_output_____
###Markdown
Operations on numbers
These represent the set of basic numeric operations.
###Code
## Indistintamente si un numero es "int" or "float" es posible realizar operaciones sobre ellos
numero_1 = 12
numero_2 = 8.5
7 // 3 # solo muestra la parte entera de la división
# sumando
print(numero_1 + numero_2)
# Restando
print(numero_1 - numero_2)
# Multiplicando: 12 * 2 = 24
print(numero_1 * 2)
# Diviendo : 12 / 2 = 6
print(numero_1 / 2)
# Potencia : 12 ** 2 = 144
print(numero_1 ** 2)
## Módulo de un número -> Me brinda el residuo de la división entre dos numeros
## 7 = 3 * 2 + 1 (1 es el residuo de la división)
7 % 3 # -> 3 no es divisor de 7
###Output
_____no_output_____
###Markdown
Other Basic Operations
###Code
int(3.55) # int() -> convierte otro tipo de dato a entero
int('3') # Convierto String a entero
float(3) # Convierto "int" a float
float('3.245') # Convierto "String" a float
###Output
_____no_output_____
###Markdown
Exercises
1. Write a program that performs the following arithmetic operation (use variables) $$\left(3 + \frac{2}{2 \times 5}\right)^2$$
###Code
a = 3
b = 2 / (2 * 5)
r = (a + b) ** 2
print(int(r))
###Output
10
###Markdown
2. Calculate the force in Newtons of a body with mass m = 4 kg and acceleration a = 3 m/s² \begin{equation} F = m * a\end{equation}
###Code
# 1. recupero valores de m y a
m = 4
a = 3
# 2. realizo el calculo
f = m * a
# 3. muetro en pantalla
print(f)
###Output
12
###Markdown
2.3 Boolean Data Type
Booleans represent a value of True or False as appropriate. They are written as:
- true = True
- false = False
###Code
True
# cualquier número excepto el 0 se interpreta como verdadero
bool(1)
False
bool(0)
## Conociendo el tipo de dato
type(True)
###Output
_____no_output_____
###Markdown
3.2 Relational (Comparison) Operators
They are used to compare two values; depending on the result of the comparison they return:
- True, if the comparison is true
- False, if it is not
###Code
# Comparando dos valores
a = 3
b ='s'
# Comparando a y b
a == b
a != b
c = 5
# Mayor que
a > c
's' > 5
###Output
_____no_output_____
###Markdown
**Careful:** keep the following in mind when implementing your logic.
3.3 Logical Operators
There are 3 special operators for performing logical operations. They are normally used to group, exclude and negate expressions. It may help to look at an explanation of truth tables:
- Not
- And
- Or
###Code
# Hallando el valor de verdad de la siguiente expresión
(9 < 12) and (12 > 7)
###Output
_____no_output_____
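###Markdown
A couple of extra combinations (added for illustration) showing `or` and `not` as well:
###Code
# 'or' is True if at least one side is True
print((9 < 12) or (12 < 7))
# 'not' negates a boolean value
print(not (9 < 12))
###Output
_____no_output_____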
###Markdown
Exercises
1. Expression to evaluate: 2 > p, for the value p = 5
###Code
p = 5
2 > p # 2 no es mayor que 5
###Output
_____no_output_____
###Markdown
2. Expression to evaluate: 8 == b and not b < 0, for the value b = 0
###Code
b = 0
(8 == b) and not(b<0)
True or not(True)
###Output
_____no_output_____
###Markdown
Additional Notes
###Code
# Jupyter Notebook no es necesario poner print
numero =3
numero
## Para este caso si es necesario el print
numero = 7
numero
x = 2
## El valor que quiera imprimir debe estar al final de todo
print(numero)
###Output
7
###Markdown
The type() Function
It helps us find out the data type of a variable.
###Code
# Conociendo el tipo de dato de la variable número
type(numero)
# Pasando un numero como texto
numero_texto ='3.14'
numero_texto = float(numero_texto) # reasingacion de variable
type(numero_texto)
###Output
_____no_output_____
###Markdown
Assignment Operators
They allow new values to be assigned to variables quickly.
Addition assignment
It quickly adds a given number to a variable.
###Code
# Defino a como 5
a = 5
a = a + 2
print(a)
# Aumento dos a la variable a (a = a+2)
a += 2
a
# a = a * 10
a *= 10
a
###Output
_____no_output_____
###Markdown
DATA TYPES, OPERATORS AND VARIABLES IN PYTHON
1. Data Types
In programming everything comes down to data that represents information:
- Numbers
- Text
- Dates
- Images
- Sounds
- Videos
- Etc.
1.1. Numeric Data Types
They represent numeric values.
- Integers (int): represent whole numbers
- Decimals (float): represent decimal values
###Code
# Entero
3
int(3.55) # int() -> convierte otro tipo de dato a entero
int('3')
# float
12.3545
float(3)
float('3.245')
###Output
_____no_output_____
###Markdown
1.2 The Text Data Type
Right after numbers we should take a look at text strings; after all, they are how we communicate in writing. Letters or characters are, in short, writing symbols and another essential data type. They are always defined between single or double quotes:
###Code
# Usando comillas simples
'Esta es una cadena de texto'
# Cadena usando comillas dobles
"Esta es otra cadena de texto"
# Cadena de más de una linea
"""
Esta cadena tiene más de una línea
por lo que se usa
3 comillas dobles
"""
###Output
_____no_output_____
###Markdown
1.3 Boolean Data Type
Booleans represent a value of True or False as appropriate. They are written as:
- true = True
- false = False
###Code
bool(1)
bool(0)
False
True
###Output
_____no_output_____
###Markdown
Finding Out the Data Type
###Code
type('hola')
type(3.45)
###Output
_____no_output_____
###Markdown
2. Variables
A variable is an identifier that represents a space in memory. A value can be assigned to this space so that it can be used later as if it were a literal value; it can even be combined with other variables and reassigned another value at any time. A variable lets us store a value (number, text, etc.) in the computer's memory.
2.1 Declaring variables in other languages
1. The variable is declared with its data type
2. A value is assigned to the variable
2.2 Declaring variables in Python
1. A value is assigned to a variable (no need to declare a data type)
2. Python infers the data type from the value it receives
###Code
# declaración de variable en python
numero = 3.14
numero
# Jupyter Notebook no es necesario poner print
numero
###Output
_____no_output_____
###Markdown
The type() Function
It helps us find out the data type of a variable.
###Code
# Conociendo el tipo de dato de la variable número
type(numero)
# Pasando un numero como texto
numero_texto ='3.14'
numero_texto = float(numero_texto)
type(numero_texto)
###Output
_____no_output_____
###Markdown
3. Operators
3.1 Arithmetic Operators
These represent the set of basic numeric operations.
Examples
###Code
# Suma
3+2
# Resta
a = 5.2
b= 3.2
a - b
# Modulo (Nos brinda la parte entera que no puede ser dividida entre dos números)
a=3
b=2
a%b
'texto'+' '+'hola'
###Output
_____no_output_____
###Markdown
3.2 Relational (Comparison) Operators
They are used to compare two values; depending on the result of the comparison they return:
- True, if the comparison is true
- False, if it is not
###Code
# Comparando dos valores
a= 3
b='s'
# Comparando a y b
a==b
a != b
c= 5
# Mayor que
a > c
###Output
_____no_output_____
###Markdown
3.3 Logical Operators
There are 3 special operators for performing logical operations. They are normally used to group, exclude and negate expressions. It may help to look at an explanation of truth tables:
- Not
- And
- Or
###Code
# Hallando el valor de verdad de la siguiente expresión
(9 < 12) and (12 > 7)
###Output
_____no_output_____
###Markdown
3.4 Assignment Operators
They allow new values to be assigned to variables quickly.
Addition assignment
It quickly adds a given number to a variable.
###Code
# Defino a como 5
a = 5
a = a + 2
a
# Aumento dos a la variable a (a = a+2)
a *= 2
a
###Output
_____no_output_____
###Markdown
EXERCISES
1. Write a program that stores the string ¡Hola Mundo! in a variable and then prints the content of the variable to the screen.
###Code
cadena = '¡Hola Mundo!'
print(cadena)
###Output
¡Hola Mundo!
###Markdown
2. Write a program that performs the following arithmetic operation: ((3+2) / (2×5))².
###Code
operacion = ((3+2) / (2*5))**2
print(operacion)
###Output
0.25
###Markdown
VARIABLES, SIMPLE DATA TYPES AND OPERATORS
In this section you will learn to use the different data types available in Python, as well as how to declare variables and perform operations.
1. Variables
A variable is an identifier that represents a space in memory. A value can be assigned to this space so that it can be used later as if it were a literal value; it can even be combined with other variables and reassigned another value at any time.
**A variable lets us store a value (number, text, etc.) in the computer's memory.**
1.1 Declaring variables in other languages
1. The variable is declared with its data type
2. A value is assigned to the variable
1.2 Declaring variables in Python
1. A value is assigned to a variable (no need to declare a data type)
2. Python infers the data type from the value it receives
###Code
## 1. Declaremos una variable para mejorar que nos ayude a enviar un mensaje al usuario
print('Hola Mundo')
variable1=2
type(variable1)
variable2="hola"
type(variable2)
## 2. Declarando una variable, la cual contiene un texto
mensaje = "Hola Mundo"
print(mensaje)
###Output
_____no_output_____
###Markdown
Exercise
Create a variable called msg that stores a message. Then print that message.
###Code
msg=""
msg=input("ingrese un mensaje")
print(msg)
###Output
###Markdown
2. Data Types
In programming everything comes down to data that represents information. In simple terms, that information can take forms such as:
- Text strings
- Numbers
- Dates
- Images
- Sounds
- Videos
- Etc.
2.2 The Text Data Type
Right after numbers we should take a look at text strings; after all, they are how we communicate in writing. Letters or characters are, in short, writing symbols and another essential data type. They are always defined between single or double quotes:
###Code
# Usando comillas simples
'Esta es una cadena de texto'
# Cadena usando comillas dobles
"Esta es otra cadena de texto"
# Cadena de más de una linea
"""
Esta cadena tiene más de una línea
por lo que se usa
3 comillas dobles
dasdas
"""
###Output
_____no_output_____
###Markdown
- Storing strings in variables
###Code
# Recuerda que podemos utilizar variables para almacenar una cadena de texto
texto = "Este es un texto"
###Output
_____no_output_____
###Markdown
Some Basic String Operations
Now that we know what a string is, let's look at some basic operations on text strings.
###Code
cad1 = "Hola"
cad2 = "Mundo"
## Concadenar dos cadenas de texto
print("Un divertido "+"programa "+"de "+ "radio")
## Usando variables
# Esto es igual a : "HolaMundo"
cad1 = "Un divertido "
cad2 = "programa"
print(cad1 + cad2)
## Multiplicar una cadena
print('Hola')
print('Hola'*3)
## Conocer largo de una cadena
len('Holaaaaa ') # cadena de 4 caracteres
## Identificando el tipo de dato
y = 11
type(y)
## Convirtiendo a string
str(3)
###Output
_____no_output_____
###Markdown
2.2. Numeric Data Types
They represent numeric values.
- Integers (int): represent whole numbers
- Decimals (float): represent values with a decimal point
###Code
# Int -> Valores enteros
3
x = 5
print(x)
# Float -> Representan valores con coma decimal
3.14
y = 8.17
print(y)
###Output
_____no_output_____
###Markdown
Identifying the data types
###Code
type(8)
p = 12.34234
type(p)
###Output
_____no_output_____
###Markdown
Operations on numbers
These represent the set of basic numeric operations.
###Code
## Indistintamente si un numero es "int" or "float" es posible realizar operaciones sobre ellos
numero_1 = 12 ##entero
numero_2 = 8.54 ## flotante
# sumando
print(numero_1 + numero_2)
# Restando
print(numero_1 - numero_2)
# Multiplicando: 12 * 2 = 24
print(numero_1 * 2)
# Diviendo : 12 / 2 = 6
print(int(numero_1 / 2))
# Potencia : 12 ** 2 = 144
print(numero_1 ** 2)
## Módulo de un número -> Me brinda el residuo de la división entre dos numeros
## 7 = 3 * 2 + 1 (1 es el residuo de la división)
7 % 3
###Output
_____no_output_____
###Markdown
Other Basic Operations
###Code
int(3.55) # int() -> convierte otro tipo de dato a entero
int('3') # Convierto String a entero
float(3) # Convierto "int" a float
float('3.245') # Convierto "String" a float
###Output
_____no_output_____
###Markdown
Exercises
1. Write a program that performs the following arithmetic operation (use variables) $$\left(3 + \frac{2}{2 \times 5}\right)^2$$
###Code
aritmetica = (3 + (2 / (2 * 5))) ** 2
aritmetica
###Output
_____no_output_____
###Markdown
2. Calculate the force in Newtons of a body with mass m = 4 kg and acceleration a = 3 m/s² \begin{equation} F = m * a\end{equation}
###Code
masa=4
aceleracion=3
fuerza=masa*aceleracion
fuerza
###Output
_____no_output_____
###Markdown
2.3 Boolean Data Type
Booleans represent a value of True or False as appropriate. They are written as:
- true = True
- false = False
###Code
True
cargado=False
a=10
b=2
a==b
# cualquier número excepto el 0 se interpreta como verdadero
bool("")
False
bool(0)
## Conociendo el tipo de dato
type(True)
###Output
_____no_output_____
###Markdown
3.2 Relational (Comparison) Operators
They are used to compare two values; depending on the result of the comparison they return:
- True, if the comparison is true
- False, if it is not
###Code
# Comparando dos valores
a = 3
b ='s'
# Comparando a y b
a == b
a != b
c = 5
# Mayor que
a > c
b=10
a=2
((a>c) and (b>c))
###Output
_____no_output_____
###Markdown
**Careful:** keep the following in mind when implementing your logic.
3.3 Logical Operators
There are 3 special operators for performing logical operations. They are normally used to group, exclude and negate expressions. It may help to look at an explanation of truth tables:
- Not
- And
- Or
###Code
# Hallando el valor de verdad de la siguiente expresión
(9 < 12) and (12 > 7)
##edad,peso,tama,imc
##if((edad>10)and(peso<=45) and not (imc<=21)): ## true o false
## print("usted esta bien de salud ")
###Output
_____no_output_____
###Markdown
Exercises
1. Expression to evaluate: 2 > p, for the value p = 5
###Code
p=5
2>p
###Output
_____no_output_____
###Markdown
2. Expression to evaluate: 8 == b and not b < 0, for the value b = 0
###Code
b=0
(8==b) and not (b<0)
###Output
_____no_output_____
###Markdown
Additional Notes
###Code
# Jupyter Notebook no es necesario poner print
numero =3
numero
## Para este caso si es necesario el print
numero = 7
numero
x = 2
## El valor que quiera imprimir debe estar al final de todo
print(numero)
###Output
_____no_output_____
###Markdown
The type() Function
It helps us find out the data type of a variable.
###Code
# Conociendo el tipo de dato de la variable número
type(numero)
# Pasando un numero como texto
numero_texto ='3.14'
numero_texto = float(numero_texto) # reasingacion de variable
type(numero_texto)
###Output
_____no_output_____
###Markdown
Assignment Operators
They allow new values to be assigned to variables quickly.
Addition assignment
It quickly adds a given number to a variable.
###Code
#### estructura
## definimos las variables
## asignacion directa y/o asignacion por input
## ejecucion de sentecias para transformar mis variables sus dato a informacion
## resultado
# Defino a como 5
a = 5
a = a + 2
##valor nuevo=valor antiguo + lo nuevo
print(a)
# Aumento dos a la variable a (a = a+2)
a += 2
a
a
# a = a * 10
a/= 10
###Output
_____no_output_____ |
notebooks/S10D_MCMC.ipynb | ###Markdown
Metropolis and Gibbs Sampling
====
###Code
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
import seaborn as sns
from functools import partial
sns.set_context('notebook', font_scale=1.5)
###Output
_____no_output_____
###Markdown
Introduction to MCMC
In regular Markov chain models, we are usually interested in finding the equilibrium distribution $\pi$ at which $\pi^T T = \pi^T$ for a given transition kernel $T$.
MCMC inverts this thinking - we fix the equilibrium distribution to be the posterior distribution
$$p(\theta \mid X) = \frac{p(X \mid \theta) \, p(\theta \mid \alpha)}{\int{p(X \mid \theta) \, p(\theta \mid \alpha) d\theta}}$$
and look for a transition kernel that will converge to this equilibrium distribution.
Island hopping
We first provide an example to show the mechanics of the Metropolis algorithm concretely, then explore why it works. [Kruschke's book](https://sites.google.com/site/doingbayesiandataanalysis/) begins with a fun example of a politician visiting a chain of islands to canvass support - being callow, the politician uses a simple rule to determine which island to visit next. Each day, the politician chooses a neighboring island and compares the populations there with the population of the current island. If the neighboring island has a larger population, the politician goes over. If the neighboring island has a smaller population, then the politician visits with probability $p = p_\text{neighbor} / p_\text{current}$; otherwise the politician stays on the same island. After doing this for many days, the politician will end up spending time on each island proportional to the population of each island - in other words, estimating the distribution of island populations correctly. How a simple comparison of only two states at a time can lead to accurate estimation of a probability density is the topic of the next few lectures.
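As a quick numerical illustration (the 3-state transition matrix below is made up for this check and is not part of the island example), the left eigenvector of $T$ with eigenvalue 1, normalized to sum to 1, satisfies $\pi^T T = \pi^T$:
###Code
# a small, made-up transition matrix (each row sums to 1)
T = np.array([[0.5, 0.4, 0.1],
              [0.2, 0.6, 0.2],
              [0.1, 0.3, 0.6]])
# the stationary distribution is the left eigenvector of T with eigenvalue 1
eigvals, eigvecs = np.linalg.eig(T.T)
pi = np.real(eigvecs[:, np.argmin(np.abs(eigvals - 1))])
pi = pi / pi.sum()
print(pi)      # stationary distribution
print(pi @ T)  # should match pi
###Output
_____no_output_____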
###Code
def make_islands(n, low=10, high=101):
islands = np.random.randint(low, high, n+2)
islands[0] = 0
islands[-1] = 0
return islands
def hop(islands, start=1, niter=1000):
pos = start
pop = islands[pos]
thetas = np.zeros(niter+1, dtype='int')
thetas[0] = pos
for i in range(niter):
# generate sample from proposal distribution
k = np.random.choice([-1, 1], 1)
next_pos = pos + k
# evaluate unnormalized target distribution at proposed position
next_pop = islands[next_pos]
# calculate acceptance probability
p = min(1, next_pop/pop)
# use uniform random to decide accept/reject proposal
if np.random.random() < p:
pos = next_pos
pop = next_pop
thetas[i+1] = pos
return thetas
islands = make_islands(10)
thetas = hop(islands, start=1, niter=10000)
###Output
_____no_output_____
###Markdown
True population proportions
###Code
data = islands[1:-1]
data = data/data.sum()
sns.barplot(x=np.arange(len(data)), y=data)
pass
###Output
_____no_output_____
###Markdown
Estimated population proportions
###Code
data = np.bincount(thetas)[1:]
data = data/data.sum()
sns.barplot(x=np.arange(len(data)), y=data)
pass
###Output
_____no_output_____
###Markdown
Generic Metropolis scheme
###Code
def metroplis(start, target, proposal, niter, nburn=0):
current = start
post = [current]
for i in range(niter):
proposed = proposal(current)
p = min(target(proposed)/target(current), 1)
if np.random.random() < p:
current = proposed
post.append(current)
return post[nburn:]
###Output
_____no_output_____
###Markdown
Apply to the island hopper
###Code
target = lambda x: islands[x]
proposal = lambda x: x + np.random.choice([-1, 1])
post = metroplis(1, target, proposal, 2000)
data = np.bincount(post)[1:]
data = data/data.sum()
sns.barplot(x=np.arange(len(data)), y=data)
pass
###Output
_____no_output_____
###Markdown
Bayesian Data Analysis
----
The fundamental objective of Bayesian data analysis is to determine the posterior distribution
$$p(\theta \ | \ X) = \frac{p(X \ | \ \theta) p(\theta)}{p(X)}$$
where the denominator is
$$p(X) = \int d\theta^* p(X \ | \ \theta^*) p(\theta^*) $$
Here,
- $p(X \ | \ \theta)$ is the likelihood,
- $p(\theta)$ is the prior and
- $p(X)$ is a normalizing constant also known as the evidence or marginal likelihood
The computational issue is the difficulty of evaluating the integral in the denominator. There are many ways to address this difficulty, including:
- In cases with conjugate priors (with conjugate priors, the posterior has the same distribution family as the prior), we can get closed form solutions
- We can use numerical integration
- We can approximate the functions used to calculate the posterior with simpler functions and show that the resulting approximate posterior is "close" to the true posterior (variational Bayes)
- We can use Monte Carlo methods, of which the most important is Markov Chain Monte Carlo (MCMC).
In simple Monte Carlo integration, we want to estimate the integral $\int f(x) \, p(x) \, dx$. With Bayesian models, the distribution $p(x)$ in the integral is the posterior
$$p(x) = p(\theta \ | \ X) = \frac{p(X \ | \ \theta) p(\theta)}{\int d\theta^* p(X \ | \ \theta^*) p(\theta^*) }$$
MCMC allows us to sample from the posterior distribution - the samples will not be independent, unlike simple Monte Carlo integration, but this is OK as we can compensate for the auto-correlation by drawing a larger number of samples.
Motivating example
We will use the toy example of estimating the bias of a coin given a sample consisting of $n$ tosses to illustrate a few of the approaches.
Analytical solution
If we use a beta distribution as the prior, then the posterior distribution has a closed form solution. This is shown in the example below. Some general points:
- We need to choose a prior distribution family (i.e. the beta here) as well as its parameters (here a=10, b=10)
- The prior distribution may be relatively uninformative (i.e. more flat) or informative (i.e. more peaked)
- The posterior depends on both the prior and the data
  - As the amount of data becomes large, the posterior approximates the MLE
  - An informative prior takes more data to shift than an uninformative one
- Of course, it is also important that the model used (i.e. the likelihood) is appropriate for fitting the data
- The mode of the posterior distribution is known as the maximum a posteriori (MAP) estimate (cf MLE, which is the mode of the likelihood)
###Code
import scipy.stats as stats
n = 100
h = 61
p = h/n
rv = stats.binom(n, p)
mu = rv.mean()
a, b = 10, 10
prior = stats.beta(a, b)
post = stats.beta(h+a, n-h+b)
ci = post.interval(0.95)
thetas = np.linspace(0, 1, 200)
plt.plot(thetas, prior.pdf(thetas), label='Prior', c='blue')
plt.plot(thetas, post.pdf(thetas), label='Posterior', c='red')
plt.plot(thetas, n*stats.binom(n, thetas).pmf(h), label='Likelihood', c='green')
plt.axvline((h+a-1)/(n+a+b-2), c='red', linestyle='dashed', alpha=0.4, label='MAP')
plt.axvline(mu/n, c='green', linestyle='dashed', alpha=0.4, label='MLE')
plt.xlim([0, 1])
plt.axhline(0.3, ci[0], ci[1], c='black', linewidth=2, label='95% CI');
plt.xlabel(r'$\theta$', fontsize=14)
plt.ylabel('Density', fontsize=16)
plt.legend(loc='upper left')
pass
###Output
_____no_output_____
###Markdown
Numerical integration
One simple way of numerical integration is to estimate the values on a grid of values for $\theta$. To calculate the posterior, we find the prior and the likelihood for each value of $\theta$, and for the marginal likelihood, we replace the integral with the equivalent sum
$$p(X) = \sum_{\theta^*} p(X | \theta^*) p(\theta^*) $$
One advantage of this is that the prior does not have to be conjugate (although the example below uses the same beta prior for ease of comparison), and so we are not restricted in our choice of an appropriate prior distribution. For example, the prior can be a mixture distribution or estimated empirically from data. The disadvantage, of course, is that this is computationally very expensive when we need to estimate multiple parameters, since the number of grid points grows as $\mathcal{O}(n^d)$, where $n$ defines the grid resolution and $d$ is the size of $\theta$.
###Code
thetas = np.linspace(0, 1, 200)
prior = stats.beta(a, b)
post = prior.pdf(thetas) * stats.binom(n, thetas).pmf(h)
# Normalize so the area under the curve is 1
post /= (post.sum() / len(thetas))
plt.plot(thetas, prior.pdf(thetas), label='Prior', c='blue')
plt.plot(thetas, n*stats.binom(n, thetas).pmf(h), label='Likelihood', c='green')
plt.plot(thetas, post, label='Posterior', c='red')
plt.xlim([0, 1])
plt.xlabel(r'$\theta$', fontsize=14)
plt.ylabel('Density', fontsize=16)
plt.legend()
pass
###Output
_____no_output_____
###Markdown
Markov Chain Monte Carlo (MCMC)
This lecture will only cover the basic ideas of MCMC and the 3 common variants - Metropolis, Metropolis-Hastings and Gibbs sampling. All code will be built from the ground up to illustrate what is involved in fitting an MCMC model, but only toy examples will be shown since the goal is conceptual understanding. More realistic computational examples will be shown in coming lectures using the `pymc3` and `pystan` packages.
In Bayesian statistics, we want to estimate the posterior distribution, but this is often intractable due to the high-dimensional integral in the denominator (marginal likelihood). A few other ideas we have encountered that are also relevant here are Monte Carlo integration with independent samples and the use of proposal distributions (e.g. rejection and importance sampling). As we have seen from the Monte Carlo integration lectures, we can approximate the posterior $p(\theta | X)$ if we can somehow draw many samples that come from the posterior distribution. With vanilla Monte Carlo integration, we need the samples to be independent draws from the posterior distribution, which is a problem if we do not actually know what the posterior distribution is (because we cannot integrate the marginal likelihood). With MCMC, we draw samples from a (simple) proposal distribution so that each draw depends only on the state of the previous draw (i.e. the samples form a Markov chain). Under certain conditions, the Markov chain will have a unique stationary distribution. In addition, not all samples are used - instead we set up acceptance criteria for each draw based on comparing successive states with respect to a target distribution that ensure that the stationary distribution is the posterior distribution of interest. The nice thing is that this target distribution only needs to be proportional to the posterior distribution, which means we don't need to evaluate the potentially intractable marginal likelihood, which is just a normalizing constant. We can find such a target distribution easily, since `posterior` $\propto$ `likelihood` $\times$ `prior`. After some time, the Markov chain of accepted draws will converge to the stationary distribution, and we can use those samples as (correlated) draws from the posterior distribution, and find functions of the posterior distribution in the same way as for vanilla Monte Carlo integration. There are several flavors of MCMC, but the simplest to understand is the Metropolis-Hastings random walk algorithm, and we will start there.
Metropolis-Hastings random walk algorithm for estimating the bias of a coin
To carry out the Metropolis-Hastings algorithm, we need to draw random samples from the following distributions:
- the standard uniform distribution
- a proposal distribution $p(x)$ that we choose to be $\mathcal{N}(0, \sigma)$
- the target distribution $g(x)$ which is proportional to the posterior probability
Given an initial guess for $\theta$ with positive probability of being drawn, the Metropolis-Hastings algorithm proceeds as follows:
- Choose a new proposed value ($\theta_p$) such that $\theta_p = \theta + \Delta\theta$ where $\Delta \theta \sim \mathcal{N}(0, \sigma)$
- Calculate the ratio
$$\rho = \frac{g(\theta_p \ | \ X)}{g(\theta \ | \ X)} $$
where $g$ is the posterior probability.
- If the proposal distribution is not symmetrical, we need to weight the acceptance probability to maintain detailed balance (reversibility) of the stationary distribution, and instead calculate
$$\rho = \frac{g(\theta_p \ | \ X) p(\theta \ | \ \theta_p)}{g(\theta \ | \ X) p(\theta_p \ | \ \theta)} $$
Since we are taking ratios, the denominator cancels, so any distribution proportional to $g$ will also work - hence we can use
$$\rho = \frac{p(X | \theta_p ) p(\theta_p)}{p(X | \theta ) p(\theta)}$$
- If $\rho \ge 1$, then set $\theta = \theta_p$
- If $\rho \lt 1$, then set $\theta = \theta_p$ with probability $\rho$, otherwise set $\theta = \theta$ (this is where we use the standard uniform distribution)
- Repeat the earlier steps
After some number of iterations $k$, the samples $\theta_{k+1}, \theta_{k+2}, \dots$ will be samples from the posterior distribution. Here are initial concepts to help your intuition about why this is so:
- We accept a proposed move to $\theta_{k+1}$ whenever the density of the (unnormalized) target distribution at $\theta_{k+1}$ is larger than the value at $\theta_k$ - so $\theta$ will more often be found in places where the target distribution is denser
- If this was all we accepted, $\theta$ would get stuck at a local mode of the target distribution, so we also accept occasional moves to lower density regions - it turns out that the correct probability of doing so is given by the ratio $\rho$
- The acceptance criterion only looks at ratios of the target distribution, so the denominator cancels out and does not matter - that is why we only need samples from a distribution proportional to the posterior distribution
- So, $\theta$ will be expected to bounce around in such a way that it spends its time in places proportional to the density of the posterior distribution - that is, $\theta$ is a draw from the posterior distribution.
Additional notes:
Different proposal distributions can be used for Metropolis-Hastings:
- The independence sampler uses a proposal distribution that is independent of the current value of $\theta$. In this case the proposal distribution needs to be similar to the posterior distribution for efficiency, while ensuring that the acceptance ratio is bounded in the tail region of the posterior.
- The random walk sampler (used in this example) takes a random step centered at the current value of $\theta$ - efficiency is a trade-off between a small step size with high probability of acceptance and large step sizes with low probability of acceptance. Note (picture will be sketched in class) that the random walk may take a long time to traverse narrow regions of the probability distribution. Changing the step size (e.g. scaling $\Sigma$ for a multivariate normal proposal distribution) so that a target proportion of proposals are accepted is known as *tuning*.
- Much research is being conducted on different proposal distributions for efficient sampling of the posterior distribution.
We will first see a numerical example and then try to understand why it works.
###Code
def target(lik, prior, n, h, theta):
if theta < 0 or theta > 1:
return 0
else:
return lik(n, theta).pmf(h)*prior.pdf(theta)
n = 100
h = 61
a = 10
b = 10
lik = stats.binom
prior = stats.beta(a, b)
sigma = 0.3
naccept = 0
theta = 0.1
niters = 10000
samples = np.zeros(niters+1)
samples[0] = theta
for i in range(niters):
theta_p = theta + stats.norm(0, sigma).rvs()
rho = min(1, target(lik, prior, n, h, theta_p)/target(lik, prior, n, h, theta ))
u = np.random.uniform()
if u < rho:
naccept += 1
theta = theta_p
samples[i+1] = theta
nmcmc = len(samples)//2
print("Efficiency = ", naccept/niters)
post = stats.beta(h+a, n-h+b)
plt.hist(samples[nmcmc:], 40, histtype='step', density=True, linewidth=1, label='Posterior');
plt.hist(prior.rvs(nmcmc), 40, histtype='step', density=True, linewidth=1, label='Prior');
plt.plot(thetas, post.pdf(thetas), c='red', linestyle='--', alpha=0.5, label='True posterior')
plt.xlim([0,1]);
plt.legend(loc='upper left')
pass
###Output
_____no_output_____
###Markdown
Assessing for convergence
Trace plots are often used to informally assess for stochastic convergence. Rigorous demonstration of convergence is an unsolved problem, but simple ideas such as running multiple chains and checking that they are converging to similar distributions are often employed in practice.
###Code
def mh_coin(niters, n, h, theta, lik, prior, sigma):
samples = [theta]
while len(samples) < niters:
theta_p = theta + stats.norm(0, sigma).rvs()
rho = min(1, target(lik, prior, n, h, theta_p)/target(lik, prior, n, h, theta ))
u = np.random.uniform()
if u < rho:
theta = theta_p
samples.append(theta)
return samples
n = 100
h = 61
lik = stats.binom
prior = stats.beta(a, b)
sigma = 0.05
niters = 100
sampless = [mh_coin(niters, n, h, theta, lik, prior, sigma) for theta in np.arange(0.1, 1, 0.2)]
# Convergence of multiple chains
for samples in sampless:
plt.plot(samples, '-o')
plt.xlim([0, niters])
plt.ylim([0, 1]);
###Output
_____no_output_____
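###Markdown
As a rough, informal extra check (not from the original notes), the Gelman-Rubin statistic $\hat{R}$ compares between-chain and within-chain variance for the chains generated above; values close to 1 suggest the chains are exploring similar distributions. Because these short chains include their burn-in, treat the number loosely.
###Code
# informal Gelman-Rubin (R-hat) diagnostic on the chains generated above
chains = np.array(sampless)                      # shape (m, n): m chains of length n
n_samples = chains.shape[1]
B = n_samples * chains.mean(axis=1).var(ddof=1)  # between-chain variance
W = chains.var(axis=1, ddof=1).mean()            # mean within-chain variance
var_hat = (n_samples - 1) / n_samples * W + B / n_samples
print('R-hat =', np.sqrt(var_hat / W))
###Output
_____no_output_____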
###Markdown
Why does Metropolis-Hastings work?
There are two main ideas - first that the samples generated by MCMC constitute a Markov chain, and that this Markov chain has a unique stationary distribution that is always reached if we generate a very large number of samples. The second idea is to show that this stationary distribution is exactly the posterior distribution that we are looking for. We will only give the intuition here as a refresher.
One: There is a unique stationary state
Since possible transitions depend only on the current and the proposed values of $\theta$, the successive values of $\theta$ in a Metropolis-Hastings sample constitute a Markov chain. Recall that for a Markov chain with a transition matrix $T$
$$\pi = \pi T$$
means that $\pi$ is a stationary distribution. If it is possible to go from any state to any other state, then the matrix is irreducible. If, in addition, it is not possible to get stuck in an oscillation, then the matrix is also aperiodic or mixing. For finite state spaces, irreducibility and aperiodicity guarantee the existence of a unique stationary state. For a continuous state space, we need an additional property of positive recurrence - starting from any state, the expected time to come back to the original state must be finite. If we have all 3 properties of irreducibility, aperiodicity and positive recurrence, then there is a unique stationary distribution. The term ergodic is a little confusing - most standard definitions take ergodicity to be equivalent to irreducibility, but Bayesian texts often take ergodicity to mean irreducibility, aperiodicity and positive recurrence, and we will follow the latter convention.
For another intuitive perspective, the random walk Metropolis-Hastings algorithm is analogous to a diffusion process. Since all states are communicating (by design), eventually the system will settle into an equilibrium state. This is analogous to converging on the stationary state.
Two: The stationary state is the posterior probability distribution
We will consider the simplest possible scenario for an explicit calculation. Suppose we have a two-state system where the posterior probabilities are $\theta$ and $1 - \theta$, and suppose $\theta \lt 0.5$. With the Metropolis-Hastings algorithm, we find the stationary distribution $\pi = \left( \begin{array}{cc} p & 1-p \end{array} \right)$ by solving
$$\left( \begin{array}{cc} p & 1-p \end{array} \right) = \left( \begin{array}{cc} p & 1-p \end{array} \right) \left( \begin{array}{cc} 0 & 1 \\ \frac{\theta}{1-\theta} & 1-\frac{\theta}{1-\theta} \end{array} \right)$$
to be $\pi = \left( \begin{array}{cc} \theta & 1-\theta \end{array} \right)$, which is the posterior distribution.
The final point is that a stationary distribution has to follow the detailed balance (reversibility) criterion that says that the probability of being in state $x$ and moving to state $y$ must be the same as the probability of being in state $y$ and moving to state $x$. Or, more briefly,
$$\pi(x)T(x \to y) = \pi(y)T(y \to x)$$
and the need to make sure that this condition is true accounts for the strange looking acceptance criterion
$$\min \left(1, \frac{g(\theta_p \ | \ X) p(\theta \ | \ \theta_p)}{g(\theta \ | \ X) p(\theta_p \ | \ \theta)} \right)$$
Intuition
We want the stationary distribution $\pi(x)$ to be the posterior distribution $P(x)$.
So we set$$P(x)T(x \to y) = P(y)T(y \to x)$$Rearranging, we get$$\frac{T(x \to y)}{T(y \to x)} = \frac{P(y)}{P(x)}$$We split the transition probability into separate proposal $q$ and acceptance $A$ parts, and after a little algebraic rearrangement get$$\frac{A(x \to y)}{A(y \to x)} = \frac{P(y) \, q(y \to x)}{P(x) \, q(x \to y)}$$An acceptance probability that meets this condition is$$A(x \to y) = \min \left(1, \frac{P(y) \, q(y \to x)}{P(x) \, q(x \to y)} \right)$$since $A$ in the numerator and denominator are both bounded above by 1.See [Chib and Greenberg](https://eml.berkeley.edu/reprints/misc/understanding.pdf) for algebraic details. The Gibbs samplerSuppose we have a vector of parameters $\theta = (\theta_1, \theta_2, \dots, \theta_k)$, and we want to estimate the joint posterior distribution $p(\theta | X)$. Suppose we can find and draw random samples from all the conditional distributions $$p(\theta_1 | \theta_2, \dots \theta_k, X) \\p(\theta_2 | \theta_1, \dots \theta_k, X) \\\dots \\p(\theta_k | \theta_1, \theta_2, \dots, X) $$With Gibbs sampling, the Markov chain is constructed by sampling from the conditional distribution for each parameter $\theta_i$ in turn, treating all other parameters as observed. When we have finished iterating over all parameters, we are said to have completed one cycle of the Gibbs sampler. Since hierarchical models are typically set up as products of conditional distributions, the Gibbs sampler is ubiquitous in Bayesian modeling. Where it is difficult to sample from a conditional distribution, we can sample using a Metropolis-Hastings algorithm instead - this is known as Metropolis within Gibbs. Gibbs sampling is a type of random walk through parameter space, and hence can be thought of as a Metropolis-Hastings algorithm with a special proposal distribution. At each iteration in the cycle, we are drawing a proposal for a new value of a particular parameter, where the proposal distribution *is* the conditional posterior probability of that parameter. This means that the proposal move is *always* accepted. Hence, if we can draw samples from the conditional distributions, Gibbs sampling can be much more efficient than regular Metropolis-Hastings. More formally, we want to show that $$\frac{P(y) \, q(y \to x)}{P(x) \, q(x \to y)} = 1$$We start by noting that $P(x_{-i})$ is the same as $P(y_{-i})$ since apart from the component $i$, the old state and the proposed new state are identical in Gibbs sampling. We also recall that $$P(x_i \mid x_{-i}) \, P(x_{-i}) = P(x_i, x_{-i}) = P(x)$$ by definition of conditional probability. So we have$$\begin{align}\frac{P(y) \, q(y \to x)}{P(x) \, q(x \to y)} &= \frac{P(y_i \mid y_{-i}) \, P(y_{-i})\, P(x_i \mid x_{-i}) }{P(x_i \mid x_{-i}) \, P(x_{-i})\, P(y_i \mid y_{-i})} = 1\end{align}$$**Advantages of Gibbs sampling**- No need to tune proposal distribution- Proposals are always accepted**Disadvantages of Gibbs sampling**- Need to be able to derive conditional probability distributions - Need to be able to (cheaply) draw random samples from conditional probability distributions- Can be very slow if parameters are correlated because you cannot take "diagonal" steps (draw picture to illustrate) Motivating example We will use the toy example, familiar from the EM lecture, of estimating the bias of two coins given sample pairs $(z_1, n_1)$ and $(z_2, n_2)$ where $z_i$ is the number of heads in $n_i$ tosses for coin $i$. Setup
###Code
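# Added illustration (not part of the original notebook): a numerical sanity check of
# the two-state stationary-distribution argument in the markdown above, using an
# assumed theta = 0.3 (< 0.5). pi = (theta, 1 - theta) should be left-invariant under
# the transition matrix T = [[0, 1], [theta/(1-theta), 1 - theta/(1-theta)]].
import numpy as np

_theta = 0.3
_T = np.array([[0.0, 1.0],
               [_theta / (1 - _theta), 1 - _theta / (1 - _theta)]])
_pi = np.array([_theta, 1 - _theta])
print(np.allclose(np.dot(_pi, _T), _pi))  # expected: True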
def bern(theta, z, N):
"""Bernoulli likelihood with N trials and z successes."""
return np.clip(theta**z * (1-theta)**(N-z), 0, 1)
def bern2(theta1, theta2, z1, z2, N1, N2):
"""Bernoulli likelihood with N trials and z successes."""
return bern(theta1, z1, N1) * bern(theta2, z2, N2)
def make_thetas(xmin, xmax, n):
xs = np.linspace(xmin, xmax, n)
widths =(xs[1:] - xs[:-1])/2.0
thetas = xs[:-1]+ widths
return thetas
from mpl_toolkits.mplot3d import Axes3D
def make_plots(X, Y, prior, likelihood, posterior, projection=None):
fig, ax = plt.subplots(1,3, subplot_kw=dict(projection=projection, aspect='equal'), figsize=(12,3))
if projection == '3d':
ax[0].plot_surface(X, Y, prior, alpha=0.3, cmap=plt.cm.jet)
ax[1].plot_surface(X, Y, likelihood, alpha=0.3, cmap=plt.cm.jet)
ax[2].plot_surface(X, Y, posterior, alpha=0.3, cmap=plt.cm.jet)
for ax_ in ax: ax_._axis3don = False
else:
ax[0].contour(X, Y, prior, cmap=plt.cm.jet)
ax[1].contour(X, Y, likelihood, cmap=plt.cm.jet)
ax[2].contour(X, Y, posterior, cmap=plt.cm.jet)
ax[0].set_title('Prior')
ax[1].set_title('Likelihood')
    ax[2].set_title('Posterior')
plt.tight_layout()
thetas1 = make_thetas(0, 1, 101)
thetas2 = make_thetas(0, 1, 101)
X, Y = np.meshgrid(thetas1, thetas2)
###Output
_____no_output_____
###Markdown
Analytic solution
###Code
a = 2
b = 3
z1 = 11
N1 = 14
z2 = 7
N2 = 14
prior = stats.beta(a, b).pdf(X) * stats.beta(a, b).pdf(Y)
likelihood = bern2(X, Y, z1, z2, N1, N2)
posterior = stats.beta(a + z1, b + N1 - z1).pdf(X) * stats.beta(a + z2, b + N2 - z2).pdf(Y)
make_plots(X, Y, prior, likelihood, posterior)
make_plots(X, Y, prior, likelihood, posterior, projection='3d')
###Output
_____no_output_____
###Markdown
Grid approximation
###Code
def c2d(thetas1, thetas2, pdf):
width1 = thetas1[1] - thetas1[0]
width2 = thetas2[1] - thetas2[0]
area = width1 * width2
pmf = pdf * area
pmf /= pmf.sum()
return pmf
_prior = bern2(X, Y, 2, 8, 10, 10) + bern2(X, Y, 8, 2, 10, 10)
prior_grid = c2d(thetas1, thetas2, _prior)
_likelihood = bern2(X, Y, 1, 1, 2, 3)
posterior_grid = _likelihood * prior_grid
posterior_grid /= posterior_grid.sum()
make_plots(X, Y, prior_grid, likelihood, posterior_grid)
make_plots(X, Y, prior_grid, likelihood, posterior_grid, projection='3d')
###Output
_____no_output_____
###Markdown
Metropolis
###Code
a = 2
b = 3
z1 = 11
N1 = 14
z2 = 7
N2 = 14
prior = lambda theta1, theta2: stats.beta(a, b).pdf(theta1) * stats.beta(a, b).pdf(theta2)
lik = partial(bern2, z1=z1, z2=z2, N1=N1, N2=N2)
target = lambda theta1, theta2: prior(theta1, theta2) * lik(theta1, theta2)
theta = np.array([0.5, 0.5])
niters = 10000
burnin = 500
sigma = np.diag([0.2,0.2])
thetas = np.zeros((niters-burnin, 2), float)
for i in range(niters):
new_theta = stats.multivariate_normal(theta, sigma).rvs()
p = min(target(*new_theta)/target(*theta), 1)
if np.random.rand() < p:
theta = new_theta
if i >= burnin:
thetas[i-burnin] = theta
kde = stats.gaussian_kde(thetas.T)
XY = np.vstack([X.ravel(), Y.ravel()])
posterior_metropolis = kde(XY).reshape(X.shape)
make_plots(X, Y, prior(X, Y), lik(X, Y), posterior_metropolis)
make_plots(X, Y, prior(X, Y), lik(X, Y), posterior_metropolis, projection='3d')
###Output
_____no_output_____
###Markdown
Gibbs
###Code
a = 2
b = 3
z1 = 11
N1 = 14
z2 = 7
N2 = 14
prior = lambda theta1, theta2: stats.beta(a, b).pdf(theta1) * stats.beta(a, b).pdf(theta2)
lik = partial(bern2, z1=z1, z2=z2, N1=N1, N2=N2)
target = lambda theta1, theta2: prior(theta1, theta2) * lik(theta1, theta2)
theta = np.array([0.5, 0.5])
niters = 10000
burnin = 500
sigma = np.diag([0.2,0.2])
thetas = np.zeros((niters-burnin,2), float)
for i in range(niters):
theta = [stats.beta(a + z1, b + N1 - z1).rvs(), theta[1]]
theta = [theta[0], stats.beta(a + z2, b + N2 - z2).rvs()]
if i >= burnin:
thetas[i-burnin] = theta
kde = stats.gaussian_kde(thetas.T)
XY = np.vstack([X.ravel(), Y.ravel()])
posterior_gibbs = kde(XY).reshape(X.shape)
make_plots(X, Y, prior(X, Y), lik(X, Y), posterior_gibbs)
make_plots(X, Y, prior(X, Y), lik(X, Y), posterior_gibbs, projection='3d')
###Output
_____no_output_____
###Markdown
Hierarchical models--- Hierarchical models have the following structure - first we specify that the data come from a distribution with parameters $\theta$$$X \sim f(X\ | \ \theta)$$and that the parameters themselves come from another distribution with hyperparameters $\lambda$$$\theta \sim g(\theta \ | \ \lambda)$$and finally that $\lambda$ comes from a prior distribution$$ \lambda \sim h(\lambda)$$More levels of hierarchy are possible - i.e. you can specify hyper-hyperparameters for the distribution of $\lambda$ and so on.The essential idea of the hierarchical model is that because the $\theta$s are not independent but rather are drawn from a common distribution with parameter $\lambda$, we can share information across the $\theta$s by also estimating $\lambda$ at the same time. As an example, suppose we have data about the proportion of heads after some number of tosses from several coins, and we want to estimate the bias of each coin. We also know that the coins come from the same mint and so might share some common manufacturing defect. There are two extreme approaches - we could estimate the bias of each coin from its coin toss data independently of all the others, or we could pool the results together and estimate the same bias for all coins. Hierarchical models provide a compromise where we shrink individual estimates towards a common estimate.Note that because of the conditionally independent structure of hierarchical models, Gibbs sampling is often a natural choice for the MCMC sampling strategy. Gibbs sampler example from [Robert and Casella, 10.17](http://www.springer.com/statistics/statistical+theory+and+methods/book/978-0-387-21239-5)Suppose we have data of the number of failures ($y_i$) for each of 10 pumps in a nuclear plant. We also have the times ($t_i$) at which each pump was observed. We want to model the number of failures with a Poisson likelihood, where the expected number of failures $\lambda_i$ differs for each pump. Since the time over which we observed each pump is different, we need to scale each $\lambda_i$ by its observed time $t_i$.We now specify the hierarchical model - note change of notation from the overview above - that $\theta$ is $\lambda$ (parameter) and $\lambda$ is $\beta$ (hyperparameter) simply because $\lambda$ is traditional for the Poisson distribution parameter. The likelihood $f$ is $$\prod_{i=1}^{10} \text{Poisson}(\lambda_i t_i)$$We let the prior $g$ for $\lambda$ be $$\lambda \sim \text{Gamma}(\alpha, \beta)$$with $\alpha = 1.8$ (an improper prior whose integral does not sum to 1) and let the hyperprior $h$ for $\beta$ be $$\beta \sim \text{Gamma}(\gamma, \delta)$$with $\gamma = 0.01$ and $\delta = 1$.There are 11 unknown parameters (10 $\lambda$s and $\beta$) in this hierarchical model.The posterior is $$p(\lambda, \beta \ | \ y, t) = \prod_{i=1}^{10} \text{Poisson}(\lambda_i t_i) \times \text{Gamma}(\alpha, \beta) \times \text{Gamma}(\gamma, \delta)$$with the conditional distributions needed for Gibbs sampling given by$$p(\lambda_i \ | \ \lambda_{-i}, \beta, y, t) = \text{Gamma}(y_i + \alpha, t_i + \beta)$$and $$p(\beta \ | \ \lambda, y, t) = \text{Gamma}(10\alpha + \gamma, \delta + \sum_{i=1}^{10} \lambda_i)$$
###Code
from numpy.random import gamma as rgamma # rename so we can use gamma for parameter name
def lambda_update(alpha, beta, y, t):
return rgamma(size=len(y), shape=y+alpha, scale=1.0/(t+beta))
def beta_update(alpha, gamma, delta, lambd, y):
return rgamma(size=1, shape=len(y) * alpha + gamma, scale=1.0/(delta + lambd.sum()))
def gibbs(niter, y, t, alpha, gamma, delta):
    lambdas_ = np.zeros((niter, len(y)), float)
    betas_ = np.zeros(niter, float)
lambda_ = y/t
for i in range(niter):
beta_ = beta_update(alpha, gamma, delta, lambda_, y)
lambda_ = lambda_update(alpha, beta_, y, t)
betas_[i] = beta_
lambdas_[i,:] = lambda_
return betas_, lambdas_
###Output
_____no_output_____
###Markdown
Setup
###Code
alpha = 1.8
gamma = 0.01
delta = 1.0
beta0 = 1
y = np.array([5, 1, 5, 14, 3, 19, 1, 1, 4, 22], int)
t = np.array([94.32, 15.72, 62.88, 125.76, 5.24, 31.44, 1.05, 1.05, 2.10, 10.48], float)
niter = 1000
betas, lambdas = gibbs(niter, y, t, alpha, gamma, delta)
print('%.3f' % betas.mean())
print('%.3f' % betas.std(ddof=1))
print(lambdas.mean(axis=0))
print(lambdas.std(ddof=1, axis=0))
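# Added illustration of the shrinkage discussed in the markdown above (not part of
# the original analysis): compare the raw failure rates y/t with the Gibbs posterior
# means of lambda, which are pulled towards a common value by the shared beta.
print(np.round(y / t, 3))                  # raw (maximum-likelihood) rates
print(np.round(lambdas.mean(axis=0), 3))   # posterior means from the Gibbs sampler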
plt.figure(figsize=(8, 16))
for i in range(len(lambdas.T)):
plt.subplot(5,2,i+1)
plt.plot(lambdas[::10, i]);
plt.title('Trace for $\lambda$%d' % i)
plt.tight_layout()
###Output
_____no_output_____ |
notebooks/bibliographie.ipynb | ###Markdown
Virus propagation models The spread of an infectious agent within a population is a dynamic phenomenon: the numbers of healthy and sick individuals change over time, depending on the contacts through which the agent passes from an infected individual to a healthy, non-immunized one, infecting them in turn. Such a phenomenon can be studied by modelling it with differential equations and determining its behaviour through the numerical solution of these equations.
###Code
import datetime
import os
import yaml
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
from IPython.display import Image
PATH = "C:/Users/kami/Desktop/AMSE/MAG 3/projet corona/"
###Output
_____no_output_____
###Markdown
I - SIR model: the basic model
###Code
Image(filename = PATH + "im1.PNG")
###Output
_____no_output_____
###Markdown
**S: the Healthy individuals (or Susceptible to infection),** **I: those who are Infected,** **R: those who are Recovered and can no longer be infected** The size of each of these populations obviously varies over time, and can therefore be modelled by a function of the independent variable t, time: S(t), I(t) and R(t). If, during the spread of the epidemic, the total population size P can be considered constant, we write: **S(t) + I(t) + R(t) = P**
###Code
Image(filename = PATH + "im2.PNG")
###Output
_____no_output_____
###Markdown
II - Extended basic model: SIR model taking contaminated individuals into account
###Code
Image(filename = PATH + "im3.PNG")
###Output
_____no_output_____
###Markdown
A new compartment can also be introduced to account for the fact that an individual can be contaminated without yet being contagious. We therefore introduce a fourth state variable C, whose value is the number of individuals in this contaminated, non-contagious state. ν is the duration, in days, of the incubation period.
###Code
Image(filename = PATH + "im4.PNG")
###Output
_____no_output_____
###Markdown
III - SIR model taking contaminated individuals and deaths into account The model assumes a subdivision of the population into 5 groups.
###Code
Image(filename = PATH + "im5.PNG")
###Output
_____no_output_____
###Markdown
• **S: the Healthy group, comprising all healthy people in the population considered,** • **C: this compartment groups all people likely to be carriers (contaminated) or to develop symptoms, having been in contact with an infected person,** • **I: the Infected compartment groups all people who have tested positive,** • **R: the Recovered, i.e. all people who have been cured after being infected,** • **F: the group of critical (fatal) cases, comprising people who have tested positive and develop the most severe (fatal) form of the infection.**
###Code
Image(filename = PATH + "im6.PNG")
###Output
_____no_output_____
###Markdown
Applying the model We apply the last model to the coronavirus data
###Code
# Read the environment file
ENV_FILE = '../env.yaml'
with open(ENV_FILE) as f:
    params = yaml.load(f, Loader=yaml.FullLoader)
# Initialize the paths to the files
ROOT_DIR = os.path.dirname(os.path.abspath(ENV_FILE))
DATA_FILE = os.path.join(ROOT_DIR,
params['directories']['processed'],
params['files']['all_data'])
# Read the data file
epidemie_df = (pd.read_csv(DATA_FILE, parse_dates=['Last Update'])
.assign(day=lambda _df: _df['Last Update'].dt.date)
.drop_duplicates(subset=['Country/Region', 'Province/State', 'day'])
[lambda df: df['day'] <= datetime.date(2020, 3, 20)]
)
epidemie_df.head()
###Output
_____no_output_____
###Markdown
Initialization
###Code
country = "France"
total_population = 66_990_000
###Output
_____no_output_____
###Markdown
Processing
###Code
country_df = (epidemie_df[epidemie_df['Country/Region'] == country]
.groupby(['Country/Region', 'day'])
.agg({'Confirmed': 'sum', 'Deaths': 'sum', 'Recovered': 'sum'})
.reset_index()
)
country_df.shape
country_df['infected'] = country_df['Confirmed'].diff()
country_df.head()
infected_population = country_df.loc[2:]['infected']
nb_steps = len(infected_population)
###Output
_____no_output_____
###Markdown
Model with default parameters
###Code
beta, gamma, mu,teta = [0.01, 0.1, 0.05, 0.07]
def p(t, y):
S = y[0]
C = y[1]
I = y[2]
R = y[3]
F = y[4]
return([-beta*S*I, beta*S*I-C*teta, C*teta-gamma*I-mu*I, gamma*I,mu*I])
from scipy.integrate import solve_ivp
solution = solve_ivp(p, [0, nb_steps], [total_population, 1, 1,0,0],t_eval=np.arange(0,nb_steps,1))
solution
fig = plt.figure(figsize=(12,5),facecolor='#dddddd')
plt.plot(solution.t,solution.y[0],'b')
plt.plot(solution.t,solution.y[1],'c')
plt.plot(solution.t,solution.y[2],'g')
plt.plot(solution.t,solution.y[3],'m')
plt.plot(solution.t,solution.y[4], 'r')
plt.plot(solution.t, infected_population, "k*:")
plt.xlabel('Time(days)')
plt.ylabel('Number')
plt.grid("True")
plt.legend(["Susceptible","Contamined","Infected","Recovered","Fatal","Original data"])
###Output
_____no_output_____
###Markdown
Model with optimized parameters
###Code
def sumsq_error(parameters):
    beta, gamma, mu, teta = parameters
    # 5-compartment model evaluated with the candidate parameters
    def SCIRF(t, y):
        S, C, I, R, F = y
        return([-beta*S*I, beta*S*I - C*teta, C*teta - gamma*I - mu*I, gamma*I, mu*I])
    solution_optimal = solve_ivp(SCIRF, [0, nb_steps], [total_population, 1, 1, 0, 0], t_eval=np.arange(0, nb_steps, 1))
    return(sum((solution_optimal.y[2] - infected_population)**2))
%%time
from scipy.optimize import minimize
msol = minimize(sumsq_error, [0.01, 0.1, 0.05, 0.07], method='Nelder-Mead')
msol.x
beta, gamma,mu,teta = msol.x
def SCIRF_optimal(t, y):
    # 5-compartment model using the optimized parameters unpacked above
    S, C, I, R, F = y
    return([-beta*S*I, beta*S*I - C*teta, C*teta - gamma*I - mu*I, gamma*I, mu*I])
solution_optimal = solve_ivp(SCIRF_optimal, [0, nb_steps], [total_population, 1, 1, 0, 0], t_eval=np.arange(0, nb_steps, 1))
solution_optimal
fig = plt.figure(figsize=(12,5),facecolor='#dddddd')
plt.plot(solution_optimal.t, solution_optimal.y[0], 'b')
plt.plot(solution_optimal.t, solution_optimal.y[1], 'c')
plt.plot(solution_optimal.t, solution_optimal.y[2], 'g')
plt.plot(solution_optimal.t, solution_optimal.y[3], 'm')
plt.plot(solution_optimal.t, solution_optimal.y[4], 'r')
plt.plot(solution_optimal.t, infected_population, "k*:")
plt.xlabel('Time(days)')
plt.ylabel('Number')
plt.grid("True")
plt.legend(["Susceptible","Contamined","Infected","Recovered","Fatal","Original data"])
###Output
_____no_output_____
###Markdown
Tatiana MAIA FERREIRA, Nicolas ROUSSEAU Bibliography: - https://numbersandshapes.net/post/fitting_sir_to_data_in_python/- https://www.lewuathe.com/covid-19-dynamics-with-sir-model.html The SIR model is a compartmental model used to model the spread of an epidemic over time. The whole population is split into 3 categories: the Susceptible, the Infected and the Recovered (immune to the disease), which together make up the entire population of the country (N). To estimate the model, we rely on the average contact rate in the population as well as the inverse of the average infectious period. We start from the assumption that the population is fixed (no deaths or births). We follow the evolution of the curves of the 3 classes through time, which gradually cross one another. At the beginning, the infected are more numerous than the recovered and grow more strongly. At the end, once the epidemic has died down, the number of recovered accounts for most individuals and has overtaken the number of infected. There are variants of the SIR model. One can notably mention the SIS model, which adds the fact that an individual is initially healthy (S), can then become infected (I) and then be cured (S). If we remove the assumption that recovery is possible, it would be an SI model. We can also mention the Bass diffusion model. Unlike the first 3 mentioned above, this model was initially developed to study the diffusion of products and services, whereas the previous ones are mainly used in medicine. This model characterizes the development of the phenomenon through time and describes the "left-skewed" shape of the spread: the expansion phase is rapid, reaching a peak and then falling back.
###Code
import datetime
import os
import yaml
import numpy as np
import pandas as pd
import matplotlib.pyplot as plt
%matplotlib inline
# Read the environment file
ENV_FILE = '../env.yaml'
with open(ENV_FILE) as f:
    params = yaml.load(f, Loader=yaml.FullLoader)
# Initialize the paths to the files
ROOT_DIR = os.path.dirname(os.path.abspath(ENV_FILE))
DATA_FILE = os.path.join(ROOT_DIR,
params['directories']['processed'],
params['files']['all_data'])
# Read the data file
epidemie_df = (pd.read_csv(DATA_FILE, parse_dates=['Last Update'])
.assign(day=lambda _df: _df['Last Update'].dt.date)
.drop_duplicates(subset=['Country/Region', 'Province/State', 'day'])
[lambda df: df['day'] <= datetime.date(2020, 3, 12)]
)
# Define a function that takes a country and returns the corresponding subset of the initial dataframe
def get_country(self, country):
return (epidemie_df[epidemie_df['Country/Region'] == country]
.groupby(['Country/Region', 'day'])
.agg({'Confirmed': 'sum', 'Deaths': 'sum', 'Recovered': 'sum'})
.reset_index()
)
# Monkey Patch pd.DataFrame
pd.DataFrame.get_country = get_country
spain_df = get_country(epidemie_df, "Spain")
spain_df.head()
# Take the difference between row n and row n-1
spain_df['infected'] = spain_df['Confirmed'].diff()
# Define the SIR model
beta,gamma = [0.01,0.1]
def SIR(t,y):
S = y[0]
I = y[1]
R = y[2]
return([-beta*S*I, beta*S*I-gamma*I, gamma*I])
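# Added sketch (not used below): the SIS variant mentioned in the markdown above, in
# which recovered individuals return directly to the susceptible pool instead of
# moving to a separate R compartment.
def SIS(t, y, beta=beta, gamma=gamma):
    S, I = y
    return [-beta * S * I + gamma * I, beta * S * I - gamma * I]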
# Solve the SIR model
from scipy.integrate import solve_ivp
beta, gamma, alpha = [0.01, 0.1, 0.1]
solution_spain = solve_ivp(SIR, [0, 40], [51_470_000, 1, 0], t_eval=np.arange(0, 40, 1))
solution_spain
def plot_epidemia(solution, infected, susceptible=False):
fig = plt.figure(figsize=(12, 5))
if susceptible:
plt.plot(solution.t, solution.y[0])
plt.plot(solution.t, solution.y[1])
plt.plot(solution.t, solution.y[2])
plt.plot(infected.reset_index(drop=True).index, infected, "k*:")
plt.grid("True")
if susceptible:
plt.legend(["Susceptible", "Infected", "Recovered", "Original Data"])
else:
plt.legend(["Infected", "Recovered", "Original Data"])
plt.show()
plot_epidemia(solution_spain, spain_df.loc[2:]['infected'])
def sumsq_error(parameters):
beta, gamma = parameters
def SIR(t, y):
S = y[0]
I = y[1]
R = y[2]
return([-beta*S*I, beta*S*I-gamma*I, gamma*I])
solution = solve_ivp(SIR, [0, nb_steps-1], [total_population, 1, 0], t_eval=np.arange(0, nb_steps, 1))
return(sum((solution.y[1]-infected_population)**2))
total_population = 51_470_000
infected_population = spain_df.loc[2:]['infected']
nb_steps = len(infected_population)
%%time
from scipy.optimize import minimize
msol = minimize(sumsq_error, [0.001, 0.1], method='Nelder-Mead')
msol.x
beta_optimal = msol.x[0]
gamma_optimal = msol.x[1]
print(beta_optimal)
print(gamma_optimal)
beta = beta_optimal
gamma = gamma_optimal
def SIR(t, y):
S = y[0]
I = y[1]
R = y[2]
return([-beta*S*I, beta*S*I-gamma*I, gamma*I])
solution_spain_optimal = solve_ivp(SIR, [0, 40], [51_470_000*0.1, 1, 0], t_eval=np.arange(0, 40, 1))
solution_spain_optimal
###Output
_____no_output_____ |
notebooks/dataset-projections/64/mnist/mnist-convnet-embedding.ipynb | ###Markdown
Choose GPU (this may not be needed on your computer)
###Code
%env CUDA_DEVICE_ORDER=PCI_BUS_ID
%env CUDA_VISIBLE_DEVICES=3
###Output
env: CUDA_DEVICE_ORDER=PCI_BUS_ID
env: CUDA_VISIBLE_DEVICES=3
###Markdown
load packages
###Code
from tfumap.umap import tfUMAP
import tensorflow as tf
import numpy as np
import matplotlib.pyplot as plt
from tqdm.autonotebook import tqdm
import umap
import pandas as pd
###Output
_____no_output_____
###Markdown
Load dataset
###Code
from tensorflow.keras.datasets import mnist
# load dataset
(train_images, Y_train), (test_images, Y_test) = mnist.load_data()
X_train = (train_images/255.).astype('float32')
X_test = (test_images/255.).astype('float32')
X_train = X_train.reshape((len(X_train), np.product(np.shape(X_train)[1:])))
X_test = X_test.reshape((len(X_test), np.product(np.shape(X_test)[1:])))
# subset a validation set
n_valid = 10000
X_valid = X_train[-n_valid:]
Y_valid = Y_train[-n_valid:]
X_train = X_train[:-n_valid]
Y_train = Y_train[:-n_valid]
# flatten X
X_train_flat = X_train.reshape((len(X_train), np.product(np.shape(X_train)[1:])))
X_test_flat = X_test.reshape((len(X_test), np.product(np.shape(X_test)[1:])))
X_valid_flat= X_valid.reshape((len(X_valid), np.product(np.shape(X_valid)[1:])))
print(len(X_train), len(X_valid), len(X_test))
###Output
50000 10000 10000
###Markdown
define networks
###Code
dims = (28,28,1)
n_components = 64
encoder = tf.keras.Sequential([
tf.keras.layers.InputLayer(input_shape=dims),
tf.keras.layers.Conv2D(
filters=64, kernel_size=3, strides=(2, 2), activation="relu"
),
tf.keras.layers.Conv2D(
filters=128, kernel_size=3, strides=(2, 2), activation="relu"
),
tf.keras.layers.Flatten(),
tf.keras.layers.Dense(units=512, activation="relu"),
tf.keras.layers.Dense(units=512, activation="relu"),
tf.keras.layers.Dense(units=n_components),
])
###Output
_____no_output_____
###Markdown
Create model and train
###Code
embedder = tfUMAP(
direct_embedding=False,
verbose=True,
negative_sample_rate=5,
training_epochs=5,
encoder=encoder,
dims = dims,
)
z = embedder.fit_transform(X_train_flat)
###Output
tfUMAP(dims=(28, 28, 1),
encoder=<tensorflow.python.keras.engine.sequential.Sequential object at 0x7f578ec7ba58>,
negative_sample_rate=5,
optimizer=<tensorflow.python.keras.optimizer_v2.adam.Adam object at 0x7f578ec72cf8>,
tensorboard_logdir='/tmp/tensorboard/20200714-102158',
training_epochs=5)
Construct fuzzy simplicial set
Tue Jul 14 10:21:59 2020 Finding Nearest Neighbors
Tue Jul 14 10:21:59 2020 Building RP forest with 16 trees
Tue Jul 14 10:22:02 2020 parallel NN descent for 16 iterations
0 / 16
1 / 16
2 / 16
3 / 16
4 / 16
Tue Jul 14 10:22:18 2020 Finished Nearest Neighbor Search
Tue Jul 14 10:22:27 2020 Embedding with TensorFlow
###Markdown
Plot model output
###Code
fig, ax = plt.subplots( figsize=(8, 8))
sc = ax.scatter(
z[:, 0],
z[:, 1],
c=Y_train.astype(int)[:len(z)],
cmap="tab10",
s=0.1,
alpha=0.5,
rasterized=True,
)
ax.axis('equal')
ax.set_title("UMAP in Tensorflow embedding", fontsize=20)
plt.colorbar(sc, ax=ax);
###Output
_____no_output_____
###Markdown
View loss
###Code
from tfumap.umap import retrieve_tensors
import seaborn as sns
loss_df = retrieve_tensors(embedder.tensorboard_logdir)
loss_df[:3]
ax = sns.lineplot(x="step", y="val", hue="group", data=loss_df[loss_df.variable=='umap_loss'])
ax.set_xscale('log')
ax.set_yscale('log')
###Output
_____no_output_____
###Markdown
Save output
###Code
from tfumap.paths import ensure_dir, MODEL_DIR
output_dir = MODEL_DIR/'projections'/ 'mnist' / '64'/ 'network'
ensure_dir(output_dir)
embedder.save(output_dir)
loss_df.to_pickle(output_dir / 'loss_df.pickle')
np.save(output_dir / 'z.npy', z)
###Output
_____no_output_____ |
Neural Network Basics.ipynb | ###Markdown
Keras Take Log of Target VariableA good rule of thumb when training neural networks for regression on a skewed, positive target is to take the log of the target. You can also scale the target to between 0 and 1.
###Code
y = np.array(data.pop('fare_amount'))
log_y = np.log(y)
max_log_y = log_y.max()
plt.figure(figsize = (10, 6))
sns.distplot(y);
plt.title("Distribution of Target");
plt.figure(figsize = (10, 6))
sns.distplot(log_y);
plt.title("Distribution of Log of Target");
plt.figure(figsize = (10, 6))
sns.distplot(log_y / max_log_y);
plt.title("Distribution of Log of Target Normalized");
###Output
_____no_output_____
###Markdown
Scale FeaturesNeural networks generally have more stable training when the features are normalized to between 0 and 1. Another common option is to subtract the mean and divide by the standard deviation, known as standardization. However, I find normalizing to between 0 and 1 to be a safer option.
###Code
data.head()
from sklearn.preprocessing import MinMaxScaler
features = list(data.drop(columns = ['fare-bin']).columns)
# Fit on training data and scale test data
scaler = MinMaxScaler()
scaled_data = scaler.fit_transform(data.drop(columns = ['fare-bin']))
scaled_test = scaler.transform(test.drop(columns = ['fare-bin']))
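# Sketch of the alternative mentioned above (standardization: subtract the mean and
# divide by the standard deviation). Not used for the rest of the notebook, which
# keeps the min-max scaled features.
from sklearn.preprocessing import StandardScaler

std_scaler = StandardScaler()
scaled_data_std = std_scaler.fit_transform(data.drop(columns = ['fare-bin']))
scaled_test_std = std_scaler.transform(test.drop(columns = ['fare-bin']))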
###Output
_____no_output_____
###Markdown
Create Stratified Validation Set We'll create a separate set for validating our model. When we split the data, we'll stratify it by the binned fare amount in order to have the same distribution of the target in both the training and validation data.
###Code
from sklearn.model_selection import train_test_split
X_train, X_valid, y_train, y_valid = train_test_split(scaled_data, log_y, random_state = RSEED,
                                                      test_size = 1_000_000, stratify = data['fare-bin'])
print('Length of training: ', X_train.shape[0])
print('Length of testing: ', X_valid.shape[0])
###Output
Length of training: 3892578
Length of testing: 1000000
###Markdown
Putting Together the ModelOur model will be a dense, fully-connected deep neural network. We will build it using the `Sequential` API. The `Sequential` framework allows us to build a model by adding one layer at a time. (The other option is the `Functional` framework which allows greater control over the model at the cost of increased building time. For the differences refer to [this article](https://jovianlin.io/keras-models-sequential-vs-functional/)).
###Code
from keras import layers, models, optimizers, losses, metrics
from keras import backend as K
###Output
Using TensorFlow backend.
###Markdown
Build ModelBuilding a deep neural network in Keras is pleasantly simple. Using the Sequential API, we can add one layer at a time. For each layer, we need to specify:* The number of neurons* The activation functionThe first layer must also have an `input_shape` or an `input_dim` which in this case is the number of features in our data. The final layer has no activation function because it provides the final estimate. Dense LayersAll the layers are dense, indicating that the inputs are connected to every neuron in the layer. This means that each neuron has one weight for every input. A dense network is the only way to handle structured data (at the moment). DropoutDropout is an effective technique for regularizing a neural network (preventing overfitting). It randomly sets the outputs of a fraction of the neurons to 0 for each training batch. This works because it builds resiliency into the network. We'll use a dropout layer after each Dense layer.
###Code
model = models.Sequential()
# Input layer
model.add(layers.Dense(16, input_dim = scaled_data.shape[1], activation = 'relu', name = 'input'))
model.add(layers.Dropout(0.5))
# Hidden layers
model.add(layers.Dense(32, activation = 'relu', name = 'hidden-1'))
model.add(layers.Dropout(0.5))
model.add(layers.Dense(64, activation = 'relu', name = 'hidden-2'))
model.add(layers.Dropout(0.5))
model.add(layers.Dense(128, activation = 'relu', name = 'hidden-3'))
model.add(layers.Dropout(0.5))
model.add(layers.Dense(1024, activation = 'relu', name = 'hidden-4'))
model.add(layers.Dropout(0.5))
# Prediction layer
model.add(layers.Dense(1, activation = None, name = 'output'))
model.summary()
###Output
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
input (Dense) (None, 16) 480
_________________________________________________________________
dropout_1 (Dropout) (None, 16) 0
_________________________________________________________________
hidden-1 (Dense) (None, 32) 544
_________________________________________________________________
dropout_2 (Dropout) (None, 32) 0
_________________________________________________________________
hidden-2 (Dense) (None, 64) 2112
_________________________________________________________________
dropout_3 (Dropout) (None, 64) 0
_________________________________________________________________
hidden-3 (Dense) (None, 128) 8320
_________________________________________________________________
dropout_4 (Dropout) (None, 128) 0
_________________________________________________________________
hidden-4 (Dense) (None, 1024) 132096
_________________________________________________________________
dropout_5 (Dropout) (None, 1024) 0
_________________________________________________________________
output (Dense) (None, 1) 1025
=================================================================
Total params: 144,577
Trainable params: 144,577
Non-trainable params: 0
_________________________________________________________________
###Markdown
Custom Scoring FunctionKeras does not offer the root mean squared error as one of the scoring functions. Instead, we can write our own custom scoring function to calculate the rmse. I've found that training with a custom scoring metric can be unstable, so we'll actually train with the `mean_squared_error` on the log of the targets. We'll use our custom functions to put the score in a range that we can use for comparison.The first function calculates the root of the mean squared error. The second function converts the predictions and the true values back to the original range. The exponentiation undoes the logarithm to put the values back on the original scale.
###Code
# Root mean squared error calculation
def root_mean_squared_error(y_true, y_pred):
return K.sqrt(K.mean(K.square(y_pred - y_true), axis=-1))
# Get the predictions back on original scale
def convert_error(y_true, y_pred):
return root_mean_squared_error(K.exp(y_true), K.exp(y_pred))
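# Added sanity check (illustration only, not part of the original training workflow):
# on log-scale values whose exponentials are 10 and 12, the converted error should be
# about 2.0.
print(K.eval(convert_error(K.constant([np.log(10.0)]), K.constant([np.log(12.0)]))))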
###Output
_____no_output_____
###Markdown
Compile ModelTo compile a model, we need two parts: 1. An optimizer: the method used for adjusting the weights on each batch. 2. A loss: the measure of the model's error that is minimized by the optimization procedure. To monitor performance, we'll add in our custom metric.
###Code
model.compile(optimizer = optimizers.Adam(),
loss = losses.mean_squared_error,
metrics = [convert_error])
###Output
_____no_output_____
###Markdown
Model Callbacks Model callbacks control aspects of training and allow us to diagnose a model. Here we'll use three callbacks:1. Early stopping: stop training when the validation loss has not improved for a specified number of epochs * Meant to serve as a method to avoid overfitting2. Model checkpointing: every time the validation loss decreases, save a copy of the model. * Allows us to load the best model back in for making predictions3. Tensorboard monitoring: training results are saved to a directory that we can visualize with Tensorboard * Tensorboard is useful for diagnosing the progress of a model If you are running on a local machine, Tensorboard can be used to visualize results in real time. If not, then we can use the saved Tensorboard results to examine the performance after the training run.
###Code
from datetime import datetime
'_'.join(str(datetime.now())[:-7].split())
from keras.callbacks import EarlyStopping, ModelCheckpoint, TensorBoard
calls = [EarlyStopping(monitor = 'val_loss', patience = 3),
ModelCheckpoint(filepath = f'./checkpoints/{"_".join(str(datetime.now())[:-7].split())}.ckpt',
save_best_only = True, save_weights_only = True),
TensorBoard(log_dir = './logs/')]
###Output
_____no_output_____
###Markdown
Train ModelThe model is now ready to be trained. The primary decision we have to make is the batch size. A larger batch size means faster training, but it can also detrimentally affect model performance. It's good practice to experiment with several different batch sizes.
###Code
model.fit(X_train, y_train, batch_size = 1024, epochs = 50, verbose = 1,
callbacks = calls, validation_data = (X_valid, y_valid))
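# Optional sketch of the batch-size advice above (added illustration, not part of the
# original run). `models.clone_model` gives a fresh copy of the architecture so the
# trial runs do not touch the weights of `model`; epochs are kept tiny to stay cheap.
def try_batch_size(bs, epochs = 1):
    trial = models.clone_model(model)
    trial.compile(optimizer = optimizers.Adam(), loss = losses.mean_squared_error)
    hist = trial.fit(X_train, y_train, batch_size = bs, epochs = epochs,
                     validation_data = (X_valid, y_valid), verbose = 0)
    return hist.history['val_loss'][-1]
# Example usage (commented out so the notebook's original training run is unchanged):
# print({bs: try_batch_size(bs) for bs in [256, 1024, 4096]})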
# model.load_weights('./checkpoints/2018-09-07_16:07:02.ckpt')
valid_log_predictions = model.predict(X_valid).reshape((-1))
valid_predictions = np.exp(valid_log_predictions)
rmse = np.sqrt(np.mean(np.square(valid_predictions - np.exp(y_valid))))
mape = 100 * np.mean(abs(valid_predictions - np.exp(y_valid)) / np.exp(y_valid))
print(f'Validation rmse = {round(rmse, 3)}')
print(f'Validation mape = {round(mape, 3)}')
model.evaluate(X_valid, y_valid)
data.head()
data.columns
to_keep = ['abs_lat_diff', 'abs_lon_diff','haversine', 'pickup_frac_day',
'pickup_frac_week', 'pickup_frac_month', 'pickup_frac_year', 'duration']
scaled_data = scaler.fit_transform(data[to_keep])
scaled_test = scaler.transform(test[to_keep])
X_train, X_valid, y_train, y_valid = train_test_split(scaled_data, log_y, random_state = RSEED,
test_size = 1_000_000)
def get_model():
model = models.Sequential()
# Input layer
model.add(layers.Dense(16, input_dim = scaled_data.shape[1], activation = 'relu', name = 'input'))
model.add(layers.Dropout(0.5))
# Hidden layers
model.add(layers.Dense(32, activation = 'relu', name = 'hidden-1'))
model.add(layers.Dropout(0.5))
model.add(layers.Dense(64, activation = 'relu', name = 'hidden-2'))
model.add(layers.Dropout(0.5))
model.add(layers.Dense(128, activation = 'relu', name = 'hidden-3'))
model.add(layers.Dropout(0.5))
model.add(layers.Dense(1024, activation = 'relu', name = 'hidden-4'))
model.add(layers.Dropout(0.5))
# Prediction layer
model.add(layers.Dense(1, activation = None, name = 'output'))
calls = [EarlyStopping(monitor = 'val_loss', patience = 3),
ModelCheckpoint(filepath = f'./checkpoints/{"_".join(str(datetime.now())[:-7].split())}.ckpt',
save_best_only = True, save_weights_only = True),
TensorBoard(log_dir = './logs/')]
model.compile(optimizer = optimizers.Adam(),
loss = losses.mean_squared_error,
metrics = [convert_error])
return model, calls
model, calls = get_model()
model.fit(X_train, y_train, batch_size = 1024, epochs = 10,
callbacks = calls, validation_data = (X_valid, y_valid))
model.load_weights('checkpoints/2018-09-07_18:10:33.ckpt')
model.evaluate(X_valid, y_valid)
to_keep = ['haversine', 'pickup_frac_day', 'pickup_frac_week',
'pickup_frac_month', 'pickup_frac_year', 'duration']
scaled_data = scaler.fit_transform(data[to_keep])
scaled_test = scaler.transform(test[to_keep])
X_train, X_valid, y_train, y_valid = train_test_split(scaled_data, log_y, random_state = RSEED,
test_size = 1_000_000)
model, calls = get_model()
model.fit(X_train, y_train, batch_size = 1024, epochs = 10,
callbacks = calls, validation_data = (X_valid, y_valid))
rmse = np.sqrt(np.mean(np.square(valid_log_predictions - y_valid)))
rmse
sns.distplot(valid_log_predictions)
valid = np.exp(y_valid)
sns.distplot(valid)
sns.distplot(np.exp(y_train))
sns.distplot(valid_predictions)
np.mean(np.square(valid_predictions - np.exp(y_valid)))
sns.distplot(valid_log_predictions)
sns.distplot(valid_predictions)
rmse = np.sqrt(np.mean(np.square(valid_predictions - y_valid)))
rmse
y_valid
valid_predictions.reshape((-1)) - y_valid
###Output
_____no_output_____
###Markdown
Neural Network Basics 0. Deep learning notation 0.1 Neural network notation**General comments**:- A superscript $(i)$ denotes the $i$-th training example; a superscript $[l]$ denotes the $l$-th layer.**Sizes**:- $m$: number of examples in the dataset- $n_x$: input size (number of features)- $n_y$: output size (number of classes)- $n_h^{[l]}$: number of hidden units in layer $l$- $L$: total number of layers in the networkIn this way, in a for loop we can define $n_x = n_h^{[0]}, n_y = n_h^{[L+1]}$**Objects**- $X \in \mathbb{R}^{n_x \times m}$ is the input matrix- $x^{(i)} \in \mathbb{R}^{n_x}$ is the $i$-th example, represented as an $n_x$-dimensional column vector- $Y \in \mathbb{R}^{n_y \times m}$ is the label matrix- $y^{(i)} \in \mathbb{R}^{n_y}$ is the label of the $i$-th example, represented as an $n_y$-dimensional column vector- $W^{[l]} \in \mathbb{R}^{\text{units in next layer} \times \text{units in previous layer}}$ is the weight matrix, with the superscript $[l]$ indicating the layer- $b^{[l]} \in \mathbb{R}^{\text{units in next layer}}$ is the bias vector of layer $l$- $\hat{y} \in \mathbb{R}^{n_y}$ is the prediction vector, which can also be written $a^{[L]}$**Common forward-propagation formulas**- $ a = g^{[l]}(W_xx^{(i)} + b_1) = g^{[l]}(z_1)$, where $g^{[l]}$ is the activation function of layer $l$- $ \hat{y}^{(i)} = softmax(W_hh + b_2) $- General activation formula: $a_j^{[l]} = g^{[l]}(\sum_k w_{jk}^{[l]}a_k^{[l-1]} + b_j^{[l]}) = g^{[l]}(z_j^{[l]}) $- $J(x,W,b,y)$ or $J(\hat{y}, y)$ denotes the cost function**Examples of cost functions**- $J_{CE}(\hat{y}, y) = -\sum_{i=0}^m y^{(i)}log\hat{y}^{(i)}$- $J_1(\hat{y}, y) = \sum_{i=0}^m |y^{(i)} - \hat{y}^{(i)}|$ 0.2 Deep learning graph notationFor neural networks represented with a graph structure- nodes represent inputs, activation units, or outputs- edges represent weights or biases 0.3 Important notes!For computational convenience, neural networks differ from traditional machine learning in two notable ways at the level of representation:- Compared with traditional machine learning, the input matrix $X$ of a neural network is the transposed $n \times m$ matrix, where each column is an example and each row is a feature. Traditional machine learning conventionally takes $X$ to be an $m \times n$ matrix, where each row is an example and each column is a feature. Correspondingly, the label matrix $y$ is also transposed.- In traditional machine learning, each input vector usually gets an extra variable $x_0=1$, making the input matrix an $m \times (n+1)$ matrix, so that the product of the input vector and the weights can be written $\hat{y} = \theta^Tx$. Neural networks usually keep the bias term and the weights separate, which amounts to $b=\theta_0$ and $w = [\theta_1;\theta_2;...;\theta_n]$. 1. Binary classificationIn a binary classification problem, the output is a discrete binary value. Example: deciding whether a picture contains a catThe goal here is to train a classifier that takes an image as input, represented as a feature vector $x$, and predicts whether the corresponding label $y$ is 1 or 0. Here 0 and 1 mean that the picture contains a cat (1) or does not contain a cat (0).In a computer, an image is usually stored as three separate matrices, corresponding to the red, green and blue channels of the image. The three matrices have the same size as the image in pixels; for example, the image above is 64 by 64 pixels, so the three (RGB) matrices are each $64 \times 64$. The values in the matrices represent the intensity of the corresponding colour channel at each pixel, and all the values of the three matrices together form an $n$-dimensional feature vector. In pattern recognition and machine learning, a feature vector represents an object; in this example, whether the picture contains a cat or not. To create this feature vector $x$, all the pixel intensity values are unrolled. The dimension of the feature vector $x$ is $n_x = 64 \times 64 \times 3 = 12288$. 2. Logistic regressionLogistic regression is a learning algorithm for binary classification problems in supervised learning. The goal of logistic regression is to minimize the error between the predictions and the labels of the training data. Continuing with the cat-picture classifier: given a feature vector $x$, the logistic regression algorithm outputs the probability that the picture contains a cat$$\hat{y} = P(y=1|x), \text{ where } 0 \leq \hat{y} \leq 1$$Logistic regression involves the following parameters:- Input feature vector: $x \in \mathbb{R}^{n_x}$, where $n_x$ is the number of features- Training label: $y \in \{0, 1\}$- Weights: $w \in \mathbb{R}^{n_x}$, where $n_x$ is the number of features- Bias: $b \in \mathbb{R}$- Output: $\hat{y} = \sigma(w^Tx+b)$- Sigmoid function: $s = \sigma(w^Tx+b) = \sigma(z) = \frac{1}{1+e^{-z}}$$(w^Tx+b)$ is a linear function; since we need a probability here, the output must lie in the interval $[0,1]$, which is why the sigmoid function is used. The range of the sigmoid function is $[0,1]$, as shown in the figure above. Looking at the graph of the sigmoid function, we can observe the following properties:- if $z$ is a very large positive number, then $\sigma(z)=1$- if $z$ is very small, i.e. a very large negative number, then $\sigma(z)=0$- if $z=0$, then $\sigma(z) = 0.5$ 3. Cost function for logistic regressionTo train the model and obtain the parameters $w$ and $b$, we first need to define a **cost function**.**Loss function**: the loss function measures the discrepancy between the prediction ($\hat{y}^{(i)}$) and the desired output ($y^{(i)}$). In other words, the loss function computes the error for a single training example.$$ L(\hat{y}^{(i)}, y^{(i)}) = \frac{1}{2}(\hat{y}^{(i)} - y^{(i)})^2 $$$$ L(\hat{y}^{(i)}, y^{(i)}) = -(y^{(i)}log(\hat{y}^{(i)}) + (1-y^{(i)})log(1-\hat{y}^{(i)}))$$The first, squared-error loss would lead to a non-convex optimization objective, so logistic regression usually uses the second, log loss.**Cost function**: the cost function is the average of the loss function over all training examples in the training set. The final parameters $w$ and $b$ should minimize the overall cost function.$$ J(w, b) = \frac{1}{m}\sum_{i=1}^m L(\hat{y}^{(i)}, y^{(i)}) = -\frac{1}{m}\sum_{i=1}^m[y^{(i)}log(\hat{y}^{(i)}) + (1-y^{(i)})log(1-\hat{y}^{(i)})] $$ 4. Derivatives, partial derivatives and the chain ruleA review of derivatives of single-variable functions, partial derivatives of multivariate functions, and the chain rule; not elaborated here. 5. Gradient descentTake the partial derivatives of the cost function and update the parameters according to the gradients and the learning rate. Fairly basic; not elaborated here. 6. VectorizationWhen writing deep learning algorithms with numpy, explicit for loops should be avoided as much as possible. numpy provides many convenient vectorized operations, and using them is much faster than explicit for loops. numpy's underlying C or Fortran code exploits the CPU's SIMD parallel instructions, achieving single-instruction-multiple-data parallelism at the CPU level and making better use of the CPU's parallel capabilities. The same trick applies to GPUs; in fact, GPUs handle such SIMD-style workloads even better than CPUs. 7. Show Me The CodeHere we will build a logistic regression classifier to recognize pictures containing cats.**Requirements:**- Unless explicitly stated, do not use loops in the code**Through this programming exercise, you will learn to:**- Build the basic architecture of a learning algorithm, including: - Initializing parameters - Computing the cost function and its gradients - Using an optimization algorithm (gradient descent)- Combine the three functions above into a model function 7.1 Third-party packagesFirst, run the code block below to import the packages needed for this exercise.
Show Me The Code在这里,我们将构建一个逻辑回归分类器,用来识别包含猫的图片。**要求:**- 除非明确指出,在代码中不使用循环**通过这个编程练习,可以学到:**- 构建学习算法的基本架构,包括: - 初始化参数 - 计算成本函数及其梯度 - 使用优化算法(梯度下降)- 将上述三个函数组合为模型函数 7.1 三方包首先,运行下面的代码块,来引入在这个编程练习中所需要的包。 - [numpy](www.numpy.org) 是Python生态圈中进行科学计算的基础包。- [h5py](http://www.h5py.org) 是和存储为H5文件的数据集进行交互的通用包。- [matplotlib](http://matplotlib.org) 是Python生态圈中著名的绘图包。- [PIL](http://www.pythonware.com/products/pil/) 和 [scipy](https://www.scipy.org/) 用来使用图片对模型进行测试。
###Code
import numpy as np
import matplotlib.pyplot as plt
import h5py
import scipy
from PIL import Image
from scipy import ndimage
from lr_utils import load_dataset
%matplotlib inline
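# Added illustration for the vectorization discussion above (not part of the original
# assignment code): the same dot product computed with an explicit loop and with np.dot.
import time

_v1 = np.random.rand(1000000)
_v2 = np.random.rand(1000000)

_t0 = time.time()
_loop_result = 0.0
for _i in range(len(_v1)):
    _loop_result += _v1[_i] * _v2[_i]
_t_loop = time.time() - _t0

_t0 = time.time()
_vec_result = np.dot(_v1, _v2)
_t_vec = time.time() - _t0

print("loop: %.4fs, vectorized: %.4fs" % (_t_loop, _t_vec))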
###Output
_____no_output_____
###Markdown
7.2 Problem overview**Problem statement**: You are given a dataset ("data.h5") containing: - a training set of m_train images labelled as cat (y=1) or non-cat (y=0) - a test set of m_test images labelled as cat or non-cat - each image is represented as a matrix of shape (num_px, num_px, 3), where 3 is for the RGB channels. Thus, each image is square (height = num_px, width = num_px).We will build a simple image-recognition algorithm that we hope can correctly classify cats and non-cats. The dataset can be loaded with the code below.
###Code
# Loading the data (cat/non-cat)
train_set_x_orig, train_set_y, test_set_x_orig, test_set_y, classes = load_dataset()
###Output
_____no_output_____
###Markdown
We added the suffix _orig to the image data (both training and test) because we are going to preprocess them. After preprocessing we will obtain train_set_x and test_set_x. (train_set_y and test_set_y do not need any preprocessing.) Each row of train_set_x_orig and test_set_x_orig is an array representing one image. You can visualize an example with the code below; change the value of `index` and re-run to see other images.
###Code
# Example of a picture
index = 25
plt.imshow(train_set_x_orig[index])
print ("y = " + str(train_set_y[:, index]) + ", it's a '" + classes[np.squeeze(train_set_y[:, index])].decode("utf-8") + "' picture.")
###Output
y = [1], it's a 'cat' picture.
###Markdown
Many bugs in deep learning software come from mismatched matrix or vector dimensions. If we always keep an accurate grasp of the dimensions of our matrices and vectors, we can eliminate many bugs during development.**Exercise** Find the values for: - m_train (number of training examples) - m_test (number of test examples) - num_px (height and width of a training image)Remember that `train_set_x_orig` is a numpy array of shape (m_train, num_px, num_px, 3). For instance, you can access `m_train` with `train_set_x_orig.shape[0]`.
###Code
### START CODE HERE ### (≈ 3 lines of code)
m_train = train_set_x_orig.shape[0]
m_test = test_set_x_orig.shape[0]
num_px = train_set_x_orig.shape[1]
### END CODE HERE ###
print ("Number of training examples: m_train = " + str(m_train))
print ("Number of testing examples: m_test = " + str(m_test))
print ("Height/Width of each image: num_px = " + str(num_px))
print ("Each image is of size: (" + str(num_px) + ", " + str(num_px) + ", 3)")
print ("train_set_x shape: " + str(train_set_x_orig.shape))
print ("train_set_y shape: " + str(train_set_y.shape))
print ("test_set_x shape: " + str(test_set_x_orig.shape))
print ("test_set_y shape: " + str(test_set_y.shape))
###Output
Number of training examples: m_train = 209
Number of testing examples: m_test = 50
Height/Width of each image: num_px = 64
Each image is of size: (64, 64, 3)
train_set_x shape: (209, 64, 64, 3)
train_set_y shape: (1, 209)
test_set_x shape: (50, 64, 64, 3)
test_set_y shape: (1, 50)
###Markdown
**Expected output of the code above for m_train, m_test and num_px**: **m_train** 209 **m_test** 50 **num_px** 64 For convenience, we now need to reshape images of shape (num_px, num_px, 3) into numpy arrays of shape (num_px * num_px * 3). After the conversion, each column of the training (test) set is a flattened image, and there are m_train (m_test) columns in total.
###Code
# Reshape the training and test examples
### START CODE HERE ### (≈ 2 lines of code)
train_set_x_flatten = train_set_x_orig.reshape((m_train, num_px * num_px * 3)).T
test_set_x_flatten = test_set_x_orig.reshape((m_test, num_px * num_px * 3)).T
### END CODE HERE ###
print ("train_set_x_flatten shape: " + str(train_set_x_flatten.shape))
print ("train_set_y shape: " + str(train_set_y.shape))
print ("test_set_x_flatten shape: " + str(test_set_x_flatten.shape))
print ("test_set_y shape: " + str(test_set_y.shape))
print ("sanity check after reshaping: " + str(train_set_x_flatten[0:5,0]))
###Output
train_set_x_flatten shape: (12288, 209)
train_set_y shape: (1, 209)
test_set_x_flatten shape: (12288, 50)
test_set_y shape: (1, 50)
sanity check after reshaping: [17 31 56 22 33]
###Markdown
**Expected output**: **train_set_x_flatten shape** (12288, 209) **train_set_y shape** (1, 209) **test_set_x_flatten shape** (12288, 50) **test_set_y shape** (1, 50) **sanity check after reshaping** [17 31 56 22 33] The RGB channel values of each pixel together make up the whole picture, so each pixel is actually a vector of three numbers, each ranging from 0 to 255. One common preprocessing step in machine learning is to standardize the data, meaning that for each feature of the numpy array you subtract its mean and divide by its standard deviation. For image data, it is simpler, and works just as well, to divide every value by 255 (the maximum value of a pixel channel). During training, the weights and bias are first combined with the initial input values to activate the neurons, and then backpropagation with gradient descent is used to train the model. It is important that all features have roughly comparable ranges so that the gradients do not explode. We will see more examples of this in later lessons. Let's standardize our dataset next.
###Code
train_set_x = train_set_x_flatten/255.
test_set_x = test_set_x_flatten/255.
###Output
_____no_output_____
###Markdown
**What you need to remember:**Common steps for preprocessing image data:- Figure out the dimensions and shapes of the data for the problem (m_train, m_test, num_px, ...)- Reshape the datasets so that each example is a vector of shape (num_px \* num_px \* 3, 1)- Standardize the data 7.3 General architecture of the learning algorithmHere we will build the logistic regression algorithm with a neural-network mindset, in order to classify cat pictures. The figure below explains why logistic regression is actually a very simple neural network.**Mathematical expression of the algorithm**: for an example $x^{(i)}$:$$z^{(i)} = w^T x^{(i)} + b \tag{1}$$$$\hat{y}^{(i)} = a^{(i)} = sigmoid(z^{(i)})\tag{2}$$ $$ \mathcal{L}(a^{(i)}, y^{(i)}) = - y^{(i)} \log(a^{(i)}) - (1-y^{(i)} ) \log(1-a^{(i)})\tag{3}$$The cost of the algorithm is the average of the losses over all training examples:$$ J = \frac{1}{m} \sum_{i=1}^m \mathcal{L}(a^{(i)}, y^{(i)})\tag{6}$$**Key steps**: in this exercise, you will carry out the following steps: - Initialize the model parameters - Learn the model parameters by minimizing the cost - Use the learned parameters to make predictions (on the test set) - Analyse the results and conclude 7.4 Building the learning algorithmThe main steps for building a neural network are:1. Define the model structure (such as the number of input features)2. Initialize the model parameters3. Loop: - Compute the current loss (forward propagation) - Compute the current gradients (backward propagation) - Update the parameters (gradient descent)Steps 1-3 are usually built separately and then combined into a single function called `model()`. 7.4.1 - Helper functions**Exercise**: implement the `sigmoid()` function. As shown in the figure above, you need to compute $sigmoid( w^T x + b) = \frac{1}{1 + e^{-(w^T x + b)}}$ to make predictions. Use np.exp().
###Code
# GRADED FUNCTION: sigmoid
def sigmoid(z):
"""
Compute the sigmoid of z
Arguments:
z -- A scalar or numpy array of any size.
Return:
s -- sigmoid(z)
"""
### START CODE HERE ### (≈ 1 line of code)
s = 1 / (1 + np.exp(-z))
### END CODE HERE ###
return s
print ("sigmoid([0, 2]) = " + str(sigmoid(np.array([0,2]))))
###Output
sigmoid([0, 2]) = [ 0.5 0.88079708]
###Markdown
**Expected output**: **sigmoid([0, 2])** [ 0.5 0.88079708] 7.4.2 - Initializing parameters**Exercise:** Implement parameter initialization in the code block below. You have to initialize w as a column vector of zeros. You can use the np.zeros() function.
###Code
# GRADED FUNCTION: initialize_with_zeros
def initialize_with_zeros(dim):
"""
This function creates a vector of zeros of shape (dim, 1) for w and initializes b to 0.
Argument:
dim -- size of the w vector we want (or number of parameters in this case)
Returns:
w -- initialized vector of shape (dim, 1)
b -- initialized scalar (corresponds to the bias)
"""
### START CODE HERE ### (≈ 1 line of code)
w = np.zeros((dim, 1))
b = 0
### END CODE HERE ###
assert(w.shape == (dim, 1))
assert(isinstance(b, float) or isinstance(b, int))
return w, b
dim = 2
w, b = initialize_with_zeros(dim)
print ("w = " + str(w))
print ("b = " + str(b))
###Output
w = [[ 0.]
[ 0.]]
b = 0
###Markdown
**Expected output**: ** w ** [[ 0.] [ 0.]] ** b ** 0 For image inputs, w will be a vector of shape (num_px $\times$ num_px $\times$ 3, 1). 7.4.3 - Forward and backward propagationNow that the parameters are initialized, you can carry out the forward and backward propagation steps to learn the parameters.**Exercise:** Implement the function `propagate()` that computes the cost and its gradients.**Hints**:Forward propagation:- You have X- You compute $A = \sigma(w^T X + b) = (a^{(0)}, a^{(1)}, ..., a^{(m-1)}, a^{(m)})$- You compute the cost function: $J = -\frac{1}{m}\sum_{i=1}^{m}y^{(i)}\log(a^{(i)})+(1-y^{(i)})\log(1-a^{(i)})$Here are the two formulas for computing the gradients: $$ \frac{\partial J}{\partial w} = \frac{1}{m}X(A-Y)^T\tag{7}$$$$ \frac{\partial J}{\partial b} = \frac{1}{m} \sum_{i=1}^m (a^{(i)}-y^{(i)})\tag{8}$$
###Code
# GRADED FUNCTION: propagate
def propagate(w, b, X, Y):
"""
Implement the cost function and its gradient for the propagation explained above
Arguments:
w -- weights, a numpy array of size (num_px * num_px * 3, 1)
b -- bias, a scalar
X -- data of size (num_px * num_px * 3, number of examples)
Y -- true "label" vector (containing 0 if non-cat, 1 if cat) of size (1, number of examples)
Return:
cost -- negative log-likelihood cost for logistic regression
dw -- gradient of the loss with respect to w, thus same shape as w
db -- gradient of the loss with respect to b, thus same shape as b
Tips:
- Write your code step by step for the propagation. np.log(), np.dot()
"""
m = X.shape[1]
# FORWARD PROPAGATION (FROM X TO COST)
### START CODE HERE ### (≈ 2 lines of code)
A = sigmoid(np.dot(w.T, X) + b) # compute activation
cost = -np.mean(Y * np.log(A) + (1 - Y) * np.log(1 - A)) # compute cost
### END CODE HERE ###
# BACKWARD PROPAGATION (TO FIND GRAD)
### START CODE HERE ### (≈ 2 lines of code)
dw = np.dot(X, (A - Y).T) / m
db = np.mean(A - Y)
### END CODE HERE ###
assert(dw.shape == w.shape)
assert(db.dtype == float)
cost = np.squeeze(cost)
assert(cost.shape == ())
grads = {"dw": dw,
"db": db}
return grads, cost
w, b, X, Y = np.array([[1.],[2.]]), 2., np.array([[1.,2.,-1.],[3.,4.,-3.2]]), np.array([[1,0,1]])
grads, cost = propagate(w, b, X, Y)
print ("dw = " + str(grads["dw"]))
print ("db = " + str(grads["db"]))
print ("cost = " + str(cost))
###Output
dw = [[ 0.99845601]
[ 2.39507239]]
db = 0.00145557813678
cost = 5.80154531939
###Markdown
**Expected output**: ** dw ** [[ 0.99845601] [ 2.39507239]] ** db ** 0.00145557813678 ** cost ** 5.801545319394553 d) Optimization- You have initialized your parameters- You are also able to compute the cost function and its gradients- Now, you want to update the parameters using gradient descent**Exercise:** Write down the optimization function. The goal is to learn the parameters $w$ and $b$ by minimizing the cost function $J$. For a parameter $\theta$, the gradient descent update rule is $ \theta = \theta - \alpha \text{ } d\theta$, where $\alpha$ is the learning rate.
###Code
# GRADED FUNCTION: optimize
def optimize(w, b, X, Y, num_iterations, learning_rate, print_cost = False):
"""
This function optimizes w and b by running a gradient descent algorithm
Arguments:
w -- weights, a numpy array of size (num_px * num_px * 3, 1)
b -- bias, a scalar
X -- data of shape (num_px * num_px * 3, number of examples)
Y -- true "label" vector (containing 0 if non-cat, 1 if cat), of shape (1, number of examples)
num_iterations -- number of iterations of the optimization loop
learning_rate -- learning rate of the gradient descent update rule
print_cost -- True to print the loss every 100 steps
Returns:
params -- dictionary containing the weights w and bias b
grads -- dictionary containing the gradients of the weights and bias with respect to the cost function
costs -- list of all the costs computed during the optimization, this will be used to plot the learning curve.
Tips:
You basically need to write down two steps and iterate through them:
1) Calculate the cost and the gradient for the current parameters. Use propagate().
2) Update the parameters using gradient descent rule for w and b.
"""
costs = []
for i in range(num_iterations):
# Cost and gradient calculation (≈ 1-4 lines of code)
### START CODE HERE ###
grads, cost = propagate(w, b, X, Y)
### END CODE HERE ###
# Retrieve derivatives from grads
dw = grads["dw"]
db = grads["db"]
# update rule (≈ 2 lines of code)
### START CODE HERE ###
w = w - learning_rate * dw
b = b - learning_rate * db
### END CODE HERE ###
# Record the costs
if i % 100 == 0:
costs.append(cost)
# Print the cost every 100 training examples
if print_cost and i % 100 == 0:
print ("Cost after iteration %i: %f" %(i, cost))
params = {"w": w,
"b": b}
grads = {"dw": dw,
"db": db}
return params, grads, costs
params, grads, costs = optimize(w, b, X, Y, num_iterations= 100, learning_rate = 0.009, print_cost = False)
print ("w = " + str(params["w"]))
print ("b = " + str(params["b"]))
print ("dw = " + str(grads["dw"]))
print ("db = " + str(grads["db"]))
###Output
w = [[ 0.19033591]
[ 0.12259159]]
b = 1.92535983008
dw = [[ 0.67752042]
[ 1.41625495]]
db = 0.219194504541
###Markdown
**Expected output**: **w** [[ 0.19033591] [ 0.12259159]] **b** 1.92535983008 **dw** [[ 0.67752042] [ 1.41625495]] **db** 0.219194504541 **Exercise:** The previous function outputs the learned parameters w and b. We can use w and b to predict the labels of a dataset X. Implement the `predict()` function in two steps:1. Compute $\hat{Y} = A = \sigma(w^T X + b)$2. Convert the entries to 0 (if the activation is <= 0.5) or 1 (if the activation is > 0.5), and store the predictions in the vector `Y_prediction`.
###Code
# GRADED FUNCTION: predict
def predict(w, b, X):
'''
Predict whether the label is 0 or 1 using learned logistic regression parameters (w, b)
Arguments:
w -- weights, a numpy array of size (num_px * num_px * 3, 1)
b -- bias, a scalar
X -- data of size (num_px * num_px * 3, number of examples)
Returns:
Y_prediction -- a numpy array (vector) containing all predictions (0/1) for the examples in X
'''
m = X.shape[1]
Y_prediction = np.zeros((1,m))
w = w.reshape(X.shape[0], 1)
# Compute vector "A" predicting the probabilities of a cat being present in the picture
### START CODE HERE ### (≈ 1 line of code)
A = sigmoid(np.dot(w.T, X) + b)
### END CODE HERE ###
for i in range(A.shape[1]):
# Convert probabilities A[0,i] to actual predictions p[0,i]
### START CODE HERE ### (≈ 4 lines of code)
Y_prediction[0][i] = 1 if A[0][i] > 0.5 else 0
### END CODE HERE ###
assert(Y_prediction.shape == (1, m))
return Y_prediction
w = np.array([[0.1124579],[0.23106775]])
b = -0.3
X = np.array([[1.,-1.1,-3.2],[1.2,2.,0.1]])
print ("predictions = " + str(predict(w, b, X)))
###Output
predictions = [[ 1. 1. 0.]]
###Markdown
**Expected output**: **predictions** [[ 1. 1. 0.]] **Recap:**We have implemented several functions that:- Initialize (w,b)- Iteratively optimize the loss to learn the parameters (w,b): - compute the cost and its gradients - update the parameters with gradient descent- Use the learned parameters (w,b) to predict the labels for a given dataset 7.5 Merging all the functions into a modelPut all the functions together in the right order to form the model.**Exercise:** Implement the model function, using the following notation: - Y_prediction for the predictions on the test set - Y_prediction_train for the predictions on the training set - w, costs, grads for the outputs of the optimization
###Code
# GRADED FUNCTION: model
def model(X_train, Y_train, X_test, Y_test, num_iterations = 2000, learning_rate = 0.5, print_cost = False):
"""
Builds the logistic regression model by calling the function you've implemented previously
Arguments:
X_train -- training set represented by a numpy array of shape (num_px * num_px * 3, m_train)
Y_train -- training labels represented by a numpy array (vector) of shape (1, m_train)
X_test -- test set represented by a numpy array of shape (num_px * num_px * 3, m_test)
Y_test -- test labels represented by a numpy array (vector) of shape (1, m_test)
num_iterations -- hyperparameter representing the number of iterations to optimize the parameters
learning_rate -- hyperparameter representing the learning rate used in the update rule of optimize()
print_cost -- Set to true to print the cost every 100 iterations
Returns:
d -- dictionary containing information about the model.
"""
### START CODE HERE ###
# initialize parameters with zeros (≈ 1 line of code)
w, b = initialize_with_zeros(X_train.shape[0])
# Gradient descent (≈ 1 line of code)
parameters, grads, costs = optimize(w, b, X_train, Y_train, num_iterations, learning_rate, print_cost)
# Retrieve parameters w and b from dictionary "parameters"
w = parameters["w"]
b = parameters["b"]
# Predict test/train set examples (≈ 2 lines of code)
Y_prediction_test = predict(w, b, X_test)
Y_prediction_train = predict(w, b, X_train)
### END CODE HERE ###
# Print train/test Errors
print("train accuracy: {} %".format(100 - np.mean(np.abs(Y_prediction_train - Y_train)) * 100))
print("test accuracy: {} %".format(100 - np.mean(np.abs(Y_prediction_test - Y_test)) * 100))
d = {"costs": costs,
"Y_prediction_test": Y_prediction_test,
"Y_prediction_train" : Y_prediction_train,
"w" : w,
"b" : b,
"learning_rate" : learning_rate,
"num_iterations": num_iterations}
return d
###Output
_____no_output_____
###Markdown
Run the code block below to train the model.
###Code
d = model(train_set_x, train_set_y, test_set_x, test_set_y, num_iterations = 2000, learning_rate = 0.005, print_cost = True)
###Output
Cost after iteration 0: 0.693147
Cost after iteration 100: 0.584508
Cost after iteration 200: 0.466949
Cost after iteration 300: 0.376007
Cost after iteration 400: 0.331463
Cost after iteration 500: 0.303273
Cost after iteration 600: 0.279880
Cost after iteration 700: 0.260042
Cost after iteration 800: 0.242941
Cost after iteration 900: 0.228004
Cost after iteration 1000: 0.214820
Cost after iteration 1100: 0.203078
Cost after iteration 1200: 0.192544
Cost after iteration 1300: 0.183033
Cost after iteration 1400: 0.174399
Cost after iteration 1500: 0.166521
Cost after iteration 1600: 0.159305
Cost after iteration 1700: 0.152667
Cost after iteration 1800: 0.146542
Cost after iteration 1900: 0.140872
train accuracy: 99.04306220095694 %
test accuracy: 70.0 %
###Markdown
**Expected output**: **Cost after iteration 0** 0.693147 $\vdots$ $\vdots$ **Train Accuracy** 99.04306220095694 % **Test Accuracy** 70.0 % **Comment**: The training accuracy is close to 100%. This is a good sanity check: the model is working and has enough capacity to fit the training data. The test accuracy is 70%, which is not bad for such a simple model, given the small dataset and the fact that logistic regression is a linear classifier. Don't worry, we will build better classifiers later. Also, the model is clearly overfitting the training data. Later in the course we will learn how to control overfitting, for example with regularization. Use the code block below (feel free to change `index`) to visualize predictions on the test set.
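(Preview only, not part of this assignment: one common form of regularization, L2 regularization, adds a penalty term such as $\frac{\lambda}{2m}\lVert w \rVert_2^2$ to the cost $J$, which discourages large weights and reduces overfitting.)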
###Code
# Example of a picture that was wrongly classified.
index = 5
plt.imshow(test_set_x[:,index].reshape((num_px, num_px, 3)))
print("y = " + str(test_set_y[0,index]) + ", you predicted that it is a \"" + classes[int(d["Y_prediction_test"][0,index])].decode("utf-8") + "\" picture.")
###Output
y = 0, you predicted that it is a "cat" picture.
###Markdown
Similarly, let's visualize the cost function and the gradients.
###Code
# Plot learning curve (with costs)
costs = np.squeeze(d['costs'])
plt.plot(costs)
plt.ylabel('cost')
plt.xlabel('iterations (per hundreds)')
plt.title("Learning rate =" + str(d["learning_rate"]))
plt.show()
###Output
_____no_output_____
###Markdown
**Interpretation**: The cost keeps decreasing, which shows that the parameters are being learned as expected. The trend also suggests the training cost could be reduced further: try increasing the number of iterations in the code block above and rerunning it. You may find that the training accuracy rises further while the test accuracy drops; this phenomenon is called overfitting. 7.6 Further analysis Choice of learning rate **Reminder**: For gradient descent to work well, the learning rate must be chosen sensibly. The learning rate $\alpha$ determines how large a step each parameter update takes. If the learning rate is too large the updates may overshoot, while a very small learning rate needs many more iterations to converge, so choosing a suitable learning rate is crucial. Let's compare the learning curves for different learning rates. Run the code block below; you can also change the values in `learning_rates` to see what happens.
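(For reference, and assuming the standard gradient-descent step implemented by `optimize()` above: each iteration updates the parameters as $w := w - \alpha \, dw$ and $b := b - \alpha \, db$, so $\alpha$ directly scales the size of every update step.)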
###Code
learning_rates = [0.01, 0.001, 0.0001]
models = {}
for i in learning_rates:
print ("learning rate is: " + str(i))
models[str(i)] = model(train_set_x, train_set_y, test_set_x, test_set_y, num_iterations = 1500, learning_rate = i, print_cost = False)
print ('\n' + "-------------------------------------------------------" + '\n')
for i in learning_rates:
plt.plot(np.squeeze(models[str(i)]["costs"]), label= str(models[str(i)]["learning_rate"]))
plt.ylabel('cost')
plt.xlabel('iterations')
legend = plt.legend(loc='upper center', shadow=True)
frame = legend.get_frame()
frame.set_facecolor('0.90')
plt.show()
###Output
learning rate is: 0.01
train accuracy: 99.52153110047847 %
test accuracy: 68.0 %
-------------------------------------------------------
learning rate is: 0.001
train accuracy: 88.99521531100478 %
test accuracy: 64.0 %
-------------------------------------------------------
learning rate is: 0.0001
train accuracy: 68.42105263157895 %
test accuracy: 36.0 %
-------------------------------------------------------
|
module_6/challenge/main.ipynb | ###Markdown
Challenge 5 In this challenge we will practice dimensionality reduction with PCA and feature selection with RFE. We will use the [Fifa 2019](https://www.kaggle.com/karangadiya/fifa19) _data set_, which originally contains 89 variables for more than 18 thousand players from the FIFA 2019 _game_. > Note: please do not change the names of the answer functions. General _setup_
###Code
from math import sqrt
import pandas as pd
import matplotlib.pyplot as plt
import numpy as np
import scipy.stats as sct
import seaborn as sns
import statsmodels.api as sm
import statsmodels.stats as st
from sklearn.decomposition import PCA
from sklearn.feature_selection import RFE
from sklearn.linear_model import LinearRegression
#from loguru import logger
# Some matplotlib settings.
#%matplotlib inline
from IPython.core.pylabtools import figsize
figsize(12, 8)
sns.set()
fifa = pd.read_csv("fifa.csv")
columns_to_drop = ["Unnamed: 0", "ID", "Name", "Photo", "Nationality", "Flag",
"Club", "Club Logo", "Value", "Wage", "Special", "Preferred Foot",
"International Reputation", "Weak Foot", "Skill Moves", "Work Rate",
"Body Type", "Real Face", "Position", "Jersey Number", "Joined",
"Loaned From", "Contract Valid Until", "Height", "Weight", "LS",
"ST", "RS", "LW", "LF", "CF", "RF", "RW", "LAM", "CAM", "RAM", "LM",
"LCM", "CM", "RCM", "RM", "LWB", "LDM", "CDM", "RDM", "RWB", "LB", "LCB",
"CB", "RCB", "RB", "Release Clause"
]
try:
fifa.drop(columns_to_drop, axis=1, inplace=True)
except KeyError:
print("Columns already dropped")  # the loguru logger import is commented out above
###Output
_____no_output_____
###Markdown
Start your analysis from here
###Code
fifa.head()
fifa.shape
fifa.columns
fifa.isnull().sum()
fifa.dropna(inplace=True)
fifa.isnull().sum()
fifa.shape
###Output
_____no_output_____
###Markdown
Question 1 What fraction of the variance can be explained by the first principal component of `fifa`? Answer as a single float (between 0 and 1) rounded to three decimal places.
###Code
def q1():
pca = PCA()
pca.fit_transform(fifa)
return float(pca.explained_variance_ratio_[0].round(3))
q1()
###Output
_____no_output_____
###Markdown
Question 2 How many principal components do we need to explain 95% of the total variance? Answer as a single integer scalar.
###Code
def q2():
pca = PCA(0.95)
return pca.fit_transform(fifa).shape[1]
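# (Sketch of an equivalent check, if you want to inspect the cumulative curve yourself:)
# cumulative = np.cumsum(PCA().fit(fifa).explained_variance_ratio_)
# n_components_95 = int(np.argmax(cumulative >= 0.95)) + 1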
q2()
###Output
_____no_output_____
###Markdown
Question 3 What are the coordinates (first and second principal components) of the point `x` below? The vector below is already centered. Be careful __not__ to center the vector again (for example, by calling `PCA.transform()` on it). Answer as a tuple of floats rounded to three decimal places.
###Code
x = [0.87747123, -1.24990363, -1.3191255, -36.7341814,
-35.55091139, -37.29814417, -28.68671182, -30.90902583,
-42.37100061, -32.17082438, -28.86315326, -22.71193348,
-38.36945867, -20.61407566, -22.72696734, -25.50360703,
2.16339005, -27.96657305, -33.46004736, -5.08943224,
-30.21994603, 3.68803348, -36.10997302, -30.86899058,
-22.69827634, -37.95847789, -22.40090313, -30.54859849,
-26.64827358, -19.28162344, -34.69783578, -34.6614351,
48.38377664, 47.60840355, 45.76793876, 44.61110193,
49.28911284
]
def q3():
pca = PCA(n_components=2)
pca.fit(fifa)
return tuple(pca.components_.dot(x).round(3))
q3()
###Output
_____no_output_____
###Markdown
Question 4 Perform RFE with a linear regression estimator to select five variables, eliminating them one at a time. Which variables are selected? Answer as a list of variable names.
###Code
def q4():
reg= LinearRegression()
rfe = RFE(reg, n_features_to_select=5)
X = fifa.drop(['Overall'], axis=1)
y = fifa['Overall']
rfe.fit(X, y)
selected_features = pd.DataFrame({'column':X.columns, 'bool': rfe.get_support()})
return selected_features[selected_features['bool'] == True]['column'].to_list()
q4()
###Output
_____no_output_____ |
Car Racing Environment.ipynb | ###Markdown
**Installing the relevant libraries.**
###Code
!pip install gym pyvirtualdisplay
!apt-get install -y xvfb python-opengl ffmpeg
!apt-get update
!apt-get install cmake
!pip install --upgrade setuptools
!pip install ez_setup
!pip install gym[box2d]
###Output
_____no_output_____
###Markdown
**Importing the relevant libraries.**
###Code
import gym
import numpy as np
import matplotlib.pyplot as plt
import random
import cv2
from keras.models import Sequential
from keras.layers import Dense, Dropout, Flatten
from keras.optimizers import Adamax
from keras.layers import Convolution2D
from gym import logger as gymlogger
from gym.wrappers import Monitor
gymlogger.set_level(40)
import glob
import io
import base64
from IPython import display as ipythondisplay
from IPython.display import HTML
from pyvirtualdisplay import Display
from IPython.display import clear_output
###Output
_____no_output_____
###Markdown
**Defining the display to render the openai gym environment.**
###Code
display = Display(visible=0, size=(1400, 900))
display.start()
###Output
_____no_output_____
###Markdown
**Wrapping the environment.**
###Code
def wrap_env(env):
env = Monitor(env, './video', force=True)
return env
env = wrap_env(gym.make("CarRacing-v0"))
observation = env.reset()
###Output
Track generation: 907..1144 -> 237-tiles track
###Markdown
**Defining a function to transform the observed image into a usable image for our convolutional neural network.**
###Code
def transform(obs):
top = obs[:84, 6:90]
top = cv2.cvtColor(top, cv2.COLOR_RGB2GRAY)
top = cv2.threshold(top, 120, 255, cv2.THRESH_BINARY)[1]
top = top.astype('float')/255
return top
###Output
_____no_output_____
###Markdown
**Defining a function to convert the output from our convolutional neural network into an action to feed the environment.**
###Code
def output_to_action(output_value):
gas = 0.0
brake = 0.0
steering = 0.0
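# Mapping implied by the branches below:
#   outputs 0-4 -> steering in {-1.0, -0.5, 0.0, 0.5, 1.0} (gas and brake stay 0)
#   output 5    -> gas   = 1/3
#   output 6    -> brake = 0.5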
if output_value <= 4:
output_value -= 2
steering = float(output_value)/2
elif output_value == 5:
output_value -= 4
gas = float(output_value)/3
elif output_value == 6:
output_value -= 5
brake = float(output_value)/2
else:
print("error")
return [steering, gas, brake]
###Output
_____no_output_____
###Markdown
**Defining our convolutional neural network.**
###Code
def neuralnet():
model = Sequential()
model.add(Convolution2D(24, 5, 5, subsample=(2, 2), input_shape=(84, 84, 1), activation='elu'))
model.add(Convolution2D(36, 5, 5, subsample=(2, 2), activation='elu'))
model.add(Convolution2D(48, 3, 3, activation='elu'))
model.add(Flatten())
model.add(Dense(50, activation='elu'))
model.add(Dense(25, activation='elu'))
model.add(Dense(7, activation = 'linear'))
adamax = Adamax()
model.compile(loss='mse', optimizer = adamax)
model.summary()
return model
class Model:
def __init__(self, env):
self.env = env
self.model = neuralnet()
def predict(self, state):
return self.model.predict(state.reshape(1, 84, 84, 1), verbose=0)[0]
def update(self, state, G):
self.model.fit(state.reshape(1, 84, 84, 1), np.array(G).reshape(-1, 7), epochs=1, verbose=0)
def sample_action(self, state, eps):
qval = self.predict(state)
if np.random.random() < eps:
return random.randint(0, 6), qval
else:
return np.argmax(qval), qval
###Output
_____no_output_____
###Markdown
**Defining a function to play a video of the rendering of each episode to get a visual representation of our agent learning.**
###Code
def show_video():
mp4list = glob.glob('video/*.mp4')
if len(mp4list) > 0:
mp4 = mp4list[0]
video = io.open(mp4, 'r+b').read()
encoded = base64.b64encode(video)
ipythondisplay.display(HTML(data='''<video alt="test" autoplay
controls style="height: 400px;">
<source src="data:video/mp4;base64,{0}" type="video/mp4" />
</video>'''.format(encoded.decode('ascii'))))
else:
print("Could not find video")
###Output
_____no_output_____
###Markdown
**Defining a function to play an episode of the car racing environment, and to update the parameters of our network to allow it to make better control decisions in the future.**
###Code
def play_one(env, model, eps, gamma):
done = False
full_reward_received = False
totalreward = 0
iters = 0
observation = env.reset()
clear_output(wait=True)
while not done:
env.render()
state = transform(observation)
qval_max, qval = model.sample_action(state, eps)
prev_state = state
action = output_to_action(qval_max)
observation, reward, done, info = env.step(action)
state = transform(observation)
qval_next = model.predict(state)
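# Q-learning style target for the action just taken:
# G = r + gamma * max_a' Q(s', a'); only that action's entry of y is replaced
# with G below, so the remaining outputs keep the network's current predictions.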
G = reward + gamma*np.max(qval_next)
y = qval[:]
y[qval_max] = G
model.update(prev_state, y)
totalreward += reward
iters += 1
return totalreward, iters
###Output
_____no_output_____
###Markdown
**When executed, this block of code plays an N number of episodes and renders each episode for visualization purposes. The neural network also learns from each episode that it plays.**
###Code
N = 1
totalrewards = np.empty(N)
model = Model(env)
eps = 0.15
gamma = 0.95
for n in range(N):
env = wrap_env(gym.make("CarRacing-v0"))
totalreward, iters = play_one(env, model, eps, gamma)
totalrewards[n] = totalreward
print("Episode:", n, ", Iters", iters, ", Total Reward:", totalreward, "\n")
env.close()
show_video()
!rm -r video
###Output
WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/tensorflow/python/ops/math_ops.py:3066: to_int32 (from tensorflow.python.ops.math_ops) is deprecated and will be removed in a future version.
Instructions for updating:
Use tf.cast instead.
Episode: 0 , Iters 1000 , Total Reward: 24.9999999999989
|
.ipynb_checkpoints/XGBoost_AFL_Prediction_Preprocessing_2021-checkpoint.ipynb | ###Markdown
Load match related data
###Code
df_2014 = pd.read_csv("data/afl_results_2014.csv")
print(df_2014.shape)
df_2015 = pd.read_csv("data/afl_results_2015.csv")
print(df_2015.shape)
df_2016 = pd.read_csv("data/afl_results_2016.csv")
print(df_2016.shape)
df_2017 = pd.read_csv("data/afl_results_2017.csv")
print(df_2017.shape)
df_2018 = pd.read_csv("data/afl_results_2018.csv")
print(df_2018.shape)
df_2019 = pd.read_csv("data/afl_results_2019.csv")
print(df_2019.shape)
df_2020 = pd.read_csv("data/afl_results_2020.csv")
print(df_2020.shape)
df_2021 = pd.read_csv("data/afl_results_2021.csv")
print(df_2021.shape)
df_2022 = pd.read_csv("data/afl_results_2022.csv")
print(df_2022.shape)
df_all = pd.concat([df_2014, df_2015, df_2016, df_2017, df_2018, df_2019, df_2020, df_2021, df_2022], axis=0)
print(df_all.shape)
df_all.columns
df_fixture = pd.read_csv("data/fixture_2022.csv")
print(df_fixture.shape)
df_fixture.columns
df_next_games_teams = df_fixture[(df_fixture['status'] != "CONCLUDED") & (df_fixture['round.roundNumber'] == 2)]
df_next_games_teams = df_next_games_teams[['home.team.name','away.team.name']]
df_next_games_teams = df_next_games_teams.rename(columns={'home.team.name': 'match.homeTeam.name', 'away.team.name': 'match.awayTeam.name'})
df_next_games_teams
df_all.shape
df_all.sort_values('match.date', inplace=True)
df_all.reset_index(inplace=True)
df_all.drop('index', axis=1, inplace=True)
df_all.tail()
from pandas.plotting import scatter_matrix
scatter_matrix(df_all[df_all.iloc[:,50:56].columns], diagonal='kde', figsize=(14,14));
# HTGDIFF: Home Team Goal Difference
# ATGDIFF: Away Team Goal Difference
df_all['HTGDIFF'] = df_all['homeTeamScore.matchScore.goals'] - df_all['awayTeamScore.matchScore.goals']
df_all['ATGDIFF'] = df_all['awayTeamScore.matchScore.goals'] - df_all['homeTeamScore.matchScore.goals']
###Output
_____no_output_____
###Markdown
Calculate AVG goal difference for home and away team rolling 4 Games
###Code
def avg_goal_diff(df, avg_h_a_diff, a_h_team, a_h_goal_letter):
"""
input:
df = dataframe with all results
avg_h_a_diff = name of the new column
a_h_team = HomeTeam or AwayTeam
a_h_goal_letter = 'H' for home or 'A' for away
output:
avg_per_team = dictionary with with team as key and columns as values with new column H/ATGDIFF
"""
df[avg_h_a_diff] = 0
avg_per_team = {}
all_teams = df[a_h_team].unique()
for t in all_teams:
df_team = df[df[a_h_team]==t].fillna(0)
result = df_team['{}TGDIFF'.format(a_h_goal_letter)].rolling(4).mean()
df_team[avg_h_a_diff] = result
avg_per_team[t] = df_team
return avg_per_team
d_AVGFTHG = avg_goal_diff(df_all, 'AVGHTGDIFF', 'match.homeTeam.name', 'H')
def from_dict_value_to_df(d):
"""
input = dictionary
output = dataframe as part of all the values from the dictionary
"""
df = pd.DataFrame()
for v in d.values():
df = pd.concat([df,v])
return df
df_AVGFTHG = from_dict_value_to_df(d_AVGFTHG)
df_AVGFTHG.sort_index(inplace=True)
d_AVGFTAG = avg_goal_diff(df_AVGFTHG, 'AVGATGDIFF', 'match.awayTeam.name', 'A')
df_all = from_dict_value_to_df(d_AVGFTAG)
df_all.sort_index(inplace=True)
df_all['AVGATGDIFF'].fillna(0, inplace=True)
###Output
_____no_output_____
###Markdown
Add per match game results from last three games
###Code
df_all['goal_diff'] = df_all['homeTeamScore.matchScore.goals'] - df_all['awayTeamScore.matchScore.goals']
for index, row in df_all[df_all['match.status']=='CONCLUDED'].iterrows():
if df_all['goal_diff'][index] > 0:
df_all.at[index,'result'] = 3 # 3 is a win
elif df_all['goal_diff'][index] == 0:
df_all.at[index,'result'] = 2 # 2 is a draw
else:
df_all.at[index,'result'] = 1 # 1 is a loss
df_all.head()
def previous_data(df, h_or_a_team, column, letter, past_n):
"""
input:
df = dataframe with all results
a_h_team = HomeTeam or AwayTeam
column = column selected to get previous data from
output:
team_with_past_dict = dictionary with team as a key and columns as values with new
columns with past value
"""
d = dict()
team_with_past_dict = dict()
all_teams = df[h_or_a_team].unique()
for team in all_teams:
n_games = len(df[df[h_or_a_team]==team])
team_with_past_dict[team] = df[df[h_or_a_team]==team]
for i in range(1, past_n):
d[i] = team_with_past_dict[team].assign(
result=team_with_past_dict[team].groupby(h_or_a_team)[column].shift(i)
).fillna({'{}_X'.format(column): 0})
team_with_past_dict[team]['{}_{}_{}'.format(letter, column, i)] = d[i].result
return team_with_past_dict
def previous_data_call(df, side, column, letter, iterations):
d = previous_data(df, side, column, letter, iterations)
df_result= from_dict_value_to_df(d)
df_result.sort_index(inplace=True)
return df_result
df_last_home_results = previous_data_call(df_all, 'match.homeTeam.name', 'result', 'H', 3)
df_last_away_results = previous_data_call(df_last_home_results, 'match.awayTeam.name', 'result', 'A', 3)
df_last_last_HTGDIFF_results = previous_data_call(df_last_away_results, 'match.homeTeam.name', 'HTGDIFF', 'H', 3)
df_last_last_ATGDIFF_results = previous_data_call(df_last_last_HTGDIFF_results, 'match.awayTeam.name', 'ATGDIFF', 'A', 3)
df_last_AVGFTHG_results = previous_data_call(df_last_last_ATGDIFF_results, 'match.homeTeam.name', 'AVGHTGDIFF', 'H', 2)
df_last_AVGFTAG_results = previous_data_call(df_last_AVGFTHG_results, 'match.awayTeam.name', 'AVGATGDIFF', 'A', 2)
df_all = df_last_AVGFTAG_results.copy()
df_all.shape
df_all
df_matches_numeric = df_all._get_numeric_data()
df_matches_numeric.columns
#df_matches_numeric.drop(['match.homeTeam.timeZone', 'match.awayTeam.timeZone', 'goal_diff', 'result', 'homeTeamScore.matchScore.goals', 'awayTeamScore.matchScore.goals'], axis=1, inplace=True)
df_matches_numeric = df_matches_numeric[['HTGDIFF','ATGDIFF','awayTeamScore.minutesInFront','homeTeamScore.minutesInFront','homeTeamScoreChart.goals','homeTeamScore.matchScore.totalScore','awayTeamScore.matchScore.totalScore','AVGHTGDIFF','round.year','awayTeamScoreChart.goals']]
df_matches_numeric.isnull().sum(axis = 0)
df_norm = (df_matches_numeric - df_matches_numeric.min()) / (df_matches_numeric.max() - df_matches_numeric.min())
df_norm.columns
predictable_columns = [
'HTGDIFF', 'ATGDIFF', 'awayTeamScore.minutesInFront',
'homeTeamScore.minutesInFront', 'homeTeamScoreChart.goals',
'homeTeamScore.matchScore.totalScore',
'awayTeamScore.matchScore.totalScore', 'AVGHTGDIFF', 'round.year',
'awayTeamScoreChart.goals'
]
df_X = df_norm[predictable_columns]
df_X.fillna(0,inplace=True)
# Normal Rounds have 9 games
# Round 24 has 4 games
# Round 25 and 26 have 2 games
# Round 27 has 1 game
# + 9 per match day for normal rounds
#int_for_test = len(df_all)
#int_for_prediction = int_for_test - 9
df_all.shape
#X = df_X.iloc[:int_for_prediction,:]
#print(X.shape)
#Y = df_all.iloc[:int_for_prediction,:]['result']
#print(Y.shape)
#Z = df_X.iloc[int_for_prediction:,:]
#print(Z.shape)
X = df_X
print(X.shape)
Y = df_all['result']
print(Y.shape)
#Z = df_X.iloc[int_for_prediction:,:]
#print(Z.shape)
#X = df_X.iloc[:int_for_prediction,:]
#print(X.shape)
#Y = df_all.iloc[:int_for_prediction,:]['result']
#print(Y.shape)
#Z = df_X.iloc[int_for_prediction:,:]
#print(Z.shape)
#df_next_games_teams = df_all.iloc[int_for_prediction:,:][['match.homeTeam.name', 'match.awayTeam.name']]
#print(df_next_games_teams.shape)
#df_next_games_teams
#df_all[['match.name','result']].tail(9)
df_next_games_teams
# how to make Z for test data
# loop through each new fixture team and get average of historical data? try this
Z = pd.DataFrame()
for index, row in df_next_games_teams.iterrows():
home = row['match.homeTeam.name']
away = row['match.awayTeam.name']
tmp = df_all[(df_all['match.homeTeam.name']==home)&(df_all['match.awayTeam.name']==away)]
tmp = tmp[predictable_columns].mean()
#print("-----------")
#print(tmp)
#print("-----------")
Z = Z.append({'HTGDIFF': tmp[0], 'ATGDIFF': tmp[1], 'awayTeamScore.minutesInFront': tmp[2], 'homeTeamScore.minutesInFront': tmp[3], 'homeTeamScoreChart.goals': tmp[4], 'homeTeamScore.matchScore.totalScore': tmp[5], 'awayTeamScore.matchScore.totalScore': tmp[6], 'AVGHTGDIFF': tmp[7], 'round.year': 2022, 'awayTeamScoreChart.goals': tmp[9]}, ignore_index=True)
Z
X.to_pickle("pickle_files/X.pkl")
Y.to_pickle("pickle_files/Y.pkl")
Z.to_pickle("pickle_files/Z.pkl")
df_next_games_teams.to_pickle("pickle_files/next_games.pkl")
###Output
_____no_output_____ |
examples/notebooks/WWW/maximise_minimum_SINR_BV4.20.ipynb | ###Markdown
Power Assignment in a Wireless Communication System by Robert Gowers, Roger Hill, Sami Al-Izzi, Timothy Pollington and Keith Briggs, from Boyd and Vandenberghe, Convex Optimization, exercise 4.20 page 196. Convex optimization can be used to maximise the minimum signal to interference plus noise ratio (SINR) of a wireless communication system. Consider a system with $n$ transmitters, each with power $p_j \geq 0$, transmitting to $n$ receivers. Let $G_{ij} \geq 0$ denote the path gain from transmitter $j$ to receiver $i$. These path gains form the matrix $G \in \mathbb{R}^{n \times n}$. Each receiver is assigned to a transmitter such that the signal power at receiver $i$ is $S_i = G_{ii}p_i$ and the interference power at receiver $i$ is $I_i = \sum_{k\neq i} G_{ik}p_k$. Given a noise power $\sigma_i$ at each receiver, the SINR at receiver $i$ is $\gamma_i = \frac{S_i}{I_i + \sigma_i}$. The objective is to maximise the minimum SINR of the system under certain power constraints. These constraints are: i - Each transmitter power $p_j \leq P_j^{\text{max}}$ ii - If the transmitters are partitioned into $m$ nonoverlapping groups, $K_1, ..., K_m$, which share a common power supply with total power $P_l^{\text{gp}}$: $\sum_{k\in K_l}p_k \leq P_l^{\text{gp}}$. iii - There is a maximum power that each receiver can receive $P_i^{\text{rc}}$, $\sum_{k=1}^{n}G_{ik}p_k \leq P_i^{\text{rc}}$. The objective function can be rewritten as: minimise $\max_{i=1,...,n}\frac{I_i + \sigma_i}{S_i}$ However, since this is a quasiconvex objective function we cannot solve it directly using CVXPY. Instead we must use a bisection method. First we take the step of rewriting the objective, $\alpha = \gamma^{-1} \geq 0$, as a constraint: $I_i+\sigma_i \leq S_i\alpha$ Then we choose initial lower and upper bounds $L_0$ and $U_0$ for $\alpha$, which should be chosen such that $L < \alpha^* < U$, where $\alpha^*$ is the optimal value of $\alpha$. Starting with an initial value $\alpha_0 = \frac{1}{2}(L_0+U_0)$, feasibility is checked for $\alpha_0$ by using an arbitrary objective function. The new upper and lower bounds are determined from the feasibility: If $\alpha_0$ is feasible then $L_1 = L_0$, $U_1 = \alpha_0$ and $\alpha_1 = \frac{1}{2}(L_1+U_1)$. If $\alpha_0$ is infeasible then $L_1 = \alpha_0$, $U_1 = U_0$ and $\alpha_1 = \frac{1}{2}(L_1+U_1)$. This bisection process is repeated until $U_N - L_N < \epsilon$, where $\epsilon$ is the desired tolerance.
###Code
#!/usr/bin/env python3
# @author: R. Gowers, S. Al-Izzi, T. Pollington, R. Hill & K. Briggs
import cvxpy as cp
import numpy as np
def maxmin_sinr(G, P_max, P_received, sigma, Group, Group_max, epsilon = 0.001):
# find n and m from the size of the path gain matrix
n, m = np.shape(G)
# Checks sizes of inputs
if m != np.size(P_max):
print('Error: P_max dimensions do not match gain matrix dimensions\n')
return 'Error: P_max dimensions do not match gain matrix dimensions\n', np.nan, np.nan, np.nan
if n != np.size(P_received):
print('Error: P_received dimensions do not match gain matrix dimensions\n')
return 'Error: P_received dimensions do not match gain matrix dimensions', np.nan, np.nan, np.nan
if n != np.size(sigma):
print('Error: σ dimensions do not match gain matrix dimensions\n')
return 'Error: σ dimensions do not match gain matrix dimensions', np.nan, np.nan, np.nan
#I = np.zeros((n,m))
#S = np.zeros((n,m))
delta = np.identity(n)
S = G*delta # signal power matrix
I = G-S # interference power matrix
# group matrix: number of groups by number of transmitters
num_groups = int(np.size(Group,0))
if num_groups != np.size(Group_max):
print('Error: Number of groups from Group matrix does not match dimensions of Group_max\n')
return ('Error: Number of groups from Group matrix does not match dimensions of Group_max',
np.nan, np.nan, np.nan, np.nan)
# normalising the max power of a group so it is in the range [0,1]
Group_norm = Group/np.sum(Group,axis=1).reshape((num_groups,1))
# create scalar optimisation variable p: the power of the n transmitters
p = cp.Variable(shape=n)
best = np.zeros(n)
# set upper and lower bounds for sub-level set
u = 1e4
l = 0
# alpha defines the sub-level sets of the generalised linear fractional problem
# in this case α is the reciprocal of the minimum SINR
alpha = cp.Parameter(shape=1)
# set up the constraints for the bisection feasibility test
constraints = [I*p + sigma <= alpha*S*p, p <= P_max, p >= 0, G*p <= P_received, Group_norm*p <= Group_max]
# define objective function, in our case it's constant as only want to test the solution's feasibility
obj = cp.Minimize(alpha)
# now check whether the solution lies between u and l
alpha.value = [u]
prob = cp.Problem(obj, constraints)
prob.solve()
if prob.status != 'optimal':
# in this case the level set u is below the solution
print('No optimal solution within bounds\n')
return 'Error: no optimal solution within bounds', np.nan, np.nan, np.nan
alpha.value = [l]
prob = cp.Problem(obj, constraints)
prob.solve()
if prob.status == 'optimal':
# in this case the level set l is below the solution
print('No optimal solution within bounds\n')
return 'Error: no optimal solution within bounds', np.nan, np.nan, np.nan
# Bisection algortithm starts
maxLoop = int(1e7)
for i in range(1,maxLoop):
# First check that u is in the feasible domain and l is not, loop finishes here if this is not the case
# set α as the midpoint of the interval
alpha.value = np.atleast_1d((u + l)/2.0)
# test the size of the interval against the specified tolerance
if u-l <= epsilon:
break
# form and solve problem
prob = cp.Problem(obj, constraints)
prob.solve()
# If the problem is feasible u -> α, if not l -> α, best takes the last feasible value as the optimal one as
# when the tolerance is reached the new α may be out of bounds
if prob.status == 'optimal':
u = alpha.value
best = p.value
else:
l = alpha.value
# final condition to check that the interval has converged to order ε, i.e. the range of the optimal sublevel set is <=ε
if u - l > epsilon and i == (maxLoop-1):
print("Solution not converged to order epsilon")
return l, u, float(alpha.value), best
###Output
_____no_output_____
###Markdown
ExampleAs a simple example, we will consider a case with $n=5$, where $G_{ij} = 0.6$ if $i=j$ and $0.1$ otherwise. $P_j^{\text{max}} = 1$ for all transmitters and the transmitters are split into two groups, each with $P_l^{\text{gp}} = 1.8$. The first group contains transmitters 1 & 2, while the second group contains 3,4 & 5.For all receivers $P_i^{\text{rc}} = 4$ and $\sigma_i = 0.1$.
###Code
np.set_printoptions(precision=3)
# in this case we will use a gain matrix with a signal weight of 0.6 and interference weight of 0.1
G = np.array([[0.6,0.1,0.1,0.1,0.1],
[0.1,0.6,0.1,0.1,0.1],
[0.1,0.1,0.6,0.1,0.1],
[0.1,0.1,0.1,0.6,0.1],
[0.1,0.1,0.1,0.1,0.6]])
# in this case m=n, but this generalises if we want n receivers and m transmitters
n, m = np.shape(G)
# set maximum power of each transmitter and receiver saturation level
P_max = np.array([1.]*n)
# normalised received power, total possible would be all power from all transmitters so 1/n
P_received = np.array([4.,4.,4.,4.,4.])/n
# set noise level
sigma = np.array([0.1,0.1,0.1,0.1,0.1])
# group matrix: number of groups by number of transmitters
Group = np.array([[1.,1.,0,0,0],[0,0,1.,1.,1.]])
# max normalised power for groups, number of groups by 1
Group_max = np.array([1.8,1.8])
# now run the optimisation problem
l, u, alpha, best = maxmin_sinr(G, P_max, P_received, sigma, Group, Group_max)
print('Minimum SINR={:.4g}'.format(1/alpha))
print('Power={}'.format(best))
###Output
Minimum SINR=1.148
Power=[0.8 0.8 0.8 0.8 0.8]
|
_notebooks/2020-10-11-ML_interpretability.ipynb | ###Markdown
"Interpretable ML"> "Methods for interpreting ML models"- author: Christopher Thiemann- toc: true- branch: master- badges: true- comments: true- categories: [statistics, ]- hide: false- search_exclude: true
###Code
#hide
import warnings
import numpy as np
import scipy as sp
import sklearn
import statsmodels.api as sm
from statsmodels.formula.api import ols
import matplotlib as mpl
import matplotlib.pyplot as plt
%matplotlib inline
%config InlineBackend.figure_format = 'retina'
import seaborn as sns
sns.set_context("poster")
sns.set(rc={'figure.figsize': (16, 9.)})
sns.set_style("whitegrid")
import pandas as pd
pd.set_option("display.max_rows", 120)
pd.set_option("display.max_columns", 120)
# suppress warnings related to R
from rpy2.rinterface import RRuntimeWarning
warnings.filterwarnings('ignore', category= FutureWarning)
warnings.filterwarnings('ignore', category= RRuntimeWarning)
#load the r interface
%load_ext rpy2.ipython
from rpy2.robjects import pandas2ri
pandas2ri.activate()
import rpy2.interactive as r
import rpy2.interactive.packages # this can take few seconds
rlib = r.packages.packages
r.packages.importr("utils")
rlib.utils.install_packages("tidyverse")
rlib.utils.install_packages("GGally")
#hide
# load r packages
%%R
library(tidyverse)
library(GGally)
###Output
_____no_output_____ |
t81_558_class_05_2_kfold.ipynb | ###Markdown
T81-558: Applications of Deep Neural Networks**Module 5: Regularization and Dropout*** Instructor: [Jeff Heaton](https://sites.wustl.edu/jeffheaton/), McKelvey School of Engineering, [Washington University in St. Louis](https://engineering.wustl.edu/Programs/Pages/default.aspx)* For more information visit the [class website](https://sites.wustl.edu/jeffheaton/t81-558/). Module 5 Material* Part 5.1: Part 5.1: Introduction to Regularization: Ridge and Lasso [[Video]](https://www.youtube.com/watch?v=jfgRtCYjoBs&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](https://github.com/jeffheaton/t81_558_deep_learning/blob/master/t81_558_class_05_1_reg_ridge_lasso.ipynb)* **Part 5.2: Using K-Fold Cross Validation with Keras** [[Video]](https://www.youtube.com/watch?v=maiQf8ray_s&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](https://github.com/jeffheaton/t81_558_deep_learning/blob/master/t81_558_class_05_2_kfold.ipynb)* Part 5.3: Using L1 and L2 Regularization with Keras to Decrease Overfitting [[Video]](https://www.youtube.com/watch?v=JEWzWv1fBFQ&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](https://github.com/jeffheaton/t81_558_deep_learning/blob/master/t81_558_class_05_3_keras_l1_l2.ipynb)* Part 5.4: Drop Out for Keras to Decrease Overfitting [[Video]](https://www.youtube.com/watch?v=bRyOi0L6Rs8&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](https://github.com/jeffheaton/t81_558_deep_learning/blob/master/t81_558_class_05_4_dropout.ipynb)* Part 5.5: Benchmarking Keras Deep Learning Regularization Techniques [[Video]](https://www.youtube.com/watch?v=1NLBwPumUAs&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](https://github.com/jeffheaton/t81_558_deep_learning/blob/master/t81_558_class_05_5_bootstrap.ipynb) Google CoLab InstructionsThe following code ensures that Google CoLab is running the correct version of TensorFlow.
###Code
import os
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'
try:
%tensorflow_version 2.x
COLAB = True
print("Note: using Google CoLab")
except:
print("Note: not using Google CoLab")
COLAB = False
###Output
Note: not using Google CoLab
###Markdown
Part 5.2: Using K-Fold Cross-validation with Keras Cross-validation can be used for a variety of purposes in predictive modeling. These include: * Generating out-of-sample predictions from a neural network * Estimating a good number of epochs to train a neural network for (early stopping) * Evaluating the effectiveness of certain hyperparameters, such as activation functions, neuron counts, and layer counts Cross-validation uses a number of folds, and multiple models, to provide each segment of data a chance to serve as both the validation and training set. Cross validation is shown in Figure 5.CROSS. **Figure 5.CROSS: K-Fold Crossvalidation** It is important to note that there will be one model (neural network) for each fold. To generate predictions for new data, which is data not present in the training set, predictions from the fold models can be handled in several ways: * Choose the model that had the highest validation score as the final model. * Present new data to the 5 models (one for each fold) and average the result (this is an [ensemble](https://en.wikipedia.org/wiki/Ensemble_learning)). * Retrain a new model (using the same settings as the cross-validation) on the entire dataset. Train for as many epochs, and with the same hidden layer structure. Generally, I prefer the last approach and will retrain a model on the entire data set once I have selected hyper-parameters. Of course, I will always set aside a final holdout set for model validation that I do not use in any aspect of the training process. Regression vs Classification K-Fold Cross-Validation Regression and classification are handled somewhat differently with regard to cross-validation. Regression is the simpler case, where you can simply break up the data set into K folds with little regard for where each item lands. For regression it is best that the data items fall into the folds as randomly as possible. It is also important to remember that not every fold will necessarily have exactly the same number of data items. It is not always possible for the data set to be evenly divided into K folds. For regression cross-validation we will use the Scikit-Learn class **KFold**. Cross validation for classification could also use the **KFold** object; however, this technique would not ensure that the class balance remains the same in each fold as it was in the original. It is very important that the balance of classes a model was trained on remains the same (or similar) as in the data the model will later be used on. A drift in this distribution is one of the most important things to monitor after a trained model has been placed into actual use. Because of this, we want to make sure that the cross-validation itself does not introduce an unintended shift. This is referred to as stratified sampling and is accomplished by using the Scikit-Learn object **StratifiedKFold** in place of **KFold** whenever you are using classification. In summary, the following two objects in Scikit-Learn should be used: * **KFold** when dealing with a regression problem. * **StratifiedKFold** when dealing with a classification problem. The following two sections demonstrate cross-validation with classification and regression. Out-of-Sample Regression Predictions with K-Fold Cross-Validation The following code trains the simple dataset using a 5-fold cross-validation. The expected performance of a neural network, of the type trained here, would be the score for the generated out-of-sample predictions. We begin by preparing a feature vector using the jh-simple-dataset to predict age.
This is a regression problem.
###Code
import pandas as pd
from scipy.stats import zscore
from sklearn.model_selection import train_test_split
# Read the data set
df = pd.read_csv(
"https://data.heatonresearch.com/data/t81-558/jh-simple-dataset.csv",
na_values=['NA','?'])
# Generate dummies for job
df = pd.concat([df,pd.get_dummies(df['job'],prefix="job")],axis=1)
df.drop('job', axis=1, inplace=True)
# Generate dummies for area
df = pd.concat([df,pd.get_dummies(df['area'],prefix="area")],axis=1)
df.drop('area', axis=1, inplace=True)
# Generate dummies for product
df = pd.concat([df,pd.get_dummies(df['product'],prefix="product")],axis=1)
df.drop('product', axis=1, inplace=True)
# Missing values for income
med = df['income'].median()
df['income'] = df['income'].fillna(med)
# Standardize ranges
df['income'] = zscore(df['income'])
df['aspect'] = zscore(df['aspect'])
df['save_rate'] = zscore(df['save_rate'])
df['subscriptions'] = zscore(df['subscriptions'])
# Convert to numpy - Classification
x_columns = df.columns.drop('age').drop('id')
x = df[x_columns].values
y = df['age'].values
###Output
_____no_output_____
###Markdown
Now that the feature vector is created, a 5-fold cross-validation can be performed to generate out-of-sample predictions. We will assume 500 epochs and not use early stopping. Later we will see how we can estimate a more optimal epoch count.
###Code
EPOCHS=500
import pandas as pd
import os
import numpy as np
from sklearn import metrics
from scipy.stats import zscore
from sklearn.model_selection import KFold
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Activation
# Cross-Validate
kf = KFold(5, shuffle=True, random_state=42) # KFold is used for regression; use StratifiedKFold for classification
oos_y = []
oos_pred = []
fold = 0
for train, test in kf.split(x):
fold+=1
print(f"Fold #{fold}")
x_train = x[train]
y_train = y[train]
x_test = x[test]
y_test = y[test]
model = Sequential()
model.add(Dense(20, input_dim=x.shape[1], activation='relu'))
model.add(Dense(10, activation='relu'))
model.add(Dense(1))
model.compile(loss='mean_squared_error', optimizer='adam')
model.fit(x_train,y_train,validation_data=(x_test,y_test),verbose=0,
epochs=EPOCHS)
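# (Sketch) To estimate a good epoch count instead of a fixed EPOCHS, an
# EarlyStopping callback could be used in each fold, for example:
#   from tensorflow.keras.callbacks import EarlyStopping
#   monitor = EarlyStopping(monitor='val_loss', patience=5, restore_best_weights=True)
#   model.fit(..., callbacks=[monitor], epochs=1000)
#   epochs_needed.append(monitor.stopped_epoch)
# Averaging epochs_needed across folds then gives an epoch budget for a final model.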
pred = model.predict(x_test)
oos_y.append(y_test)
oos_pred.append(pred)
# Measure this fold's RMSE
score = np.sqrt(metrics.mean_squared_error(pred,y_test))
print(f"Fold score (RMSE): {score}")
# Build the oos prediction list and calculate the error.
oos_y = np.concatenate(oos_y)
oos_pred = np.concatenate(oos_pred)
score = np.sqrt(metrics.mean_squared_error(oos_pred,oos_y))
print(f"Final, out of sample score (RMSE): {score}")
# Write the cross-validated prediction
oos_y = pd.DataFrame(oos_y)
oos_pred = pd.DataFrame(oos_pred)
oosDF = pd.concat( [df, oos_y, oos_pred],axis=1 )
#oosDF.to_csv(filename_write,index=False)
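# (Sketch) If each fold's model were kept (e.g. appended to a fold_models list
# inside the loop above), new data could be scored as an ensemble by averaging:
#   ensemble_pred = np.mean([m.predict(x_new) for m in fold_models], axis=0)
# Alternatively, retrain a single model on all of x/y using the same settings.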
###Output
Fold #1
Fold score (RMSE): 0.6814299426511208
Fold #2
Fold score (RMSE): 0.45486513719487165
Fold #3
Fold score (RMSE): 0.571615041876392
Fold #4
Fold score (RMSE): 0.46416356081116916
Fold #5
Fold score (RMSE): 1.0426518491685475
Final, out of sample score (RMSE): 0.678316077597408
###Markdown
If early stopping is used inside each fold, the code can also report the average number of epochs needed; a common technique is to then train on the entire dataset for that average number of epochs. Classification with Stratified K-Fold Cross-Validation The following code trains and fits the jh-simple-dataset dataset with cross-validation to generate out-of-sample predictions. It also writes out the out-of-sample (test set) predictions. It is good to perform a stratified k-fold cross-validation with classification data. This ensures that the percentage of each class remains the same across all folds. To do this, make use of the **StratifiedKFold** object, instead of the **KFold** object used in regression.
###Code
import pandas as pd
from scipy.stats import zscore
# Read the data set
df = pd.read_csv(
"https://data.heatonresearch.com/data/t81-558/jh-simple-dataset.csv",
na_values=['NA','?'])
# Generate dummies for job
df = pd.concat([df,pd.get_dummies(df['job'],prefix="job")],axis=1)
df.drop('job', axis=1, inplace=True)
# Generate dummies for area
df = pd.concat([df,pd.get_dummies(df['area'],prefix="area")],axis=1)
df.drop('area', axis=1, inplace=True)
# Missing values for income
med = df['income'].median()
df['income'] = df['income'].fillna(med)
# Standardize ranges
df['income'] = zscore(df['income'])
df['aspect'] = zscore(df['aspect'])
df['save_rate'] = zscore(df['save_rate'])
df['age'] = zscore(df['age'])
df['subscriptions'] = zscore(df['subscriptions'])
# Convert to numpy - Classification
x_columns = df.columns.drop('product').drop('id')
x = df[x_columns].values
dummies = pd.get_dummies(df['product']) # Classification
products = dummies.columns
y = dummies.values
###Output
_____no_output_____
###Markdown
We will assume 500 epochs, and not use early stopping. Later we will see how we can estimate a more optimal epoch count.
###Code
import pandas as pd
import os
import numpy as np
from sklearn import metrics
from sklearn.model_selection import StratifiedKFold
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Activation
# np.argmax(pred,axis=1)
# Cross-validate
# Use for StratifiedKFold classification
kf = StratifiedKFold(5, shuffle=True, random_state=42)
oos_y = []
oos_pred = []
fold = 0
# Must specify y for StratifiedKFold
for train, test in kf.split(x,df['product']):
fold+=1
print(f"Fold #{fold}")
x_train = x[train]
y_train = y[train]
x_test = x[test]
y_test = y[test]
model = Sequential()
# Hidden 1
model.add(Dense(50, input_dim=x.shape[1], activation='relu'))
model.add(Dense(25, activation='relu')) # Hidden 2
model.add(Dense(y.shape[1],activation='softmax')) # Output
model.compile(loss='categorical_crossentropy', optimizer='adam')
model.fit(x_train,y_train,validation_data=(x_test,y_test),
verbose=0, epochs=EPOCHS)
pred = model.predict(x_test)
oos_y.append(y_test)
# raw probabilities to chosen class (highest probability)
pred = np.argmax(pred,axis=1)
oos_pred.append(pred)
# Measure this fold's accuracy
y_compare = np.argmax(y_test,axis=1) # For accuracy calculation
score = metrics.accuracy_score(y_compare, pred)
print(f"Fold score (accuracy): {score}")
# Build the oos prediction list and calculate the error.
oos_y = np.concatenate(oos_y)
oos_pred = np.concatenate(oos_pred)
oos_y_compare = np.argmax(oos_y,axis=1) # For accuracy calculation
score = metrics.accuracy_score(oos_y_compare, oos_pred)
print(f"Final score (accuracy): {score}")
# Write the cross-validated prediction
oos_y = pd.DataFrame(oos_y)
oos_pred = pd.DataFrame(oos_pred)
oosDF = pd.concat( [df, oos_y, oos_pred],axis=1 )
#oosDF.to_csv(filename_write,index=False)
###Output
Fold #1
Fold score (accuracy): 0.6325
Fold #2
Fold score (accuracy): 0.6725
Fold #3
Fold score (accuracy): 0.6975
Fold #4
Fold score (accuracy): 0.6575
Fold #5
Fold score (accuracy): 0.675
Final score (accuracy): 0.667
###Markdown
Training with both a Cross-Validation and a Holdout SetIf you have a considerable amount of data, it is always valuable to set aside a holdout set before you cross-validate. This hold out set will be the final evaluation before you make use of your model for its real-world use. Figure 5.HOLDOUT shows this division.**Figure 5.HOLDOUT: Cross Validation and a Holdout Set**The following program makes use of a holdout set, and then still cross-validates.
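Note that in the code below the holdout score is computed with the model from the final fold only. A tidier finish (sketched here as an added illustration; it simply reuses the layer sizes from the loop) is to retrain one network on all of `x_main`/`y_main` and score that single model on the holdout.
###Code
import numpy as np
from sklearn import metrics
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense

def holdout_score(x_main, y_main, x_holdout, y_holdout, epochs=500):
    """Retrain one network on all non-holdout data and return holdout RMSE (sketch)."""
    model = Sequential()
    model.add(Dense(20, input_dim=x_main.shape[1], activation='relu'))
    model.add(Dense(5, activation='relu'))
    model.add(Dense(1))
    model.compile(loss='mean_squared_error', optimizer='adam')
    model.fit(x_main, y_main, verbose=0, epochs=epochs)
    pred = model.predict(x_holdout)
    return np.sqrt(metrics.mean_squared_error(pred, y_holdout))

# Usage, once the holdout split below has been made:
# print(f"Holdout RMSE (retrained model): {holdout_score(x_main, y_main, x_holdout, y_holdout)}")
###Output
_____no_output_____
###Markdown
First, the data set is read and preprocessed again.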
###Code
import pandas as pd
from scipy.stats import zscore
from sklearn.model_selection import train_test_split
# Read the data set
df = pd.read_csv(
"https://data.heatonresearch.com/data/t81-558/jh-simple-dataset.csv",
na_values=['NA','?'])
# Generate dummies for job
df = pd.concat([df,pd.get_dummies(df['job'],prefix="job")],axis=1)
df.drop('job', axis=1, inplace=True)
# Generate dummies for area
df = pd.concat([df,pd.get_dummies(df['area'],prefix="area")],axis=1)
df.drop('area', axis=1, inplace=True)
# Generate dummies for product
df = pd.concat([df,pd.get_dummies(df['product'],prefix="product")],axis=1)
df.drop('product', axis=1, inplace=True)
# Missing values for income
med = df['income'].median()
df['income'] = df['income'].fillna(med)
# Standardize ranges
df['income'] = zscore(df['income'])
df['aspect'] = zscore(df['aspect'])
df['save_rate'] = zscore(df['save_rate'])
df['subscriptions'] = zscore(df['subscriptions'])
# Convert to numpy - Regression (the target is age)
x_columns = df.columns.drop('age').drop('id')
x = df[x_columns].values
y = df['age'].values
###Output
_____no_output_____
###Markdown
Now that the data has been preprocessed, we are ready to build the neural network.
###Code
from sklearn.model_selection import train_test_split
import pandas as pd
import os
import numpy as np
from sklearn import metrics
from scipy.stats import zscore
from sklearn.model_selection import KFold
# Keras imports (needed if this cell is run on its own)
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
# Keep a 10% holdout
x_main, x_holdout, y_main, y_holdout = train_test_split(
x, y, test_size=0.10)
# Cross-validate
kf = KFold(5)
oos_y = []
oos_pred = []
fold = 0
for train, test in kf.split(x_main):
fold+=1
print(f"Fold #{fold}")
x_train = x_main[train]
y_train = y_main[train]
x_test = x_main[test]
y_test = y_main[test]
model = Sequential()
model.add(Dense(20, input_dim=x.shape[1], activation='relu'))
model.add(Dense(5, activation='relu'))
model.add(Dense(1))
model.compile(loss='mean_squared_error', optimizer='adam')
model.fit(x_train,y_train,validation_data=(x_test,y_test),
verbose=0,epochs=EPOCHS)
pred = model.predict(x_test)
oos_y.append(y_test)
oos_pred.append(pred)
# Measure this fold's RMSE
score = np.sqrt(metrics.mean_squared_error(pred,y_test))
print(f"Fold score (RMSE): {score}")
# Build the oos prediction list and calculate the error.
oos_y = np.concatenate(oos_y)
oos_pred = np.concatenate(oos_pred)
score = np.sqrt(metrics.mean_squared_error(oos_pred,oos_y))
print()
print(f"Cross-validated score (RMSE): {score}")
# Write the cross-validated prediction (from the last neural network)
holdout_pred = model.predict(x_holdout)
score = np.sqrt(metrics.mean_squared_error(holdout_pred,y_holdout))
print(f"Holdout score (RMSE): {score}")
###Output
Fold #1
Fold score (RMSE): 0.544195299216696
Fold #2
Fold score (RMSE): 0.48070599342910353
Fold #3
Fold score (RMSE): 0.7034584765928998
Fold #4
Fold score (RMSE): 0.5397141785190473
Fold #5
Fold score (RMSE): 24.126205213080077
Cross-validated score (RMSE): 10.801732731207947
Holdout score (RMSE): 24.097657947297677
###Markdown
T81-558: Applications of Deep Neural Networks**Module 5: Regularization and Dropout*** Instructor: [Jeff Heaton](https://sites.wustl.edu/jeffheaton/), McKelvey School of Engineering, [Washington University in St. Louis](https://engineering.wustl.edu/Programs/Pages/default.aspx)* For more information visit the [class website](https://sites.wustl.edu/jeffheaton/t81-558/). Module 5 Material* Part 5.1: Part 5.1: Introduction to Regularization: Ridge and Lasso [[Video]](https://www.youtube.com/watch?v=jfgRtCYjoBs&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](https://github.com/jeffheaton/t81_558_deep_learning/blob/master/t81_558_class_05_1_reg_ridge_lasso.ipynb)* **Part 5.2: Using K-Fold Cross Validation with Keras** [[Video]](https://www.youtube.com/watch?v=maiQf8ray_s&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](https://github.com/jeffheaton/t81_558_deep_learning/blob/master/t81_558_class_05_2_kfold.ipynb)* Part 5.3: Using L1 and L2 Regularization with Keras to Decrease Overfitting [[Video]](https://www.youtube.com/watch?v=JEWzWv1fBFQ&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](https://github.com/jeffheaton/t81_558_deep_learning/blob/master/t81_558_class_05_3_keras_l1_l2.ipynb)* Part 5.4: Drop Out for Keras to Decrease Overfitting [[Video]](https://www.youtube.com/watch?v=bRyOi0L6Rs8&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](https://github.com/jeffheaton/t81_558_deep_learning/blob/master/t81_558_class_05_4_dropout.ipynb)* Part 5.5: Benchmarking Keras Deep Learning Regularization Techniques [[Video]](https://www.youtube.com/watch?v=1NLBwPumUAs&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](https://github.com/jeffheaton/t81_558_deep_learning/blob/master/t81_558_class_05_5_bootstrap.ipynb) Google CoLab InstructionsThe following code ensures that Google CoLab is running the correct version of TensorFlow.
###Code
import os
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'
try:
%tensorflow_version 2.x
COLAB = True
print("Note: using Google CoLab")
except:
print("Note: not using Google CoLab")
COLAB = False
###Output
Note: not using Google CoLab
###Markdown
Part 5.2: Using K-Fold Cross-validation with KerasYou can use cross-validation for a variety of purposes in predictive modeling:* Generating out-of-sample predictions from a neural network* Estimate a good number of epochs to train a neural network for (early stopping)* Evaluate the effectiveness of certain hyperparameters, such as activation functions, neuron counts, and layer countsCross-validation uses several folds and multiple models to provide each data segment a chance to serve as both the validation and training set. Figure 5.CROSS shows cross-validation.**Figure 5.CROSS: K-Fold Crossvalidation**It is important to note that each fold will have one model (neural network). To generate predictions for new data (not present in the training set), predictions from the fold models can be handled in several ways:* Choose the model with the highest validation score as the final model.* Preset new data to the five models (one for each fold) and average the result (this is an [ensemble](https://en.wikipedia.org/wiki/Ensemble_learning)).* Retrain a new model (using the same settings as the cross-validation) on the entire dataset. Train for as many epochs and with the same hidden layer structure.Generally, I prefer the last approach and will retrain a model on the entire data set once I have selected hyper-parameters. Of course, I will always set aside a final holdout set for model validation that I do not use in any aspect of the training process. Regression vs Classification K-Fold Cross-ValidationRegression and classification are handled somewhat differently concerning cross-validation. Regression is the simpler case where you can break up the data set into K folds with little regard for where each item lands. For regression, the data items should fall into the folds as randomly as possible. It is also important to remember that not every fold will necessarily have the same number of data items. It is not always possible for the data set to be evenly divided into K folds. For regression cross-validation, we will use the Scikit-Learn class **KFold**.Cross-validation for classification could also use the **KFold** object; however, this technique would not ensure that the class balance remains the same in each fold as in the original. The balance of classes that a model was trained on must remain the same (or similar) to the training set. Drift in this distribution is one of the most important things to monitor after a trained model has been placed into actual use. Because of this, we want to make sure that the cross-validation itself does not introduce an unintended shift. This technique is called stratified sampling and is accomplished by using the Scikit-Learn object **StratifiedKFold** in place of **KFold** whenever you use classification. In summary, you should use the following two objects in Scikit-Learn:* **KFold** When dealing with a regression problem.* **StratifiedKFold** When dealing with a classification problem.The following two sections demonstrate cross-validation with classification and regression. Out-of-Sample Regression Predictions with K-Fold Cross-ValidationThe following code trains the simple dataset using a 5-fold cross-validation. The expected performance of a neural network of the type trained here would be the score for the generated out-of-sample predictions. We begin by preparing a feature vector using the **jh-simple-dataset** to predict age. This model is set up as a regression problem.
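Before preparing the data, here is a quick aside on the second option listed above (averaging the fold models). The sketch below is an added illustration; `fold_models` and `x_new` are hypothetical names, assuming each fold's trained Keras model was kept in a list.
###Code
import numpy as np

def ensemble_predict(fold_models, x_new):
    """Average the predictions of the per-fold models (simple ensemble).

    fold_models : list of trained Keras models, one per fold (hypothetical name)
    x_new       : feature matrix for data that was not used in training
    """
    preds = [model.predict(x_new) for model in fold_models]
    return np.mean(preds, axis=0)

# Usage (assuming each fold's model was appended to fold_models inside the loop):
# y_new = ensemble_predict(fold_models, x_new)
###Output
_____no_output_____
###Markdown
For regression the averaged output can be used directly; for classification you would average the softmax probabilities before taking `np.argmax`. With that aside done, the feature vector for the regression experiment is prepared below.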
###Code
import pandas as pd
from scipy.stats import zscore
from sklearn.model_selection import train_test_split
# Read the data set
df = pd.read_csv(
"https://data.heatonresearch.com/data/t81-558/jh-simple-dataset.csv",
na_values=['NA','?'])
# Generate dummies for job
df = pd.concat([df,pd.get_dummies(df['job'],prefix="job")],axis=1)
df.drop('job', axis=1, inplace=True)
# Generate dummies for area
df = pd.concat([df,pd.get_dummies(df['area'],prefix="area")],axis=1)
df.drop('area', axis=1, inplace=True)
# Generate dummies for product
df = pd.concat([df,pd.get_dummies(df['product'],prefix="product")],axis=1)
df.drop('product', axis=1, inplace=True)
# Missing values for income
med = df['income'].median()
df['income'] = df['income'].fillna(med)
# Standardize ranges
df['income'] = zscore(df['income'])
df['aspect'] = zscore(df['aspect'])
df['save_rate'] = zscore(df['save_rate'])
df['subscriptions'] = zscore(df['subscriptions'])
# Convert to numpy - Regression (the target is age)
x_columns = df.columns.drop('age').drop('id')
x = df[x_columns].values
y = df['age'].values
###Output
_____no_output_____
###Markdown
Now that the feature vector is created, a 5-fold cross-validation can be performed to generate out-of-sample predictions. We will assume 500 epochs and not use early stopping. Later we will see how we can estimate a more optimal epoch count.
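As a preview of that epoch estimation, the sketch below (an added helper, not part of the original notebook; the name `fit_with_early_stopping` and the stopping parameters are my own choices) wraps one fold's training in **EarlyStopping** and reads back how many epochs were actually run, so those counts can later be averaged across folds.
###Code
from tensorflow.keras.callbacks import EarlyStopping

def fit_with_early_stopping(model, x_train, y_train, x_val, y_val, max_epochs=1000):
    """Fit one fold's model with early stopping and return the epoch count used.

    The argument names mirror the variables used in the cross-validation loops
    in this notebook; this helper itself is only an added sketch.
    """
    monitor = EarlyStopping(monitor='val_loss', min_delta=1e-3, patience=5,
                            restore_best_weights=True)
    history = model.fit(x_train, y_train, validation_data=(x_val, y_val),
                        callbacks=[monitor], verbose=0, epochs=max_epochs)
    return len(history.history['val_loss'])

# Collect the returned count for each fold; their average is a reasonable epoch
# budget when retraining a final model on the entire dataset.
###Output
_____no_output_____
###Markdown
For now, the cross-validation below sticks to a fixed 500 epochs.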
###Code
EPOCHS=500
import pandas as pd
import os
import numpy as np
from sklearn import metrics
from scipy.stats import zscore
from sklearn.model_selection import KFold
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Activation
# Cross-Validate
kf = KFold(5, shuffle=True, random_state=42) # KFold is used for this regression problem
oos_y = []
oos_pred = []
fold = 0
for train, test in kf.split(x):
fold+=1
print(f"Fold #{fold}")
x_train = x[train]
y_train = y[train]
x_test = x[test]
y_test = y[test]
model = Sequential()
model.add(Dense(20, input_dim=x.shape[1], activation='relu'))
model.add(Dense(10, activation='relu'))
model.add(Dense(1))
model.compile(loss='mean_squared_error', optimizer='adam')
model.fit(x_train,y_train,validation_data=(x_test,y_test),verbose=0,
epochs=EPOCHS)
pred = model.predict(x_test)
oos_y.append(y_test)
oos_pred.append(pred)
# Measure this fold's RMSE
score = np.sqrt(metrics.mean_squared_error(pred,y_test))
print(f"Fold score (RMSE): {score}")
# Build the oos prediction list and calculate the error.
oos_y = np.concatenate(oos_y)
oos_pred = np.concatenate(oos_pred)
score = np.sqrt(metrics.mean_squared_error(oos_pred,oos_y))
print(f"Final, out of sample score (RMSE): {score}")
# Write the cross-validated prediction
oos_y = pd.DataFrame(oos_y)
oos_pred = pd.DataFrame(oos_pred)
oosDF = pd.concat( [df, oos_y, oos_pred],axis=1 )
#oosDF.to_csv(filename_write,index=False)
###Output
Fold #1
Fold score (RMSE): 0.6814299426511208
Fold #2
Fold score (RMSE): 0.45486513719487165
Fold #3
Fold score (RMSE): 0.571615041876392
Fold #4
Fold score (RMSE): 0.46416356081116916
Fold #5
Fold score (RMSE): 1.0426518491685475
Final, out of sample score (RMSE): 0.678316077597408
###Markdown
The code above trains each fold for a fixed number of epochs rather than stopping early, so it does not report an epoch count itself; when early stopping is used, each fold reports the number of epochs it actually needed, and a common technique is to then train on the entire dataset for the average of those counts. Classification with Stratified K-Fold Cross-Validation The following code trains and fits the jh-simple-dataset dataset with cross-validation to generate out-of-sample predictions. It also writes the out-of-sample (predictions on the test set) results. It is good to perform stratified k-fold cross-validation with classification data. This technique ensures that the percentages of each class remain the same across all folds. Use the **StratifiedKFold** object instead of the **KFold** object used in regression.
###Code
import pandas as pd
from scipy.stats import zscore
# Read the data set
df = pd.read_csv(
"https://data.heatonresearch.com/data/t81-558/jh-simple-dataset.csv",
na_values=['NA','?'])
# Generate dummies for job
df = pd.concat([df,pd.get_dummies(df['job'],prefix="job")],axis=1)
df.drop('job', axis=1, inplace=True)
# Generate dummies for area
df = pd.concat([df,pd.get_dummies(df['area'],prefix="area")],axis=1)
df.drop('area', axis=1, inplace=True)
# Missing values for income
med = df['income'].median()
df['income'] = df['income'].fillna(med)
# Standardize ranges
df['income'] = zscore(df['income'])
df['aspect'] = zscore(df['aspect'])
df['save_rate'] = zscore(df['save_rate'])
df['age'] = zscore(df['age'])
df['subscriptions'] = zscore(df['subscriptions'])
# Convert to numpy - Classification
x_columns = df.columns.drop('product').drop('id')
x = df[x_columns].values
dummies = pd.get_dummies(df['product']) # Classification
products = dummies.columns
y = dummies.values
###Output
_____no_output_____
###Markdown
We will assume 500 epochs and not use early stopping. Later we will see how we can estimate a more optimal epoch count.
###Code
import pandas as pd
import os
import numpy as np
from sklearn import metrics
from sklearn.model_selection import StratifiedKFold
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Activation
# np.argmax(pred,axis=1)
# Cross-validate
# Use for StratifiedKFold classification
kf = StratifiedKFold(5, shuffle=True, random_state=42)
oos_y = []
oos_pred = []
fold = 0
# Must pass y so StratifiedKFold can stratify on it
for train, test in kf.split(x,df['product']):
fold+=1
print(f"Fold #{fold}")
x_train = x[train]
y_train = y[train]
x_test = x[test]
y_test = y[test]
model = Sequential()
# Hidden 1
model.add(Dense(50, input_dim=x.shape[1], activation='relu'))
model.add(Dense(25, activation='relu')) # Hidden 2
model.add(Dense(y.shape[1],activation='softmax')) # Output
model.compile(loss='categorical_crossentropy', optimizer='adam')
model.fit(x_train,y_train,validation_data=(x_test,y_test),
verbose=0, epochs=EPOCHS)
pred = model.predict(x_test)
oos_y.append(y_test)
# raw probabilities to chosen class (highest probability)
pred = np.argmax(pred,axis=1)
oos_pred.append(pred)
# Measure this fold's accuracy
y_compare = np.argmax(y_test,axis=1) # For accuracy calculation
score = metrics.accuracy_score(y_compare, pred)
print(f"Fold score (accuracy): {score}")
# Build the oos prediction list and calculate the error.
oos_y = np.concatenate(oos_y)
oos_pred = np.concatenate(oos_pred)
oos_y_compare = np.argmax(oos_y,axis=1) # For accuracy calculation
score = metrics.accuracy_score(oos_y_compare, oos_pred)
print(f"Final score (accuracy): {score}")
# Write the cross-validated prediction
oos_y = pd.DataFrame(oos_y)
oos_pred = pd.DataFrame(oos_pred)
oosDF = pd.concat( [df, oos_y, oos_pred],axis=1 )
#oosDF.to_csv(filename_write,index=False)
###Output
Fold #1
Fold score (accuracy): 0.6325
Fold #2
Fold score (accuracy): 0.6725
Fold #3
Fold score (accuracy): 0.6975
Fold #4
Fold score (accuracy): 0.6575
Fold #5
Fold score (accuracy): 0.675
Final score (accuracy): 0.667
###Markdown
Training with both a Cross-Validation and a Holdout Set If you have a considerable amount of data, it is always valuable to set aside a holdout set before you cross-validate. This holdout set will be the final evaluation before using your model for its real-world use. Figure 5.HOLDOUT shows this division.**Figure 5.HOLDOUT: Cross-Validation and a Holdout Set** The following program uses a holdout set and then still cross-validates.
###Code
import pandas as pd
from scipy.stats import zscore
from sklearn.model_selection import train_test_split
# Read the data set
df = pd.read_csv(
"https://data.heatonresearch.com/data/t81-558/jh-simple-dataset.csv",
na_values=['NA','?'])
# Generate dummies for job
df = pd.concat([df,pd.get_dummies(df['job'],prefix="job")],axis=1)
df.drop('job', axis=1, inplace=True)
# Generate dummies for area
df = pd.concat([df,pd.get_dummies(df['area'],prefix="area")],axis=1)
df.drop('area', axis=1, inplace=True)
# Generate dummies for product
df = pd.concat([df,pd.get_dummies(df['product'],prefix="product")],axis=1)
df.drop('product', axis=1, inplace=True)
# Missing values for income
med = df['income'].median()
df['income'] = df['income'].fillna(med)
# Standardize ranges
df['income'] = zscore(df['income'])
df['aspect'] = zscore(df['aspect'])
df['save_rate'] = zscore(df['save_rate'])
df['subscriptions'] = zscore(df['subscriptions'])
# Convert to numpy - Regression (the target is age)
x_columns = df.columns.drop('age').drop('id')
x = df[x_columns].values
y = df['age'].values
###Output
_____no_output_____
###Markdown
Now that the data has been preprocessed, we are ready to build the neural network.
###Code
from sklearn.model_selection import train_test_split
import pandas as pd
import os
import numpy as np
from sklearn import metrics
from scipy.stats import zscore
from sklearn.model_selection import KFold
# Keras imports (needed if this cell is run on its own)
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
# Keep a 10% holdout
x_main, x_holdout, y_main, y_holdout = train_test_split(
x, y, test_size=0.10)
# Cross-validate
kf = KFold(5)
oos_y = []
oos_pred = []
fold = 0
for train, test in kf.split(x_main):
fold+=1
print(f"Fold #{fold}")
x_train = x_main[train]
y_train = y_main[train]
x_test = x_main[test]
y_test = y_main[test]
model = Sequential()
model.add(Dense(20, input_dim=x.shape[1], activation='relu'))
model.add(Dense(5, activation='relu'))
model.add(Dense(1))
model.compile(loss='mean_squared_error', optimizer='adam')
model.fit(x_train,y_train,validation_data=(x_test,y_test),
verbose=0,epochs=EPOCHS)
pred = model.predict(x_test)
oos_y.append(y_test)
oos_pred.append(pred)
# Measure this fold's RMSE
score = np.sqrt(metrics.mean_squared_error(pred,y_test))
print(f"Fold score (RMSE): {score}")
# Build the oos prediction list and calculate the error.
oos_y = np.concatenate(oos_y)
oos_pred = np.concatenate(oos_pred)
score = np.sqrt(metrics.mean_squared_error(oos_pred,oos_y))
print()
print(f"Cross-validated score (RMSE): {score}")
# Write the cross-validated prediction (from the last neural network)
holdout_pred = model.predict(x_holdout)
score = np.sqrt(metrics.mean_squared_error(holdout_pred,y_holdout))
print(f"Holdout score (RMSE): {score}")
###Output
Fold #1
Fold score (RMSE): 0.544195299216696
Fold #2
Fold score (RMSE): 0.48070599342910353
Fold #3
Fold score (RMSE): 0.7034584765928998
Fold #4
Fold score (RMSE): 0.5397141785190473
Fold #5
Fold score (RMSE): 24.126205213080077
Cross-validated score (RMSE): 10.801732731207947
Holdout score (RMSE): 24.097657947297677
###Markdown
T81-558: Applications of Deep Neural Networks**Module 5: Regularization and Dropout*** Instructor: [Jeff Heaton](https://sites.wustl.edu/jeffheaton/), McKelvey School of Engineering, [Washington University in St. Louis](https://engineering.wustl.edu/Programs/Pages/default.aspx)* For more information visit the [class website](https://sites.wustl.edu/jeffheaton/t81-558/). Module 5 Material* Part 5.1: Part 5.1: Introduction to Regularization: Ridge and Lasso [[Video]](https://www.youtube.com/watch?v=jfgRtCYjoBs&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_05_1_reg_ridge_lasso.ipynb)* **Part 5.2: Using K-Fold Cross Validation with Keras** [[Video]](https://www.youtube.com/watch?v=maiQf8ray_s&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_05_2_kfold.ipynb)* Part 5.3: Using L1 and L2 Regularization with Keras to Decrease Overfitting [[Video]](https://www.youtube.com/watch?v=JEWzWv1fBFQ&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_05_3_keras_l1_l2.ipynb)* Part 5.4: Drop Out for Keras to Decrease Overfitting [[Video]](https://www.youtube.com/watch?v=bRyOi0L6Rs8&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_05_4_dropout.ipynb)* Part 5.5: Benchmarking Keras Deep Learning Regularization Techniques [[Video]](https://www.youtube.com/watch?v=1NLBwPumUAs&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_05_5_bootstrap.ipynb) Google CoLab InstructionsThe following code ensures that Google CoLab is running the correct version of TensorFlow.
###Code
import os
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'
try:
%tensorflow_version 2.x
COLAB = True
print("Note: using Google CoLab")
except:
print("Note: not using Google CoLab")
COLAB = False
###Output
Note: not using Google CoLab
###Markdown
Part 5.2: Using K-Fold Cross-validation with KerasCross-validation can be used for a variety of purposes in predictive modeling. These include:* Generating out-of-sample predictions from a neural network* Estimate a good number of epochs to train a neural network for (early stopping)* Evaluate the effectiveness of certain hyperparameters, such as activation functions, neuron counts, and layer countsCross-validation uses a number of folds, and multiple models, to provide each segment of data a chance to serve as both the validation and training set. Cross validation is shown in Figure 5.CROSS.**Figure 5.CROSS: K-Fold Crossvalidation**It is important to note that there will be one model (neural network) for each fold. To generate predictions for new data, which is data not present in the training set, predictions from the fold models can be handled in several ways:* Choose the model that had the highest validation score as the final model.* Preset new data to the 5 models (one for each fold) and average the result (this is an [ensemble](https://en.wikipedia.org/wiki/Ensemble_learning)).* Retrain a new model (using the same settings as the cross-validation) on the entire dataset. Train for as many epochs, and with the same hidden layer structure.Generally, I prefer the last approach and will retrain a model on the entire data set once I have selected hyper-parameters. Of course, I will always set aside a final holdout set for model validation that I do not use in any aspect of the training process. Regression vs Classification K-Fold Cross-ValidationRegression and classification are handled somewhat differently with regards to cross-validation. Regression is the simpler case where you can simply break up the data set into K folds with little regard for where each item lands. For regression it is best that the data items fall into the folds as randomly as possible. It is also important to remember that not every fold will necessarily have exactly the same number of data items. It is not always possible for the data set to be evenly divided into K folds. For regression cross-validation we will use the Scikit-Learn class **KFold**.Cross validation for classification could also use the **KFold** object; however, this technique would not ensure that the class balance remains the same in each fold as it was in the original. It is very important that the balance of classes that a model was trained on remains the same (or similar) to the training set. A drift in this distribution is one of the most important things to monitor after a trained model has been placed into actual use. Because of this, we want to make sure that the cross-validation itself does not introduce an unintended shift. This is referred to as stratified sampling and is accomplished by using the Scikit-Learn object **StratifiedKFold** in place of **KFold** whenever you are using classification. In summary, the following two objects in Scikit-Learn should be used:* **KFold** When dealing with a regression problem.* **StratifiedKFold** When dealing with a classification problem.The following two sections demonstrate cross-validation with classification and regression. Out-of-Sample Regression Predictions with K-Fold Cross-ValidationThe following code trains the simple dataset using a 5-fold cross-validation. The expected performance of a neural network, of the type trained here, would be the score for the generated out-of-sample predictions. We begin by preparing a feature vector using the jh-simple-dataset to predict age. 
This is a regression problem.
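As an aside before the data preparation, the first option listed above (keep the fold model with the best validation score) can be sketched as follows. This is an added illustration; `fold_models` and `fold_scores` are hypothetical lists filled inside the cross-validation loop.
###Code
import numpy as np

def pick_best_fold(fold_models, fold_scores, lower_is_better=True):
    """Return the fold model with the best validation score (sketch).

    fold_models : list of trained models, one per fold (hypothetical name)
    fold_scores : per-fold validation scores, e.g. RMSE, in the same order
    """
    idx = int(np.argmin(fold_scores)) if lower_is_better else int(np.argmax(fold_scores))
    return fold_models[idx], fold_scores[idx]

# Usage (hypothetical): append each fold's model and RMSE inside the loop, then
# best_model, best_score = pick_best_fold(fold_models, fold_scores)
###Output
_____no_output_____
###Markdown
The feature vector for the regression experiment is prepared below, as before.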
###Code
import pandas as pd
from scipy.stats import zscore
from sklearn.model_selection import train_test_split
# Read the data set
df = pd.read_csv(
"https://data.heatonresearch.com/data/t81-558/jh-simple-dataset.csv",
na_values=['NA','?'])
# Generate dummies for job
df = pd.concat([df,pd.get_dummies(df['job'],prefix="job")],axis=1)
df.drop('job', axis=1, inplace=True)
# Generate dummies for area
df = pd.concat([df,pd.get_dummies(df['area'],prefix="area")],axis=1)
df.drop('area', axis=1, inplace=True)
# Generate dummies for product
df = pd.concat([df,pd.get_dummies(df['product'],prefix="product")],axis=1)
df.drop('product', axis=1, inplace=True)
# Missing values for income
med = df['income'].median()
df['income'] = df['income'].fillna(med)
# Standardize ranges
df['income'] = zscore(df['income'])
df['aspect'] = zscore(df['aspect'])
df['save_rate'] = zscore(df['save_rate'])
df['subscriptions'] = zscore(df['subscriptions'])
# Convert to numpy - Regression (the target is age)
x_columns = df.columns.drop('age').drop('id')
x = df[x_columns].values
y = df['age'].values
###Output
_____no_output_____
###Markdown
Now that the feature vector is created, a 5-fold cross-validation can be performed to generate out-of-sample predictions. We will assume 500 epochs, and not use early stopping. Later we will see how we can estimate a more optimal epoch count.
###Code
EPOCHS=500
import pandas as pd
import os
import numpy as np
from sklearn import metrics
from scipy.stats import zscore
from sklearn.model_selection import KFold
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Activation
# Cross-Validate
kf = KFold(5, shuffle=True, random_state=42) # KFold is used for this regression problem
oos_y = []
oos_pred = []
fold = 0
for train, test in kf.split(x):
fold+=1
print(f"Fold #{fold}")
x_train = x[train]
y_train = y[train]
x_test = x[test]
y_test = y[test]
model = Sequential()
model.add(Dense(20, input_dim=x.shape[1], activation='relu'))
model.add(Dense(10, activation='relu'))
model.add(Dense(1))
model.compile(loss='mean_squared_error', optimizer='adam')
model.fit(x_train,y_train,validation_data=(x_test,y_test),verbose=0,
epochs=EPOCHS)
pred = model.predict(x_test)
oos_y.append(y_test)
oos_pred.append(pred)
# Measure this fold's RMSE
score = np.sqrt(metrics.mean_squared_error(pred,y_test))
print(f"Fold score (RMSE): {score}")
# Build the oos prediction list and calculate the error.
oos_y = np.concatenate(oos_y)
oos_pred = np.concatenate(oos_pred)
score = np.sqrt(metrics.mean_squared_error(oos_pred,oos_y))
print(f"Final, out of sample score (RMSE): {score}")
# Write the cross-validated prediction
oos_y = pd.DataFrame(oos_y)
oos_pred = pd.DataFrame(oos_pred)
oosDF = pd.concat( [df, oos_y, oos_pred],axis=1 )
#oosDF.to_csv(filename_write,index=False)
###Output
Fold #1
Fold score (RMSE): 0.6814299426511208
Fold #2
Fold score (RMSE): 0.45486513719487165
Fold #3
Fold score (RMSE): 0.571615041876392
Fold #4
Fold score (RMSE): 0.46416356081116916
Fold #5
Fold score (RMSE): 1.0426518491685475
Final, out of sample score (RMSE): 0.678316077597408
###Markdown
The code above trains each fold for a fixed number of epochs rather than stopping early, so it does not report an epoch count itself; when early stopping is used, each fold reports the number of epochs it actually needed, and a common technique is to then train on the entire dataset for the average of those counts. Classification with Stratified K-Fold Cross-Validation The following code trains and fits the jh-simple-dataset dataset with cross-validation to generate out-of-sample predictions. It also writes out the out-of-sample (predictions on the test set) results. It is good to perform a stratified k-fold cross-validation with classification data. This ensures that the percentages of each class remain the same across all folds. To do this, make use of the **StratifiedKFold** object, instead of the **KFold** object used in regression.
###Code
import pandas as pd
from scipy.stats import zscore
# Read the data set
df = pd.read_csv(
"https://data.heatonresearch.com/data/t81-558/jh-simple-dataset.csv",
na_values=['NA','?'])
# Generate dummies for job
df = pd.concat([df,pd.get_dummies(df['job'],prefix="job")],axis=1)
df.drop('job', axis=1, inplace=True)
# Generate dummies for area
df = pd.concat([df,pd.get_dummies(df['area'],prefix="area")],axis=1)
df.drop('area', axis=1, inplace=True)
# Missing values for income
med = df['income'].median()
df['income'] = df['income'].fillna(med)
# Standardize ranges
df['income'] = zscore(df['income'])
df['aspect'] = zscore(df['aspect'])
df['save_rate'] = zscore(df['save_rate'])
df['age'] = zscore(df['age'])
df['subscriptions'] = zscore(df['subscriptions'])
# Convert to numpy - Classification
x_columns = df.columns.drop('product').drop('id')
x = df[x_columns].values
dummies = pd.get_dummies(df['product']) # Classification
products = dummies.columns
y = dummies.values
###Output
_____no_output_____
###Markdown
We will assume 500 epochs, and not use early stopping. Later we will see how we can estimate a more optimal epoch count.
###Code
import pandas as pd
import os
import numpy as np
from sklearn import metrics
from sklearn.model_selection import StratifiedKFold
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Activation
# np.argmax(pred,axis=1)
# Cross-validate
# Use for StratifiedKFold classification
kf = StratifiedKFold(5, shuffle=True, random_state=42)
oos_y = []
oos_pred = []
fold = 0
# Must pass y so StratifiedKFold can stratify on it
for train, test in kf.split(x,df['product']):
fold+=1
print(f"Fold #{fold}")
x_train = x[train]
y_train = y[train]
x_test = x[test]
y_test = y[test]
model = Sequential()
model.add(Dense(50, input_dim=x.shape[1], activation='relu')) # Hidden 1
model.add(Dense(25, activation='relu')) # Hidden 2
model.add(Dense(y.shape[1],activation='softmax')) # Output
model.compile(loss='categorical_crossentropy', optimizer='adam')
model.fit(x_train,y_train,validation_data=(x_test,y_test),verbose=0,\
epochs=EPOCHS)
pred = model.predict(x_test)
oos_y.append(y_test)
# raw probabilities to chosen class (highest probability)
pred = np.argmax(pred,axis=1)
oos_pred.append(pred)
# Measure this fold's accuracy
y_compare = np.argmax(y_test,axis=1) # For accuracy calculation
score = metrics.accuracy_score(y_compare, pred)
print(f"Fold score (accuracy): {score}")
# Build the oos prediction list and calculate the error.
oos_y = np.concatenate(oos_y)
oos_pred = np.concatenate(oos_pred)
oos_y_compare = np.argmax(oos_y,axis=1) # For accuracy calculation
score = metrics.accuracy_score(oos_y_compare, oos_pred)
print(f"Final score (accuracy): {score}")
# Write the cross-validated prediction
oos_y = pd.DataFrame(oos_y)
oos_pred = pd.DataFrame(oos_pred)
oosDF = pd.concat( [df, oos_y, oos_pred],axis=1 )
#oosDF.to_csv(filename_write,index=False)
###Output
Fold #1
Fold score (accuracy): 0.6325
Fold #2
Fold score (accuracy): 0.6725
Fold #3
Fold score (accuracy): 0.6975
Fold #4
Fold score (accuracy): 0.6575
Fold #5
Fold score (accuracy): 0.675
Final score (accuracy): 0.667
###Markdown
Training with both a Cross-Validation and a Holdout SetIf you have a considerable amount of data, it is always valuable to set aside a holdout set before you cross-validate. This hold out set will be the final evaluation before you make use of your model for its real-world use. Figure 5.HOLDOUT shows this division.**Figure 5.HOLDOUT: Cross Validation and a Holdout Set**The following program makes use of a holdout set, and then still cross-validates.
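Before the holdout experiment, one quick follow-up on the classification run above: the integer class indices in `oos_pred` can be mapped back to product names with the `products` columns saved earlier. The sketch below is an added illustration; it assumes the classification cells above have been run so that `products`, `oos_y`, and `oos_pred` exist.
###Code
import numpy as np
import pandas as pd

def label_predictions(products, oos_y, oos_pred):
    """Map integer class indices back to product names (sketch).

    products : the dummy column labels saved earlier (dummies.columns)
    oos_y    : the concatenated one-hot targets
    oos_pred : the concatenated predicted class indices
    """
    expected = products[np.argmax(np.asarray(oos_y), axis=1)]
    predicted = products[np.asarray(oos_pred).ravel()]
    return pd.DataFrame({'expected': expected, 'predicted': predicted})

# Usage (assumes the classification cells above were run):
# print(label_predictions(products, oos_y, oos_pred).head())
###Output
_____no_output_____
###Markdown
With that noted, the holdout experiment begins with the usual data preparation.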
###Code
import pandas as pd
from scipy.stats import zscore
from sklearn.model_selection import train_test_split
# Read the data set
df = pd.read_csv(
"https://data.heatonresearch.com/data/t81-558/jh-simple-dataset.csv",
na_values=['NA','?'])
# Generate dummies for job
df = pd.concat([df,pd.get_dummies(df['job'],prefix="job")],axis=1)
df.drop('job', axis=1, inplace=True)
# Generate dummies for area
df = pd.concat([df,pd.get_dummies(df['area'],prefix="area")],axis=1)
df.drop('area', axis=1, inplace=True)
# Generate dummies for product
df = pd.concat([df,pd.get_dummies(df['product'],prefix="product")],axis=1)
df.drop('product', axis=1, inplace=True)
# Missing values for income
med = df['income'].median()
df['income'] = df['income'].fillna(med)
# Standardize ranges
df['income'] = zscore(df['income'])
df['aspect'] = zscore(df['aspect'])
df['save_rate'] = zscore(df['save_rate'])
df['subscriptions'] = zscore(df['subscriptions'])
# Convert to numpy - Regression (the target is age)
x_columns = df.columns.drop('age').drop('id')
x = df[x_columns].values
y = df['age'].values
###Output
_____no_output_____
###Markdown
Now that the data has been preprocessed, we are ready to build the neural network.
###Code
from sklearn.model_selection import train_test_split
import pandas as pd
import os
import numpy as np
from sklearn import metrics
from scipy.stats import zscore
from sklearn.model_selection import KFold
# Keras imports (needed if this cell is run on its own)
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
# Keep a 10% holdout
x_main, x_holdout, y_main, y_holdout = train_test_split(
x, y, test_size=0.10)
# Cross-validate
kf = KFold(5)
oos_y = []
oos_pred = []
fold = 0
for train, test in kf.split(x_main):
fold+=1
print(f"Fold #{fold}")
x_train = x_main[train]
y_train = y_main[train]
x_test = x_main[test]
y_test = y_main[test]
model = Sequential()
model.add(Dense(20, input_dim=x.shape[1], activation='relu'))
model.add(Dense(5, activation='relu'))
model.add(Dense(1))
model.compile(loss='mean_squared_error', optimizer='adam')
model.fit(x_train,y_train,validation_data=(x_test,y_test),
verbose=0,epochs=EPOCHS)
pred = model.predict(x_test)
oos_y.append(y_test)
oos_pred.append(pred)
# Measure this fold's RMSE
score = np.sqrt(metrics.mean_squared_error(pred,y_test))
print(f"Fold score (RMSE): {score}")
# Build the oos prediction list and calculate the error.
oos_y = np.concatenate(oos_y)
oos_pred = np.concatenate(oos_pred)
score = np.sqrt(metrics.mean_squared_error(oos_pred,oos_y))
print()
print(f"Cross-validated score (RMSE): {score}")
# Write the cross-validated prediction (from the last neural network)
holdout_pred = model.predict(x_holdout)
score = np.sqrt(metrics.mean_squared_error(holdout_pred,y_holdout))
print(f"Holdout score (RMSE): {score}")
###Output
Fold #1
Fold score (RMSE): 0.544195299216696
Fold #2
Fold score (RMSE): 0.48070599342910353
Fold #3
Fold score (RMSE): 0.7034584765928998
Fold #4
Fold score (RMSE): 0.5397141785190473
Fold #5
Fold score (RMSE): 24.126205213080077
Cross-validated score (RMSE): 10.801732731207947
Holdout score (RMSE): 24.097657947297677
###Markdown
T81-558: Applications of Deep Neural Networks**Module 5: Regularization and Dropout*** Instructor: [Jeff Heaton](https://sites.wustl.edu/jeffheaton/), McKelvey School of Engineering, [Washington University in St. Louis](https://engineering.wustl.edu/Programs/Pages/default.aspx)* For more information visit the [class website](https://sites.wustl.edu/jeffheaton/t81-558/). Module 5 Material* Part 5.1: Part 5.1: Introduction to Regularization: Ridge and Lasso [[Video]](https://www.youtube.com/watch?v=jfgRtCYjoBs&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_05_1_reg_ridge_lasso.ipynb)* **Part 5.2: Using K-Fold Cross Validation with Keras** [[Video]](https://www.youtube.com/watch?v=maiQf8ray_s&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_05_2_kfold.ipynb)* Part 5.3: Using L1 and L2 Regularization with Keras to Decrease Overfitting [[Video]](https://www.youtube.com/watch?v=JEWzWv1fBFQ&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_05_3_keras_l1_l2.ipynb)* Part 5.4: Drop Out for Keras to Decrease Overfitting [[Video]](https://www.youtube.com/watch?v=bRyOi0L6Rs8&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_05_4_dropout.ipynb)* Part 5.5: Benchmarking Keras Deep Learning Regularization Techniques [[Video]](https://www.youtube.com/watch?v=1NLBwPumUAs&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_05_5_bootstrap.ipynb) Part 5.2: Using K-Fold Cross-validation with KerasCross-validation can be used for a variety of purposes in predictive modeling. These include:* Generating out-of-sample predictions from a neural network* Estimate a good number of epochs to train a neural network for (early stopping)* Evaluate the effectiveness of certain hyperparameters, such as activation functions, neuron counts, and layer countsCross-validation uses a number of folds, and multiple models, to provide each segment of data a chance to serve as both the validation and training set. It is important to note that there will be one model (neural network) for each fold. To generate predictions for new data, which is data not present in the training set, predictions from the fold models can be handled in several ways:* Choose the model that had the highest validation score as the final model.* Preset new data to the 5 models (one for each fold) and average the result (this is an [ensemble](https://en.wikipedia.org/wiki/Ensemble_learning)).* Retrain a new model (using the same settings as the cross-validation) on the entire dataset. Train for as many epochs, and with the same hidden layer structure.Generally, I prefer the last approach and will retrain a model on the entire data set once I have selected hyper-parameters. Of course, I will always set aside a final holdout set for model validation that I do not use in any aspect of the training process. Regression vs Classification K-Fold Cross-ValidationRegression and classification are handled somewhat differently with regards to cross-validation. Regression is the simpler case where you can simply break up the data set into K folds with little regard for where each item lands. For regression it is best that the data items fall into the folds as randomly as possible. It is also important to remember that not every fold will necessarily have exactly the same number of data items. It is not always possible for the data set to be evenly divided into K folds. 
For regression cross-validation we will use the Scikit-Learn class **KFold**.Cross validation for classification could also use the **KFold** object; however, this technique would not ensure that the class balance remains the same in each fold as it was in the original. It is very important that the balance of classes that a model was trained on remains the same (or similar) to the training set. A drift in this distribution is one of the most important things to monitor after a trained model has been placed into actual use. Because of this, we want to make sure that the cross-validation itself does not introduce an unintended shift. This is referred to as stratified sampling and is accomplished by using the Scikit-Learn object **StratifiedKFold** in place of **KFold** whenever you are using classification. In summary, the following two objects in Scikit-Learn should be used:* **KFold** When dealing with a regression problem.* **StratifiedKFold** When dealing with a classification problem.The following two sections demonstrate cross-validation with classification and regression. Out-of-Sample Regression Predictions with K-Fold Cross-ValidationThe following code trains the simple dataset using a 5-fold cross-validation. The expected performance of a neural network, of the type trained here, would be the score for the generated out-of-sample predictions. We begin by preparing a feature vector using the jh-simple-dataset to predict age. This is a regression problem.
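One quick aside on stratification before the data preparation: the same class-balance concern applies when carving off a holdout set from classification data, and `train_test_split` supports this directly through its `stratify` argument. Below is a minimal sketch with synthetic labels (illustration only, not the course data).
###Code
import numpy as np
from sklearn.model_selection import train_test_split

# Synthetic, imbalanced labels for illustration only
y_demo = np.array([0] * 90 + [1] * 10)
x_demo = np.zeros((len(y_demo), 1))

x_main, x_hold, y_main, y_hold = train_test_split(
    x_demo, y_demo, test_size=0.10, random_state=42, stratify=y_demo)

# Both pieces keep roughly the original 90/10 class balance
print(f"main class-1 share:    {y_main.mean():.2f}")
print(f"holdout class-1 share: {y_hold.mean():.2f}")
###Output
_____no_output_____
###Markdown
With that noted, the data preparation follows.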
###Code
import pandas as pd
from scipy.stats import zscore
from sklearn.model_selection import train_test_split
# Read the data set
df = pd.read_csv(
"https://data.heatonresearch.com/data/t81-558/jh-simple-dataset.csv",
na_values=['NA','?'])
# Generate dummies for job
df = pd.concat([df,pd.get_dummies(df['job'],prefix="job")],axis=1)
df.drop('job', axis=1, inplace=True)
# Generate dummies for area
df = pd.concat([df,pd.get_dummies(df['area'],prefix="area")],axis=1)
df.drop('area', axis=1, inplace=True)
# Generate dummies for product
df = pd.concat([df,pd.get_dummies(df['product'],prefix="product")],axis=1)
df.drop('product', axis=1, inplace=True)
# Missing values for income
med = df['income'].median()
df['income'] = df['income'].fillna(med)
# Standardize ranges
df['income'] = zscore(df['income'])
df['aspect'] = zscore(df['aspect'])
df['save_rate'] = zscore(df['save_rate'])
df['subscriptions'] = zscore(df['subscriptions'])
# Convert to numpy - Regression (the target is age)
x_columns = df.columns.drop('age').drop('id')
x = df[x_columns].values
y = df['age'].values
###Output
_____no_output_____
###Markdown
Now that the feature vector is created, a 5-fold cross-validation can be performed to generate out-of-sample predictions. We will assume 500 epochs, and not use early stopping. Later we will see how we can estimate a more optimal epoch count.
###Code
import pandas as pd
import os
import numpy as np
from sklearn import metrics
from scipy.stats import zscore
from sklearn.model_selection import KFold
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Activation
# Cross-Validate
kf = KFold(5, shuffle=True, random_state=42) # KFold is used for this regression problem
oos_y = []
oos_pred = []
fold = 0
for train, test in kf.split(x):
fold+=1
print(f"Fold #{fold}")
x_train = x[train]
y_train = y[train]
x_test = x[test]
y_test = y[test]
model = Sequential()
model.add(Dense(20, input_dim=x.shape[1], activation='relu'))
model.add(Dense(10, activation='relu'))
model.add(Dense(1))
model.compile(loss='mean_squared_error', optimizer='adam')
model.fit(x_train,y_train,validation_data=(x_test,y_test),verbose=0,epochs=500)
pred = model.predict(x_test)
oos_y.append(y_test)
oos_pred.append(pred)
# Measure this fold's RMSE
score = np.sqrt(metrics.mean_squared_error(pred,y_test))
print(f"Fold score (RMSE): {score}")
# Build the oos prediction list and calculate the error.
oos_y = np.concatenate(oos_y)
oos_pred = np.concatenate(oos_pred)
score = np.sqrt(metrics.mean_squared_error(oos_pred,oos_y))
print(f"Final, out of sample score (RMSE): {score}")
# Write the cross-validated prediction
oos_y = pd.DataFrame(oos_y)
oos_pred = pd.DataFrame(oos_pred)
oosDF = pd.concat( [df, oos_y, oos_pred],axis=1 )
#oosDF.to_csv(filename_write,index=False)
###Output
Fold #1
Fold score (RMSE): 0.6876503822032343
Fold #2
Fold score (RMSE): 0.5095184954014996
Fold #3
Fold score (RMSE): 0.6612122152974527
Fold #4
Fold score (RMSE): 0.45126351507608137
Fold #5
Fold score (RMSE): 1.0501879909703928
Final, out of sample score (RMSE): 0.7037339433871692
###Markdown
The code above trains each fold for a fixed number of epochs rather than stopping early, so it does not report an epoch count itself; when early stopping is used, each fold reports the number of epochs it actually needed, and a common technique is to then train on the entire dataset for the average of those counts. Classification with Stratified K-Fold Cross-Validation The following code trains and fits the jh-simple-dataset dataset with cross-validation to generate out-of-sample predictions. It also writes out the out-of-sample (predictions on the test set) results. It is good to perform a stratified k-fold cross-validation with classification data. This ensures that the percentages of each class remain the same across all folds. To do this, make use of the **StratifiedKFold** object, instead of the **KFold** object used in regression.
###Code
import pandas as pd
from scipy.stats import zscore
# Read the data set
df = pd.read_csv(
"https://data.heatonresearch.com/data/t81-558/jh-simple-dataset.csv",
na_values=['NA','?'])
# Generate dummies for job
df = pd.concat([df,pd.get_dummies(df['job'],prefix="job")],axis=1)
df.drop('job', axis=1, inplace=True)
# Generate dummies for area
df = pd.concat([df,pd.get_dummies(df['area'],prefix="area")],axis=1)
df.drop('area', axis=1, inplace=True)
# Missing values for income
med = df['income'].median()
df['income'] = df['income'].fillna(med)
# Standardize ranges
df['income'] = zscore(df['income'])
df['aspect'] = zscore(df['aspect'])
df['save_rate'] = zscore(df['save_rate'])
df['age'] = zscore(df['age'])
df['subscriptions'] = zscore(df['subscriptions'])
# Convert to numpy - Classification
x_columns = df.columns.drop('product').drop('id')
x = df[x_columns].values
dummies = pd.get_dummies(df['product']) # Classification
products = dummies.columns
y = dummies.values
###Output
_____no_output_____
###Markdown
We will assume 500 epochs, and not use early stopping. Later we will see how we can estimate a more optimal epoch count.
###Code
import pandas as pd
import os
import numpy as np
from sklearn import metrics
from sklearn.model_selection import StratifiedKFold
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Activation
# np.argmax(pred,axis=1)
# Cross-validate
kf = StratifiedKFold(5, shuffle=True, random_state=42) # Use for StratifiedKFold classification
oos_y = []
oos_pred = []
fold = 0
for train, test in kf.split(x,df['product']): # Must pass y so StratifiedKFold can stratify on it
fold+=1
print(f"Fold #{fold}")
x_train = x[train]
y_train = y[train]
x_test = x[test]
y_test = y[test]
model = Sequential()
model.add(Dense(50, input_dim=x.shape[1], activation='relu')) # Hidden 1
model.add(Dense(25, activation='relu')) # Hidden 2
model.add(Dense(y.shape[1],activation='softmax')) # Output
model.compile(loss='categorical_crossentropy', optimizer='adam')
model.fit(x_train,y_train,validation_data=(x_test,y_test),verbose=0,epochs=500)
pred = model.predict(x_test)
oos_y.append(y_test)
pred = np.argmax(pred,axis=1) # raw probabilities to chosen class (highest probability)
oos_pred.append(pred)
# Measure this fold's accuracy
y_compare = np.argmax(y_test,axis=1) # For accuracy calculation
score = metrics.accuracy_score(y_compare, pred)
print(f"Fold score (accuracy): {score}")
# Build the oos prediction list and calculate the error.
oos_y = np.concatenate(oos_y)
oos_pred = np.concatenate(oos_pred)
oos_y_compare = np.argmax(oos_y,axis=1) # For accuracy calculation
score = metrics.accuracy_score(oos_y_compare, oos_pred)
print(f"Final score (accuracy): {score}")
# Write the cross-validated prediction
oos_y = pd.DataFrame(oos_y)
oos_pred = pd.DataFrame(oos_pred)
oosDF = pd.concat( [df, oos_y, oos_pred],axis=1 )
#oosDF.to_csv(filename_write,index=False)
###Output
Fold #1
###Markdown
Training with both a Cross-Validation and a Holdout SetIf you have a considerable amount of data, it is always valuable to set aside a holdout set before you cross-validate. This hold out set will be the final evaluation before you make use of your model for its real-world use.The following program makes use of a holdout set, and then still cross-validates.
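One small detail worth noticing: the earlier cross-validation cells shuffled the data with `KFold(5, shuffle=True, random_state=42)`, while the holdout experiment below constructs `KFold(5)` without shuffling, so each fold is a contiguous block of rows. If the rows of the source file are ordered in any way, shuffling keeps the folds comparable. A small added sketch:
###Code
import numpy as np
from sklearn.model_selection import KFold

x_demo = np.arange(10).reshape(-1, 1)

# Without shuffling, each fold's test portion is a contiguous block of rows
for train, test in KFold(5).split(x_demo):
    print("unshuffled test rows:", test)

# Shuffled construction, mirroring the earlier cross-validation cells
kf = KFold(5, shuffle=True, random_state=42)
###Output
_____no_output_____
###Markdown
The holdout experiment itself follows.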
###Code
import pandas as pd
from scipy.stats import zscore
from sklearn.model_selection import train_test_split
# Read the data set
df = pd.read_csv(
"https://data.heatonresearch.com/data/t81-558/jh-simple-dataset.csv",
na_values=['NA','?'])
# Generate dummies for job
df = pd.concat([df,pd.get_dummies(df['job'],prefix="job")],axis=1)
df.drop('job', axis=1, inplace=True)
# Generate dummies for area
df = pd.concat([df,pd.get_dummies(df['area'],prefix="area")],axis=1)
df.drop('area', axis=1, inplace=True)
# Generate dummies for product
df = pd.concat([df,pd.get_dummies(df['product'],prefix="product")],axis=1)
df.drop('product', axis=1, inplace=True)
# Missing values for income
med = df['income'].median()
df['income'] = df['income'].fillna(med)
# Standardize ranges
df['income'] = zscore(df['income'])
df['aspect'] = zscore(df['aspect'])
df['save_rate'] = zscore(df['save_rate'])
df['subscriptions'] = zscore(df['subscriptions'])
# Convert to numpy - Regression (the target is age)
x_columns = df.columns.drop('age').drop('id')
x = df[x_columns].values
y = df['age'].values
from sklearn.model_selection import train_test_split
import pandas as pd
import os
import numpy as np
from sklearn import metrics
from scipy.stats import zscore
from sklearn.model_selection import KFold
# Keras imports (needed if this cell is run on its own)
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
# Keep a 10% holdout
x_main, x_holdout, y_main, y_holdout = train_test_split(
x, y, test_size=0.10)
# Cross-validate
kf = KFold(5)
oos_y = []
oos_pred = []
fold = 0
for train, test in kf.split(x_main):
fold+=1
print(f"Fold #{fold}")
x_train = x_main[train]
y_train = y_main[train]
x_test = x_main[test]
y_test = y_main[test]
model = Sequential()
model.add(Dense(20, input_dim=x.shape[1], activation='relu'))
model.add(Dense(5, activation='relu'))
model.add(Dense(1))
model.compile(loss='mean_squared_error', optimizer='adam')
model.fit(x_train,y_train,validation_data=(x_test,y_test),verbose=0,epochs=500)
pred = model.predict(x_test)
oos_y.append(y_test)
oos_pred.append(pred)
# Measure this fold's RMSE
score = np.sqrt(metrics.mean_squared_error(pred,y_test))
print(f"Fold score (RMSE): {score}")
# Build the oos prediction list and calculate the error.
oos_y = np.concatenate(oos_y)
oos_pred = np.concatenate(oos_pred)
score = np.sqrt(metrics.mean_squared_error(oos_pred,oos_y))
print()
print(f"Cross-validated score (RMSE): {score}")
# Write the cross-validated prediction (from the last neural network)
holdout_pred = model.predict(x_holdout)
score = np.sqrt(metrics.mean_squared_error(holdout_pred,y_holdout))
print(f"Holdout score (RMSE): {score}")
###Output
Fold #1
Fold score (RMSE): 0.6675838972991742
Fold #2
Fold score (RMSE): 24.318468693736275
Fold #3
Fold score (RMSE): 0.6680420531041584
Fold #4
Fold score (RMSE): 0.6723341303330412
Fold #5
Fold score (RMSE): 0.6576035251447025
Cross-validated score (RMSE): 10.891871681506109
Holdout score (RMSE): 1.1513658069178045
###Markdown
T81-558: Applications of Deep Neural Networks**Module 5: Regularization and Dropout*** Instructor: [Jeff Heaton](https://sites.wustl.edu/jeffheaton/), McKelvey School of Engineering, [Washington University in St. Louis](https://engineering.wustl.edu/Programs/Pages/default.aspx)* For more information visit the [class website](https://sites.wustl.edu/jeffheaton/t81-558/). Module 5 Material* Part 5.1: Part 5.1: Introduction to Regularization: Ridge and Lasso [[Video]](https://www.youtube.com/watch?v=jfgRtCYjoBs&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_05_1_reg_ridge_lasso.ipynb)* **Part 5.2: Using K-Fold Cross Validation with Keras** [[Video]](https://www.youtube.com/watch?v=maiQf8ray_s&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_05_2_kfold.ipynb)* Part 5.3: Using L1 and L2 Regularization with Keras to Decrease Overfitting [[Video]](https://www.youtube.com/watch?v=JEWzWv1fBFQ&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_05_3_keras_l1_l2.ipynb)* Part 5.4: Drop Out for Keras to Decrease Overfitting [[Video]](https://www.youtube.com/watch?v=bRyOi0L6Rs8&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_05_4_dropout.ipynb)* Part 5.5: Benchmarking Keras Deep Learning Regularization Techniques [[Video]](https://www.youtube.com/watch?v=1NLBwPumUAs&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_05_5_bootstrap.ipynb) Google CoLab InstructionsThe following code ensures that Google CoLab is running the correct version of TensorFlow.
###Code
try:
%tensorflow_version 2.x
COLAB = True
print("Note: using Google CoLab")
except:
print("Note: not using Google CoLab")
COLAB = False
###Output
Note: not using Google CoLab
###Markdown
Part 5.2: Using K-Fold Cross-validation with KerasCross-validation can be used for a variety of purposes in predictive modeling. These include:* Generating out-of-sample predictions from a neural network* Estimate a good number of epochs to train a neural network for (early stopping)* Evaluate the effectiveness of certain hyperparameters, such as activation functions, neuron counts, and layer countsCross-validation uses a number of folds, and multiple models, to provide each segment of data a chance to serve as both the validation and training set. Cross validation is shown in Figure 5.CROSS.**Figure 5.CROSS: K-Fold Crossvalidation**It is important to note that there will be one model (neural network) for each fold. To generate predictions for new data, which is data not present in the training set, predictions from the fold models can be handled in several ways:* Choose the model that had the highest validation score as the final model.* Preset new data to the 5 models (one for each fold) and average the result (this is an [ensemble](https://en.wikipedia.org/wiki/Ensemble_learning)).* Retrain a new model (using the same settings as the cross-validation) on the entire dataset. Train for as many epochs, and with the same hidden layer structure.Generally, I prefer the last approach and will retrain a model on the entire data set once I have selected hyper-parameters. Of course, I will always set aside a final holdout set for model validation that I do not use in any aspect of the training process. Regression vs Classification K-Fold Cross-ValidationRegression and classification are handled somewhat differently with regards to cross-validation. Regression is the simpler case where you can simply break up the data set into K folds with little regard for where each item lands. For regression it is best that the data items fall into the folds as randomly as possible. It is also important to remember that not every fold will necessarily have exactly the same number of data items. It is not always possible for the data set to be evenly divided into K folds. For regression cross-validation we will use the Scikit-Learn class **KFold**.Cross validation for classification could also use the **KFold** object; however, this technique would not ensure that the class balance remains the same in each fold as it was in the original. It is very important that the balance of classes that a model was trained on remains the same (or similar) to the training set. A drift in this distribution is one of the most important things to monitor after a trained model has been placed into actual use. Because of this, we want to make sure that the cross-validation itself does not introduce an unintended shift. This is referred to as stratified sampling and is accomplished by using the Scikit-Learn object **StratifiedKFold** in place of **KFold** whenever you are using classification. In summary, the following two objects in Scikit-Learn should be used:* **KFold** When dealing with a regression problem.* **StratifiedKFold** When dealing with a classification problem.The following two sections demonstrate cross-validation with classification and regression. Out-of-Sample Regression Predictions with K-Fold Cross-ValidationThe following code trains the simple dataset using a 5-fold cross-validation. The expected performance of a neural network, of the type trained here, would be the score for the generated out-of-sample predictions. We begin by preparing a feature vector using the jh-simple-dataset to predict age. 
This is a regression problem.
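As one more aside before the data preparation: besides concatenating the out-of-sample predictions into a single score, it is often useful to summarise the per-fold scores themselves. A small sketch follows; `fold_scores` is a hypothetical list, and the values simply echo fold RMSEs reported earlier in this notebook.
###Code
import numpy as np

# Hypothetical per-fold RMSE values collected inside the cross-validation loop
fold_scores = [0.68, 0.45, 0.57, 0.46, 1.04]

print(f"Mean fold RMSE: {np.mean(fold_scores):.3f}")
print(f"Std of fold RMSE: {np.std(fold_scores):.3f}")
###Output
_____no_output_____
###Markdown
A large spread across folds is itself useful diagnostic information. The data preparation follows below.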
###Code
import pandas as pd
from scipy.stats import zscore
from sklearn.model_selection import train_test_split
# Read the data set
df = pd.read_csv(
"https://data.heatonresearch.com/data/t81-558/jh-simple-dataset.csv",
na_values=['NA','?'])
# Generate dummies for job
df = pd.concat([df,pd.get_dummies(df['job'],prefix="job")],axis=1)
df.drop('job', axis=1, inplace=True)
# Generate dummies for area
df = pd.concat([df,pd.get_dummies(df['area'],prefix="area")],axis=1)
df.drop('area', axis=1, inplace=True)
# Generate dummies for product
df = pd.concat([df,pd.get_dummies(df['product'],prefix="product")],axis=1)
df.drop('product', axis=1, inplace=True)
# Missing values for income
med = df['income'].median()
df['income'] = df['income'].fillna(med)
# Standardize ranges
df['income'] = zscore(df['income'])
df['aspect'] = zscore(df['aspect'])
df['save_rate'] = zscore(df['save_rate'])
df['subscriptions'] = zscore(df['subscriptions'])
# Convert to numpy - Regression (predicting age)
x_columns = df.columns.drop('age').drop('id')
x = df[x_columns].values
y = df['age'].values
###Output
_____no_output_____
###Markdown
Now that the feature vector is created, a 5-fold cross-validation can be performed to generate out-of-sample predictions. We will assume 500 epochs and not use early stopping; a sketch of how a more optimal epoch count could be estimated follows the results below.
###Code
import pandas as pd
import os
import numpy as np
from sklearn import metrics
from scipy.stats import zscore
from sklearn.model_selection import KFold
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Activation
# Cross-Validate
kf = KFold(5, shuffle=True, random_state=42) # KFold, since this is a regression problem
oos_y = []
oos_pred = []
fold = 0
for train, test in kf.split(x):
fold+=1
print(f"Fold #{fold}")
x_train = x[train]
y_train = y[train]
x_test = x[test]
y_test = y[test]
model = Sequential()
model.add(Dense(20, input_dim=x.shape[1], activation='relu'))
model.add(Dense(10, activation='relu'))
model.add(Dense(1))
model.compile(loss='mean_squared_error', optimizer='adam')
model.fit(x_train,y_train,validation_data=(x_test,y_test),verbose=0,
epochs=500)
pred = model.predict(x_test)
oos_y.append(y_test)
oos_pred.append(pred)
# Measure this fold's RMSE
score = np.sqrt(metrics.mean_squared_error(pred,y_test))
print(f"Fold score (RMSE): {score}")
# Build the oos prediction list and calculate the error.
oos_y = np.concatenate(oos_y)
oos_pred = np.concatenate(oos_pred)
score = np.sqrt(metrics.mean_squared_error(oos_pred,oos_y))
print(f"Final, out of sample score (RMSE): {score}")
# Write the cross-validated prediction
oos_y = pd.DataFrame(oos_y)
oos_pred = pd.DataFrame(oos_pred)
oosDF = pd.concat( [df, oos_y, oos_pred],axis=1 )
#oosDF.to_csv(filename_write,index=False)
###Output
Fold #1
Fold score (RMSE): 0.6245484893737087
Fold #2
Fold score (RMSE): 0.5802295511082306
Fold #3
Fold score (RMSE): 0.6300965769274195
Fold #4
Fold score (RMSE): 0.4550931884841248
Fold #5
Fold score (RMSE): 1.0517027192572377
Final, out of sample score (RMSE): 0.6981314007708873
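###Markdown
The cross-validation above used a fixed 500 epochs for every fold. If we instead wanted each fold to decide its own stopping point, and then use the folds to estimate a good epoch count, an EarlyStopping callback could be added to the fold loop. The following is only a minimal sketch of that idea: build_model is a hypothetical helper that returns a compiled network like the one above, and the patience value of 25 is an arbitrary choice.
###Code
import numpy as np
from sklearn.model_selection import KFold
from tensorflow.keras.callbacks import EarlyStopping

def estimate_epochs(build_model, x, y, folds=5, max_epochs=500):
    """Cross-validate with early stopping and return the average number of
    epochs the folds actually trained for."""
    kf = KFold(folds, shuffle=True, random_state=42)
    epochs_used = []
    for train, test in kf.split(x):
        model = build_model()  # hypothetical helper returning a compiled model
        monitor = EarlyStopping(monitor='val_loss', patience=25,
                                restore_best_weights=True)
        history = model.fit(x[train], y[train],
                            validation_data=(x[test], y[test]),
                            callbacks=[monitor], verbose=0, epochs=max_epochs)
        # The history length is the number of epochs this fold actually ran.
        epochs_used.append(len(history.history['val_loss']))
    return int(np.mean(epochs_used))

# Example usage (not run here):
# final_epochs = estimate_epochs(build_model, x, y)
# A final model could then be trained on all of x, y for final_epochs epochs.
###Output
_____no_output_____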
###Markdown
The out-of-sample RMSE reported above is the expected performance of this network on data it has never seen. When early stopping is used inside each fold (as in the sketch just shown), a common technique is to record the number of epochs each fold needed and then train a final model on the entire dataset for the average of those counts. Classification with Stratified K-Fold Cross-ValidationThe following code trains the jh-simple-dataset with cross-validation to generate out-of-sample predictions. It also writes out the out-of-sample (predictions on the test set) results.It is good to perform a stratified k-fold cross-validation with classification data. This ensures that the percentages of each class remain the same across all folds. To do this, make use of the **StratifiedKFold** object instead of the **KFold** object used in regression. A small self-contained illustration of this difference follows, before we prepare the classification data.
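###Markdown
The sketch below uses a small synthetic, imbalanced label array (90 examples of class 0 and 10 of class 1; nothing here comes from the dataset above): **StratifiedKFold** keeps each fold's minority share at the overall 10%, while plain **KFold** gives no such guarantee.
###Code
import numpy as np
from sklearn.model_selection import KFold, StratifiedKFold

# Synthetic, imbalanced labels: 90 of class 0 and 10 of class 1 (hypothetical data)
y_demo = np.array([0] * 90 + [1] * 10)
x_demo = np.zeros((100, 1))  # feature values are irrelevant to the split itself

def minority_share_per_fold(splitter, x, y):
    """Return the fraction of class-1 rows in each validation fold."""
    return [y[test].mean() for _, test in splitter.split(x, y)]

kfold_shares = minority_share_per_fold(
    KFold(5, shuffle=True, random_state=1), x_demo, y_demo)
strat_shares = minority_share_per_fold(
    StratifiedKFold(5, shuffle=True, random_state=1), x_demo, y_demo)

# strat_shares is 0.10 in every fold, matching the overall class balance;
# kfold_shares can drift away from 0.10 from fold to fold.
###Output
_____no_output_____
###Markdown
Returning to the jh-simple-dataset, the next cell prepares the classification feature vector.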
###Code
import pandas as pd
from scipy.stats import zscore
# Read the data set
df = pd.read_csv(
"https://data.heatonresearch.com/data/t81-558/jh-simple-dataset.csv",
na_values=['NA','?'])
# Generate dummies for job
df = pd.concat([df,pd.get_dummies(df['job'],prefix="job")],axis=1)
df.drop('job', axis=1, inplace=True)
# Generate dummies for area
df = pd.concat([df,pd.get_dummies(df['area'],prefix="area")],axis=1)
df.drop('area', axis=1, inplace=True)
# Missing values for income
med = df['income'].median()
df['income'] = df['income'].fillna(med)
# Standardize ranges
df['income'] = zscore(df['income'])
df['aspect'] = zscore(df['aspect'])
df['save_rate'] = zscore(df['save_rate'])
df['age'] = zscore(df['age'])
df['subscriptions'] = zscore(df['subscriptions'])
# Convert to numpy - Classification
x_columns = df.columns.drop('product').drop('id')
x = df[x_columns].values
dummies = pd.get_dummies(df['product']) # Classification
products = dummies.columns
y = dummies.values
###Output
_____no_output_____
###Markdown
We will assume 500 epochs, and not use early stopping. Later we will see how we can estimate a more optimal epoch count.
###Code
import pandas as pd
import os
import numpy as np
from sklearn import metrics
from sklearn.model_selection import StratifiedKFold
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Activation
# np.argmax(pred,axis=1)
# Cross-validate
# Use StratifiedKFold, since this is a classification problem
kf = StratifiedKFold(5, shuffle=True, random_state=42)
oos_y = []
oos_pred = []
fold = 0
# Must specify y for StratifiedKFold to stratify on
for train, test in kf.split(x,df['product']):
fold+=1
print(f"Fold #{fold}")
x_train = x[train]
y_train = y[train]
x_test = x[test]
y_test = y[test]
model = Sequential()
model.add(Dense(50, input_dim=x.shape[1], activation='relu')) # Hidden 1
model.add(Dense(25, activation='relu')) # Hidden 2
model.add(Dense(y.shape[1],activation='softmax')) # Output
model.compile(loss='categorical_crossentropy', optimizer='adam')
model.fit(x_train,y_train,validation_data=(x_test,y_test),verbose=0,epochs=500)
pred = model.predict(x_test)
oos_y.append(y_test)
# raw probabilities to chosen class (highest probability)
pred = np.argmax(pred,axis=1)
oos_pred.append(pred)
# Measure this fold's accuracy
y_compare = np.argmax(y_test,axis=1) # For accuracy calculation
score = metrics.accuracy_score(y_compare, pred)
print(f"Fold score (accuracy): {score}")
# Build the oos prediction list and calculate the error.
oos_y = np.concatenate(oos_y)
oos_pred = np.concatenate(oos_pred)
oos_y_compare = np.argmax(oos_y,axis=1) # For accuracy calculation
score = metrics.accuracy_score(oos_y_compare, oos_pred)
print(f"Final score (accuracy): {score}")
# Write the cross-validated prediction
oos_y = pd.DataFrame(oos_y)
oos_pred = pd.DataFrame(oos_pred)
oosDF = pd.concat( [df, oos_y, oos_pred],axis=1 )
#oosDF.to_csv(filename_write,index=False)
###Output
Fold #1
Fold score (accuracy): 0.6766169154228856
Fold #2
Fold score (accuracy): 0.6691542288557214
Fold #3
Fold score (accuracy): 0.6907730673316709
Fold #4
Fold score (accuracy): 0.6733668341708543
Fold #5
Fold score (accuracy): 0.654911838790932
Final score (accuracy): 0.673
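###Markdown
The loop above scores each fold's model and then lets it go out of scope. If the five fold models were instead collected in a list, new rows could be classified by averaging the softmax probabilities of all five models and taking the most probable class, which is the ensemble option described at the start of this part. The sketch below assumes a hypothetical list fold_models (built by adding fold_models.append(model) inside the loop) and a new feature matrix x_new with the same columns as x.
###Code
import numpy as np

def ensemble_classify(fold_models, x_new):
    """Classify new rows with an ensemble of the per-fold Keras models.

    fold_models : list of trained fold models (hypothetical; the loop above
                  would need fold_models.append(model) to build it).
    x_new       : numpy array with the same columns as x.
    Returns the index of the most probable class for each row.
    """
    # Average the softmax probability vectors produced by each fold model.
    probs = np.mean([m.predict(x_new) for m in fold_models], axis=0)
    return np.argmax(probs, axis=1)

# Example usage (not run here):
# predicted_products = products[ensemble_classify(fold_models, x_new)]
###Output
_____no_output_____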
###Markdown
Training with both a Cross-Validation and a Holdout SetIf you have a considerable amount of data, it is always valuable to set aside a holdout set before you cross-validate. This holdout set will be the final evaluation before you put your model to real-world use. Figure 5.HOLDOUT shows this division.**Figure 5.HOLDOUT: Cross-Validation and a Holdout Set**The following program sets aside a holdout set and then cross-validates on the remaining data.
###Code
import pandas as pd
from scipy.stats import zscore
from sklearn.model_selection import train_test_split
# Read the data set
df = pd.read_csv(
"https://data.heatonresearch.com/data/t81-558/jh-simple-dataset.csv",
na_values=['NA','?'])
# Generate dummies for job
df = pd.concat([df,pd.get_dummies(df['job'],prefix="job")],axis=1)
df.drop('job', axis=1, inplace=True)
# Generate dummies for area
df = pd.concat([df,pd.get_dummies(df['area'],prefix="area")],axis=1)
df.drop('area', axis=1, inplace=True)
# Generate dummies for product
df = pd.concat([df,pd.get_dummies(df['product'],prefix="product")],axis=1)
df.drop('product', axis=1, inplace=True)
# Missing values for income
med = df['income'].median()
df['income'] = df['income'].fillna(med)
# Standardize ranges
df['income'] = zscore(df['income'])
df['aspect'] = zscore(df['aspect'])
df['save_rate'] = zscore(df['save_rate'])
df['subscriptions'] = zscore(df['subscriptions'])
# Convert to numpy - Regression (predicting age)
x_columns = df.columns.drop('age').drop('id')
x = df[x_columns].values
y = df['age'].values
from sklearn.model_selection import train_test_split
import pandas as pd
import os
import numpy as np
from sklearn import metrics
from scipy.stats import zscore
from sklearn.model_selection import KFold
# Keras imports needed for the model built inside the fold loop below
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense
# Keep a 10% holdout
x_main, x_holdout, y_main, y_holdout = train_test_split(
x, y, test_size=0.10)
# Cross-validate
kf = KFold(5)
oos_y = []
oos_pred = []
fold = 0
for train, test in kf.split(x_main):
fold+=1
print(f"Fold #{fold}")
x_train = x_main[train]
y_train = y_main[train]
x_test = x_main[test]
y_test = y_main[test]
model = Sequential()
model.add(Dense(20, input_dim=x.shape[1], activation='relu'))
model.add(Dense(5, activation='relu'))
model.add(Dense(1))
model.compile(loss='mean_squared_error', optimizer='adam')
model.fit(x_train,y_train,validation_data=(x_test,y_test),
verbose=0,epochs=500)
pred = model.predict(x_test)
oos_y.append(y_test)
oos_pred.append(pred)
# Measure this fold's RMSE
score = np.sqrt(metrics.mean_squared_error(pred,y_test))
print(f"Fold score (RMSE): {score}")
# Build the oos prediction list and calculate the error.
oos_y = np.concatenate(oos_y)
oos_pred = np.concatenate(oos_pred)
score = np.sqrt(metrics.mean_squared_error(oos_pred,oos_y))
print()
print(f"Cross-validated score (RMSE): {score}")
# Score the holdout set using the model from the last fold
holdout_pred = model.predict(x_holdout)
score = np.sqrt(metrics.mean_squared_error(holdout_pred,y_holdout))
print(f"Holdout score (RMSE): {score}")
###Output
Fold #1
Fold score (RMSE): 24.299626704604506
Fold #2
Fold score (RMSE): 0.6609159891625663
Fold #3
Fold score (RMSE): 0.4997884237817687
Fold #4
Fold score (RMSE): 1.1084218284103058
Fold #5
Fold score (RMSE): 0.614899992174395
Cross-validated score (RMSE): 10.888206072135832
Holdout score (RMSE): 0.6283593821273058
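###Markdown
Note that the holdout score above comes from whichever model happened to be trained on the final fold. The approach preferred earlier in this part is to retrain a single model on all of the non-holdout data, using the same architecture and epoch count as the folds, and to use that model for the holdout evaluation. A minimal sketch of that idea, reusing the x_main, y_main, x_holdout and y_holdout arrays defined above:
###Code
import numpy as np
from sklearn import metrics
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense

# Retrain one final model on all non-holdout data (same settings as the folds).
final_model = Sequential()
final_model.add(Dense(20, input_dim=x_main.shape[1], activation='relu'))
final_model.add(Dense(5, activation='relu'))
final_model.add(Dense(1))
final_model.compile(loss='mean_squared_error', optimizer='adam')
final_model.fit(x_main, y_main, verbose=0, epochs=500)

# Evaluate the retrained model on the untouched holdout set.
final_holdout_pred = final_model.predict(x_holdout)
final_holdout_rmse = np.sqrt(
    metrics.mean_squared_error(y_holdout, final_holdout_pred))
###Output
_____no_output_____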
###Markdown
T81-558: Applications of Deep Neural Networks**Module 5: Regularization and Dropout*** Instructor: [Jeff Heaton](https://sites.wustl.edu/jeffheaton/), McKelvey School of Engineering, [Washington University in St. Louis](https://engineering.wustl.edu/Programs/Pages/default.aspx)* For more information visit the [class website](https://sites.wustl.edu/jeffheaton/t81-558/). Module 5 Material* Part 5.1: Part 5.1: Introduction to Regularization: Ridge and Lasso [[Video]](https://www.youtube.com/watch?v=jfgRtCYjoBs&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_05_1_reg_ridge_lasso.ipynb)* **Part 5.2: Using K-Fold Cross Validation with Keras** [[Video]](https://www.youtube.com/watch?v=maiQf8ray_s&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_05_2_kfold.ipynb)* Part 5.3: Using L1 and L2 Regularization with Keras to Decrease Overfitting [[Video]](https://www.youtube.com/watch?v=JEWzWv1fBFQ&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_05_3_keras_l1_l2.ipynb)* Part 5.4: Drop Out for Keras to Decrease Overfitting [[Video]](https://www.youtube.com/watch?v=bRyOi0L6Rs8&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_05_4_dropout.ipynb)* Part 5.5: Benchmarking Keras Deep Learning Regularization Techniques [[Video]](https://www.youtube.com/watch?v=1NLBwPumUAs&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](t81_558_class_05_5_bootstrap.ipynb) Part 5.2: Using K-Fold Cross-validation with KerasCross-validation can be used for a variety of purposes in predictive modeling. These include:* Generating out-of-sample predictions from a neural network* Estimate a good number of epochs to train a neural network for (early stopping)* Evaluate the effectiveness of certain hyperparameters, such as activation functions, neuron counts, and layer countsCross-validation uses a number of folds, and multiple models, to provide each segment of data a chance to serve as both the validation and training set. It is important to note that there will be one model (neural network) for each fold. To generate predictions for new data, which is data not present in the training set, predictions from the fold models can be handled in several ways:* Choose the model that had the highest validation score as the final model.* Preset new data to the 5 models (one for each fold) and average the result (this is an [ensemble](https://en.wikipedia.org/wiki/Ensemble_learning)).* Retrain a new model (using the same settings as the cross-validation) on the entire dataset. Train for as many epochs, and with the same hidden layer structure.Generally, I prefer the last approach and will retrain a model on the entire data set once I have selected hyper-parameters. Of course, I will always set aside a final holdout set for model validation that I do not use in any aspect of the training process. Regression vs Classification K-Fold Cross-ValidationRegression and classification are handled somewhat differently with regards to cross-validation. Regression is the simpler case where you can simply break up the data set into K folds with little regard for where each item lands. For regression it is best that the data items fall into the folds as randomly as possible. It is also important to remember that not every fold will necessarily have exactly the same number of data items. It is not always possible for the data set to be evenly divided into K folds. 
For regression cross-validation we will use the Scikit-Learn class **KFold**.Cross validation for classification could also use the **KFold** object; however, this technique would not ensure that the class balance remains the same in each fold as it was in the original. It is very important that the balance of classes that a model was trained on remains the same (or similar) to the training set. A drift in this distribution is one of the most important things to monitor after a trained model has been placed into actual use. Because of this, we want to make sure that the cross-validation itself does not introduce an unintended shift. This is referred to as stratified sampling and is accomplished by using the Scikit-Learn object **StratifiedKFold** in place of **KFold** whenever you are using classification. In summary, the following two objects in Scikit-Learn should be used:* **KFold** When dealing with a regression problem.* **StratifiedKFold** When dealing with a classification problem.The following two sections demonstrate cross-validation with classification and regression. Out-of-Sample Regression Predictions with K-Fold Cross-ValidationThe following code trains the simple dataset using a 5-fold cross-validation. The expected performance of a neural network, of the type trained here, would be the score for the generated out-of-sample predictions. We begin by preparing a feature vector using the jh-simple-dataset to predict age. This is a regression problem.
###Code
import pandas as pd
from scipy.stats import zscore
from sklearn.model_selection import train_test_split
# Read the data set
df = pd.read_csv(
"https://data.heatonresearch.com/data/t81-558/jh-simple-dataset.csv",
na_values=['NA','?'])
# Generate dummies for job
df = pd.concat([df,pd.get_dummies(df['job'],prefix="job")],axis=1)
df.drop('job', axis=1, inplace=True)
# Generate dummies for area
df = pd.concat([df,pd.get_dummies(df['area'],prefix="area")],axis=1)
df.drop('area', axis=1, inplace=True)
# Generate dummies for product
df = pd.concat([df,pd.get_dummies(df['product'],prefix="product")],axis=1)
df.drop('product', axis=1, inplace=True)
# Missing values for income
med = df['income'].median()
df['income'] = df['income'].fillna(med)
# Standardize ranges
df['income'] = zscore(df['income'])
df['aspect'] = zscore(df['aspect'])
df['save_rate'] = zscore(df['save_rate'])
df['subscriptions'] = zscore(df['subscriptions'])
# Convert to numpy - Regression
x_columns = df.columns.drop('age').drop('id')
x = df[x_columns].values
y = df['age'].values
###Output
_____no_output_____
###Markdown
Now that the feature vector is created, a 5-fold cross-validation can be performed to generate out-of-sample predictions. We will assume 500 epochs and not use early stopping. Later we will see how to estimate a more suitable epoch count.
###Code
import pandas as pd
import os
import numpy as np
from sklearn import metrics
from scipy.stats import zscore
from sklearn.model_selection import KFold
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Activation
# Cross-Validate
kf = KFold(5, shuffle=True, random_state=42) # KFold for a regression problem (use StratifiedKFold for classification)
oos_y = []
oos_pred = []
fold = 0
for train, test in kf.split(x):
fold+=1
print(f"Fold #{fold}")
x_train = x[train]
y_train = y[train]
x_test = x[test]
y_test = y[test]
model = Sequential()
model.add(Dense(20, input_dim=x.shape[1], activation='relu'))
model.add(Dense(10, activation='relu'))
model.add(Dense(1))
model.compile(loss='mean_squared_error', optimizer='adam')
model.fit(x_train,y_train,validation_data=(x_test,y_test),verbose=0,epochs=500)
pred = model.predict(x_test)
oos_y.append(y_test)
oos_pred.append(pred)
# Measure this fold's RMSE
score = np.sqrt(metrics.mean_squared_error(pred,y_test))
print(f"Fold score (RMSE): {score}")
# Build the oos prediction list and calculate the error.
oos_y = np.concatenate(oos_y)
oos_pred = np.concatenate(oos_pred)
score = np.sqrt(metrics.mean_squared_error(oos_pred,oos_y))
print(f"Final, out of sample score (RMSE): {score}")
# Write the cross-validated prediction
oos_y = pd.DataFrame(oos_y)
oos_pred = pd.DataFrame(oos_pred)
oosDF = pd.concat( [df, oos_y, oos_pred],axis=1 )
#oosDF.to_csv(filename_write,index=False)
###Output
_____no_output_____
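###Markdown
One of the options listed at the start of this part is to ensemble the fold models by presenting new data to each of them and averaging the result. A minimal sketch of that idea, assuming the `x` and `y` arrays prepared above, reusing the same network shape, and using fewer epochs purely to keep the sketch quick, might look as follows.
###Code
# Minimal ensemble sketch (assumes x and y from the preparation cell above).
import numpy as np
from sklearn.model_selection import KFold
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense

fold_models = []
kf = KFold(5, shuffle=True, random_state=42)
for train, test in kf.split(x):
    m = Sequential()
    m.add(Dense(20, input_dim=x.shape[1], activation='relu'))
    m.add(Dense(10, activation='relu'))
    m.add(Dense(1))
    m.compile(loss='mean_squared_error', optimizer='adam')
    # Fewer epochs than the notebook's 500, just to keep this sketch quick.
    m.fit(x[train], y[train], validation_data=(x[test], y[test]),
          verbose=0, epochs=100)
    fold_models.append(m)

# "New" data would normally come from outside the training set; the first five
# rows of x simply stand in here to show the averaging step.
new_x = x[:5]
ensemble_pred = np.mean([m.predict(new_x) for m in fold_models], axis=0)
print(ensemble_pred)
###Output
_____no_output_____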
###Markdown
If early stopping is used within each fold, the cross-validation can also report the average number of epochs needed; a common technique is to then train on the entire dataset for that average number of epochs. Classification with Stratified K-Fold Cross-ValidationThe following code trains and fits the jh-simple-dataset with cross-validation to generate out-of-sample predictions. It also writes out the out-of-sample (predictions on the test set) results. It is good to perform a stratified k-fold cross-validation with classification data. This ensures that the percentage of each class remains the same across all folds. To do this, make use of the **StratifiedKFold** object instead of the **KFold** object used in regression.
###Code
import pandas as pd
from scipy.stats import zscore
# Read the data set
df = pd.read_csv(
"https://data.heatonresearch.com/data/t81-558/jh-simple-dataset.csv",
na_values=['NA','?'])
# Generate dummies for job
df = pd.concat([df,pd.get_dummies(df['job'],prefix="job")],axis=1)
df.drop('job', axis=1, inplace=True)
# Generate dummies for area
df = pd.concat([df,pd.get_dummies(df['area'],prefix="area")],axis=1)
df.drop('area', axis=1, inplace=True)
# Missing values for income
med = df['income'].median()
df['income'] = df['income'].fillna(med)
# Standardize ranges
df['income'] = zscore(df['income'])
df['aspect'] = zscore(df['aspect'])
df['save_rate'] = zscore(df['save_rate'])
df['age'] = zscore(df['age'])
df['subscriptions'] = zscore(df['subscriptions'])
# Convert to numpy - Classification
x_columns = df.columns.drop('product').drop('id')
x = df[x_columns].values
dummies = pd.get_dummies(df['product']) # Classification
products = dummies.columns
y = dummies.values
###Output
_____no_output_____
###Markdown
We will assume 500 epochs and not use early stopping. Later we will see how to estimate a more suitable epoch count.
###Code
import pandas as pd
import os
import numpy as np
from sklearn import metrics
from sklearn.model_selection import StratifiedKFold
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Activation
# np.argmax(pred,axis=1)
# Cross-validate
kf = StratifiedKFold(5, shuffle=True, random_state=42) # Use StratifiedKFold for classification
oos_y = []
oos_pred = []
fold = 0
for train, test in kf.split(x,df['product']): # Must specify y for StratifiedKFold
fold+=1
print(f"Fold #{fold}")
x_train = x[train]
y_train = y[train]
x_test = x[test]
y_test = y[test]
model = Sequential()
model.add(Dense(50, input_dim=x.shape[1], activation='relu')) # Hidden 1
model.add(Dense(25, activation='relu')) # Hidden 2
model.add(Dense(y.shape[1],activation='softmax')) # Output
model.compile(loss='categorical_crossentropy', optimizer='adam')
model.fit(x_train,y_train,validation_data=(x_test,y_test),verbose=0,epochs=500)
pred = model.predict(x_test)
oos_y.append(y_test)
pred = np.argmax(pred,axis=1) # raw probabilities to chosen class (highest probability)
oos_pred.append(pred)
# Measure this fold's accuracy
y_compare = np.argmax(y_test,axis=1) # For accuracy calculation
score = metrics.accuracy_score(y_compare, pred)
print(f"Fold score (accuracy): {score}")
# Build the oos prediction list and calculate the error.
oos_y = np.concatenate(oos_y)
oos_pred = np.concatenate(oos_pred)
oos_y_compare = np.argmax(oos_y,axis=1) # For accuracy calculation
score = metrics.accuracy_score(oos_y_compare, oos_pred)
print(f"Final score (accuracy): {score}")
# Write the cross-validated prediction
oos_y = pd.DataFrame(oos_y)
oos_pred = pd.DataFrame(oos_pred)
oosDF = pd.concat( [df, oos_y, oos_pred],axis=1 )
#oosDF.to_csv(filename_write,index=False)
###Output
_____no_output_____
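###Markdown
The out-of-sample predictions above are stored as class indices (the argmax of the softmax output). A minimal sketch of mapping those indices back to the original product labels, assuming the `oos_pred` DataFrame and the `products` columns captured earlier in this notebook, might look as follows.
###Code
# Minimal sketch (assumes oos_pred and products from the cells above).
# oos_pred was converted to a DataFrame; column 0 holds the predicted class index.
predicted_products = products[oos_pred[0].values]
print(predicted_products[:10])
###Output
_____no_output_____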
###Markdown
Training with both a Cross-Validation and a Holdout SetIf you have a considerable amount of data, it is always valuable to set aside a holdout set before you cross-validate. This holdout set will be the final evaluation before you put your model to real-world use.The following program makes use of a holdout set, and then still cross-validates.
###Code
import pandas as pd
from scipy.stats import zscore
from sklearn.model_selection import train_test_split
# Read the data set
df = pd.read_csv(
"https://data.heatonresearch.com/data/t81-558/jh-simple-dataset.csv",
na_values=['NA','?'])
# Generate dummies for job
df = pd.concat([df,pd.get_dummies(df['job'],prefix="job")],axis=1)
df.drop('job', axis=1, inplace=True)
# Generate dummies for area
df = pd.concat([df,pd.get_dummies(df['area'],prefix="area")],axis=1)
df.drop('area', axis=1, inplace=True)
# Generate dummies for product
df = pd.concat([df,pd.get_dummies(df['product'],prefix="product")],axis=1)
df.drop('product', axis=1, inplace=True)
# Missing values for income
med = df['income'].median()
df['income'] = df['income'].fillna(med)
# Standardize ranges
df['income'] = zscore(df['income'])
df['aspect'] = zscore(df['aspect'])
df['save_rate'] = zscore(df['save_rate'])
df['subscriptions'] = zscore(df['subscriptions'])
# Convert to numpy - Regression
x_columns = df.columns.drop('age').drop('id')
x = df[x_columns].values
y = df['age'].values
from sklearn.model_selection import train_test_split
import pandas as pd
import os
import numpy as np
from sklearn import metrics
from scipy.stats import zscore
from sklearn.model_selection import KFold
# Keep a 10% holdout
x_main, x_holdout, y_main, y_holdout = train_test_split(
x, y, test_size=0.10)
# Cross-validate
kf = KFold(5)
oos_y = []
oos_pred = []
fold = 0
for train, test in kf.split(x_main):
fold+=1
print(f"Fold #{fold}")
x_train = x_main[train]
y_train = y_main[train]
x_test = x_main[test]
y_test = y_main[test]
model = Sequential()
model.add(Dense(20, input_dim=x.shape[1], activation='relu'))
model.add(Dense(5, activation='relu'))
model.add(Dense(1))
model.compile(loss='mean_squared_error', optimizer='adam')
model.fit(x_train,y_train,validation_data=(x_test,y_test),verbose=0,epochs=500)
pred = model.predict(x_test)
oos_y.append(y_test)
oos_pred.append(pred)
    # Measure this fold's RMSE
score = np.sqrt(metrics.mean_squared_error(pred,y_test))
print(f"Fold score (RMSE): {score}")
# Build the oos prediction list and calculate the error.
oos_y = np.concatenate(oos_y)
oos_pred = np.concatenate(oos_pred)
score = np.sqrt(metrics.mean_squared_error(oos_pred,oos_y))
print()
print(f"Cross-validated score (RMSE): {score}")
# Score the holdout set using the model from the last fold
holdout_pred = model.predict(x_holdout)
score = np.sqrt(metrics.mean_squared_error(holdout_pred,y_holdout))
print(f"Holdout score (RMSE): {score}")
###Output
_____no_output_____
###Markdown
T81-558: Applications of Deep Neural Networks**Module 5: Regularization and Dropout*** Instructor: [Jeff Heaton](https://sites.wustl.edu/jeffheaton/), McKelvey School of Engineering, [Washington University in St. Louis](https://engineering.wustl.edu/Programs/Pages/default.aspx)* For more information visit the [class website](https://sites.wustl.edu/jeffheaton/t81-558/). Module 5 Material* Part 5.1: Part 5.1: Introduction to Regularization: Ridge and Lasso [[Video]](https://www.youtube.com/watch?v=jfgRtCYjoBs&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](https://github.com/jeffheaton/t81_558_deep_learning/blob/master/t81_558_class_05_1_reg_ridge_lasso.ipynb)* **Part 5.2: Using K-Fold Cross Validation with Keras** [[Video]](https://www.youtube.com/watch?v=maiQf8ray_s&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](https://github.com/jeffheaton/t81_558_deep_learning/blob/master/t81_558_class_05_2_kfold.ipynb)* Part 5.3: Using L1 and L2 Regularization with Keras to Decrease Overfitting [[Video]](https://www.youtube.com/watch?v=JEWzWv1fBFQ&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](https://github.com/jeffheaton/t81_558_deep_learning/blob/master/t81_558_class_05_3_keras_l1_l2.ipynb)* Part 5.4: Drop Out for Keras to Decrease Overfitting [[Video]](https://www.youtube.com/watch?v=bRyOi0L6Rs8&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](https://github.com/jeffheaton/t81_558_deep_learning/blob/master/t81_558_class_05_4_dropout.ipynb)* Part 5.5: Benchmarking Keras Deep Learning Regularization Techniques [[Video]](https://www.youtube.com/watch?v=1NLBwPumUAs&list=PLjy4p-07OYzulelvJ5KVaT2pDlxivl_BN) [[Notebook]](https://github.com/jeffheaton/t81_558_deep_learning/blob/master/t81_558_class_05_5_bootstrap.ipynb) Google CoLab InstructionsThe following code ensures that Google CoLab is running the correct version of TensorFlow.
###Code
import os
os.environ['TF_CPP_MIN_LOG_LEVEL'] = '3'
try:
%tensorflow_version 2.x
COLAB = True
print("Note: using Google CoLab")
except:
print("Note: not using Google CoLab")
COLAB = False
###Output
Note: not using Google CoLab
###Markdown
Part 5.2: Using K-Fold Cross-validation with KerasCross-validation can be used for a variety of purposes in predictive modeling. These include:* Generating out-of-sample predictions from a neural network* Estimate a good number of epochs to train a neural network for (early stopping)* Evaluate the effectiveness of certain hyperparameters, such as activation functions, neuron counts, and layer countsCross-validation uses a number of folds, and multiple models, to provide each segment of data a chance to serve as both the validation and training set. Cross validation is shown in Figure 5.CROSS.**Figure 5.CROSS: K-Fold Crossvalidation**It is important to note that there will be one model (neural network) for each fold. To generate predictions for new data, which is data not present in the training set, predictions from the fold models can be handled in several ways:* Choose the model that had the highest validation score as the final model.* Preset new data to the 5 models (one for each fold) and average the result (this is an [ensemble](https://en.wikipedia.org/wiki/Ensemble_learning)).* Retrain a new model (using the same settings as the cross-validation) on the entire dataset. Train for as many epochs, and with the same hidden layer structure.Generally, I prefer the last approach and will retrain a model on the entire data set once I have selected hyper-parameters. Of course, I will always set aside a final holdout set for model validation that I do not use in any aspect of the training process. Regression vs Classification K-Fold Cross-ValidationRegression and classification are handled somewhat differently with regards to cross-validation. Regression is the simpler case where you can simply break up the data set into K folds with little regard for where each item lands. For regression it is best that the data items fall into the folds as randomly as possible. It is also important to remember that not every fold will necessarily have exactly the same number of data items. It is not always possible for the data set to be evenly divided into K folds. For regression cross-validation we will use the Scikit-Learn class **KFold**.Cross validation for classification could also use the **KFold** object; however, this technique would not ensure that the class balance remains the same in each fold as it was in the original. It is very important that the balance of classes that a model was trained on remains the same (or similar) to the training set. A drift in this distribution is one of the most important things to monitor after a trained model has been placed into actual use. Because of this, we want to make sure that the cross-validation itself does not introduce an unintended shift. This is referred to as stratified sampling and is accomplished by using the Scikit-Learn object **StratifiedKFold** in place of **KFold** whenever you are using classification. In summary, the following two objects in Scikit-Learn should be used:* **KFold** When dealing with a regression problem.* **StratifiedKFold** When dealing with a classification problem.The following two sections demonstrate cross-validation with classification and regression. Out-of-Sample Regression Predictions with K-Fold Cross-ValidationThe following code trains the simple dataset using a 5-fold cross-validation. The expected performance of a neural network, of the type trained here, would be the score for the generated out-of-sample predictions. We begin by preparing a feature vector using the jh-simple-dataset to predict age. 
This is a regression problem.
###Code
import pandas as pd
from scipy.stats import zscore
from sklearn.model_selection import train_test_split
# Read the data set
df = pd.read_csv(
"https://data.heatonresearch.com/data/t81-558/jh-simple-dataset.csv",
na_values=['NA','?'])
# Generate dummies for job
df = pd.concat([df,pd.get_dummies(df['job'],prefix="job")],axis=1)
df.drop('job', axis=1, inplace=True)
# Generate dummies for area
df = pd.concat([df,pd.get_dummies(df['area'],prefix="area")],axis=1)
df.drop('area', axis=1, inplace=True)
# Generate dummies for product
df = pd.concat([df,pd.get_dummies(df['product'],prefix="product")],axis=1)
df.drop('product', axis=1, inplace=True)
# Missing values for income
med = df['income'].median()
df['income'] = df['income'].fillna(med)
# Standardize ranges
df['income'] = zscore(df['income'])
df['aspect'] = zscore(df['aspect'])
df['save_rate'] = zscore(df['save_rate'])
df['subscriptions'] = zscore(df['subscriptions'])
# Convert to numpy - Regression
x_columns = df.columns.drop('age').drop('id')
x = df[x_columns].values
y = df['age'].values
###Output
_____no_output_____
###Markdown
Now that the feature vector is created, a 5-fold cross-validation can be performed to generate out-of-sample predictions. We will assume 500 epochs and not use early stopping. Later we will see how to estimate a more suitable epoch count.
###Code
EPOCHS=500
import pandas as pd
import os
import numpy as np
from sklearn import metrics
from scipy.stats import zscore
from sklearn.model_selection import KFold
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Activation
# Cross-Validate
kf = KFold(5, shuffle=True, random_state=42) # KFold for a regression problem (use StratifiedKFold for classification)
oos_y = []
oos_pred = []
fold = 0
for train, test in kf.split(x):
fold+=1
print(f"Fold #{fold}")
x_train = x[train]
y_train = y[train]
x_test = x[test]
y_test = y[test]
model = Sequential()
model.add(Dense(20, input_dim=x.shape[1], activation='relu'))
model.add(Dense(10, activation='relu'))
model.add(Dense(1))
model.compile(loss='mean_squared_error', optimizer='adam')
model.fit(x_train,y_train,validation_data=(x_test,y_test),verbose=0,
epochs=EPOCHS)
pred = model.predict(x_test)
oos_y.append(y_test)
oos_pred.append(pred)
# Measure this fold's RMSE
score = np.sqrt(metrics.mean_squared_error(pred,y_test))
print(f"Fold score (RMSE): {score}")
# Build the oos prediction list and calculate the error.
oos_y = np.concatenate(oos_y)
oos_pred = np.concatenate(oos_pred)
score = np.sqrt(metrics.mean_squared_error(oos_pred,oos_y))
print(f"Final, out of sample score (RMSE): {score}")
# Write the cross-validated prediction
oos_y = pd.DataFrame(oos_y)
oos_pred = pd.DataFrame(oos_pred)
oosDF = pd.concat( [df, oos_y, oos_pred],axis=1 )
#oosDF.to_csv(filename_write,index=False)
###Output
Fold #1
Fold score (RMSE): 0.6814299426511208
Fold #2
Fold score (RMSE): 0.45486513719487165
Fold #3
Fold score (RMSE): 0.571615041876392
Fold #4
Fold score (RMSE): 0.46416356081116916
Fold #5
Fold score (RMSE): 1.0426518491685475
Final, out of sample score (RMSE): 0.678316077597408
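###Markdown
Since the approach preferred earlier in this part is to retrain a single model on the entire dataset once the hyperparameters have been chosen, a minimal sketch of that final step, assuming the `x`, `y`, and `EPOCHS` values defined above and the same network shape, might look as follows.
###Code
# Minimal sketch: retrain one final model of the same shape on ALL of the data.
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense

final_model = Sequential()
final_model.add(Dense(20, input_dim=x.shape[1], activation='relu'))
final_model.add(Dense(10, activation='relu'))
final_model.add(Dense(1))
final_model.compile(loss='mean_squared_error', optimizer='adam')
final_model.fit(x, y, verbose=0, epochs=EPOCHS)
# final_model would then be the model actually deployed (after a holdout check).
###Output
_____no_output_____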
###Markdown
If early stopping is used within each fold, the cross-validation can also report the average number of epochs needed; a common technique is to then train on the entire dataset for that average number of epochs. Classification with Stratified K-Fold Cross-ValidationThe following code trains and fits the jh-simple-dataset with cross-validation to generate out-of-sample predictions. It also writes out the out-of-sample (predictions on the test set) results. It is good to perform a stratified k-fold cross-validation with classification data. This ensures that the percentage of each class remains the same across all folds. To do this, make use of the **StratifiedKFold** object instead of the **KFold** object used in regression.
###Code
import pandas as pd
from scipy.stats import zscore
# Read the data set
df = pd.read_csv(
"https://data.heatonresearch.com/data/t81-558/jh-simple-dataset.csv",
na_values=['NA','?'])
# Generate dummies for job
df = pd.concat([df,pd.get_dummies(df['job'],prefix="job")],axis=1)
df.drop('job', axis=1, inplace=True)
# Generate dummies for area
df = pd.concat([df,pd.get_dummies(df['area'],prefix="area")],axis=1)
df.drop('area', axis=1, inplace=True)
# Missing values for income
med = df['income'].median()
df['income'] = df['income'].fillna(med)
# Standardize ranges
df['income'] = zscore(df['income'])
df['aspect'] = zscore(df['aspect'])
df['save_rate'] = zscore(df['save_rate'])
df['age'] = zscore(df['age'])
df['subscriptions'] = zscore(df['subscriptions'])
# Convert to numpy - Classification
x_columns = df.columns.drop('product').drop('id')
x = df[x_columns].values
dummies = pd.get_dummies(df['product']) # Classification
products = dummies.columns
y = dummies.values
###Output
_____no_output_____
###Markdown
We will assume 500 epochs and not use early stopping. Later we will see how to estimate a more suitable epoch count.
###Code
import pandas as pd
import os
import numpy as np
from sklearn import metrics
from sklearn.model_selection import StratifiedKFold
from tensorflow.keras.models import Sequential
from tensorflow.keras.layers import Dense, Activation
# np.argmax(pred,axis=1)
# Cross-validate
# Use StratifiedKFold for classification
kf = StratifiedKFold(5, shuffle=True, random_state=42)
oos_y = []
oos_pred = []
fold = 0
# Must specify y for StratifiedKFold
for train, test in kf.split(x,df['product']):
fold+=1
print(f"Fold #{fold}")
x_train = x[train]
y_train = y[train]
x_test = x[test]
y_test = y[test]
model = Sequential()
model.add(Dense(50, input_dim=x.shape[1], activation='relu')) # Hidden 1
model.add(Dense(25, activation='relu')) # Hidden 2
model.add(Dense(y.shape[1],activation='softmax')) # Output
model.compile(loss='categorical_crossentropy', optimizer='adam')
model.fit(x_train,y_train,validation_data=(x_test,y_test),verbose=0,\
epochs=EPOCHS)
pred = model.predict(x_test)
oos_y.append(y_test)
# raw probabilities to chosen class (highest probability)
pred = np.argmax(pred,axis=1)
oos_pred.append(pred)
# Measure this fold's accuracy
y_compare = np.argmax(y_test,axis=1) # For accuracy calculation
score = metrics.accuracy_score(y_compare, pred)
print(f"Fold score (accuracy): {score}")
# Build the oos prediction list and calculate the error.
oos_y = np.concatenate(oos_y)
oos_pred = np.concatenate(oos_pred)
oos_y_compare = np.argmax(oos_y,axis=1) # For accuracy calculation
score = metrics.accuracy_score(oos_y_compare, oos_pred)
print(f"Final score (accuracy): {score}")
# Write the cross-validated prediction
oos_y = pd.DataFrame(oos_y)
oos_pred = pd.DataFrame(oos_pred)
oosDF = pd.concat( [df, oos_y, oos_pred],axis=1 )
#oosDF.to_csv(filename_write,index=False)
###Output
Fold #1
Fold score (accuracy): 0.6325
Fold #2
Fold score (accuracy): 0.6725
Fold #3
Fold score (accuracy): 0.6975
Fold #4
Fold score (accuracy): 0.6575
Fold #5
Fold score (accuracy): 0.675
Final score (accuracy): 0.667
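###Markdown
To see concretely what stratification buys, a minimal sketch that prints the class proportions of each test fold, assuming the `x` and `df` objects prepared above, might look as follows; the per-fold proportions should stay close to the overall proportions of `product`.
###Code
# Minimal sketch: verify that StratifiedKFold keeps the class mix stable per fold.
import numpy as np
from sklearn.model_selection import StratifiedKFold

labels = df['product'].values
overall = np.unique(labels, return_counts=True)[1] / len(labels)
print("Overall:", np.round(overall, 3))

skf = StratifiedKFold(5, shuffle=True, random_state=42)
for i, (train, test) in enumerate(skf.split(x, labels), start=1):
    counts = np.unique(labels[test], return_counts=True)[1]
    print(f"Fold {i}:", np.round(counts / len(test), 3))
###Output
_____no_output_____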
###Markdown
Training with both a Cross-Validation and a Holdout SetIf you have a considerable amount of data, it is always valuable to set aside a holdout set before you cross-validate. This holdout set will be the final evaluation before you put your model to real-world use. Figure 5.HOLDOUT shows this division.**Figure 5.HOLDOUT: Cross Validation and a Holdout Set**The following program makes use of a holdout set, and then still cross-validates.
###Code
import pandas as pd
from scipy.stats import zscore
from sklearn.model_selection import train_test_split
# Read the data set
df = pd.read_csv(
"https://data.heatonresearch.com/data/t81-558/jh-simple-dataset.csv",
na_values=['NA','?'])
# Generate dummies for job
df = pd.concat([df,pd.get_dummies(df['job'],prefix="job")],axis=1)
df.drop('job', axis=1, inplace=True)
# Generate dummies for area
df = pd.concat([df,pd.get_dummies(df['area'],prefix="area")],axis=1)
df.drop('area', axis=1, inplace=True)
# Generate dummies for product
df = pd.concat([df,pd.get_dummies(df['product'],prefix="product")],axis=1)
df.drop('product', axis=1, inplace=True)
# Missing values for income
med = df['income'].median()
df['income'] = df['income'].fillna(med)
# Standardize ranges
df['income'] = zscore(df['income'])
df['aspect'] = zscore(df['aspect'])
df['save_rate'] = zscore(df['save_rate'])
df['subscriptions'] = zscore(df['subscriptions'])
# Convert to numpy - Regression
x_columns = df.columns.drop('age').drop('id')
x = df[x_columns].values
y = df['age'].values
###Output
_____no_output_____
###Markdown
Now that the data has been preprocessed, we are ready to build the neural network.
###Code
from sklearn.model_selection import train_test_split
import pandas as pd
import os
import numpy as np
from sklearn import metrics
from scipy.stats import zscore
from sklearn.model_selection import KFold
# Keep a 10% holdout
x_main, x_holdout, y_main, y_holdout = train_test_split(
x, y, test_size=0.10)
# Cross-validate
kf = KFold(5)
oos_y = []
oos_pred = []
fold = 0
for train, test in kf.split(x_main):
fold+=1
print(f"Fold #{fold}")
x_train = x_main[train]
y_train = y_main[train]
x_test = x_main[test]
y_test = y_main[test]
model = Sequential()
model.add(Dense(20, input_dim=x.shape[1], activation='relu'))
model.add(Dense(5, activation='relu'))
model.add(Dense(1))
model.compile(loss='mean_squared_error', optimizer='adam')
model.fit(x_train,y_train,validation_data=(x_test,y_test),
verbose=0,epochs=EPOCHS)
pred = model.predict(x_test)
oos_y.append(y_test)
oos_pred.append(pred)
    # Measure this fold's RMSE
score = np.sqrt(metrics.mean_squared_error(pred,y_test))
print(f"Fold score (RMSE): {score}")
# Build the oos prediction list and calculate the error.
oos_y = np.concatenate(oos_y)
oos_pred = np.concatenate(oos_pred)
score = np.sqrt(metrics.mean_squared_error(oos_pred,oos_y))
print()
print(f"Cross-validated score (RMSE): {score}")
# Score the holdout set using the model from the last fold
holdout_pred = model.predict(x_holdout)
score = np.sqrt(metrics.mean_squared_error(holdout_pred,y_holdout))
print(f"Holdout score (RMSE): {score}")
###Output
Fold #1
Fold score (RMSE): 0.544195299216696
Fold #2
Fold score (RMSE): 0.48070599342910353
Fold #3
Fold score (RMSE): 0.7034584765928998
Fold #4
Fold score (RMSE): 0.5397141785190473
Fold #5
Fold score (RMSE): 24.126205213080077
Cross-validated score (RMSE): 10.801732731207947
Holdout score (RMSE): 24.097657947297677
|
openmdao/docs/openmdao_book/features/core_features/working_with_components/indepvarcomp.ipynb | ###Markdown
IndepVarCompAn *IndepVarComp* is used to define independent variables.Independent variables are those that are set externally to the model—therefore, they are called model inputs. From the perspective of a component, they are component outputs that do not depend on any component inputs. From the perspective of a model, they can be viewed as design variables or model parameters that are set by the user or driver, prior to running the model.In general, you no longer have to define these because OpenMDAO defines and uses them automatically for all unconnected inputs in your model. However, there are some special cases where an IndepVarComp is required (see [Distributed Components](distributed_components.ipynb).The *IndepVarComp* class is instantiated directly (without defining a subclass). The name, initial value, and other options of the independent variable(s) to be declared can be either passed in during instantiation, or declared via the `add_output` method. IndepVarComp Constructor```{eval-rst} .. automethod:: openmdao.core.indepvarcomp.IndepVarComp.__init__ :noindex:``` Method Signature```{eval-rst} .. automethod:: openmdao.core.indepvarcomp.IndepVarComp.add_output :noindex:``` Usage1\. Define one independent variable and set its value.
###Code
"""Define one independent variable and set its value."""
import openmdao.api as om
comp = om.IndepVarComp('indep_var')
prob = om.Problem(comp).setup()
print(prob.get_val('indep_var'))
prob.set_val('indep_var', 2.0)
print(prob.get_val('indep_var'))
from openmdao.utils.assert_utils import assert_near_equal
assert_near_equal(prob.get_val('indep_var'), 2.0)
###Output
_____no_output_____
###Markdown
2\. Define one independent variable with a default value.
###Code
"""Define one independent variable with a default value."""
import openmdao.api as om
comp = om.IndepVarComp('indep_var', val=2.0)
prob = om.Problem(comp).setup()
print(prob.get_val('indep_var'))
assert_near_equal(prob.get_val('indep_var'), 2.0)
###Output
_____no_output_____
###Markdown
3\. Define one independent variable with a default value and additional options.
###Code
"""Define one independent variable with a default value and additional options."""
import openmdao.api as om
comp = om.IndepVarComp('indep_var', val=2.0, units='m', lower=0, upper=10)
prob = om.Problem(comp).setup()
print(prob.get_val('indep_var'))
assert_near_equal(prob.get_val('indep_var'), 2.0)
###Output
_____no_output_____
###Markdown
4\. Define one independent array variable.
###Code
"""Define one independent array variable."""
import numpy as np
import openmdao.api as om
array = np.array([
[1., 2.],
[3., 4.],
])
comp = om.IndepVarComp('indep_var', val=array)
prob = om.Problem(comp).setup()
print(prob.get_val('indep_var'))
assert_near_equal(prob.get_val('indep_var'), array)
###Output
_____no_output_____
###Markdown
5\. Define two independent variables using the `add_output` method with additional options.
###Code
"""Define two independent variables using the add_output method."""
import openmdao.api as om
comp = om.IndepVarComp()
comp.add_output('indep_var_1', val=1.0)
comp.add_output('indep_var_2', val=2.0)
prob = om.Problem(comp).setup()
print(prob.get_val('indep_var_1'))
print(prob.get_val('indep_var_2'))
assert_near_equal(prob.get_val('indep_var_1'), 1.0)
assert_near_equal(prob.get_val('indep_var_2'), 2.0)
###Output
_____no_output_____
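###Markdown
As noted in the introduction, recent versions of OpenMDAO create the independent variable automatically for any unconnected input (auto-IVC), so an explicit *IndepVarComp* is often unnecessary. A minimal sketch of that behavior, assuming a recent OpenMDAO version and using a throwaway `ExecComp` purely for illustration, might look as follows.
###Code
"""Sketch: unconnected inputs are handled automatically (auto-IVC)."""
import openmdao.api as om

prob = om.Problem()
# 'x' is never connected to anything, so OpenMDAO supplies it automatically.
prob.model.add_subsystem('comp', om.ExecComp('y = 2.0*x'), promotes=['*'])
prob.setup()
prob.set_val('x', 3.0)
prob.run_model()
print(prob.get_val('y'))  # expected: [6.]
###Output
_____no_output_____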
###Markdown
IndepVarCompAn *IndepVarComp* is used to define independent variables.Independent variables are those that are set externally to the model—therefore, they are called model inputs. From the perspective of a component, they are component outputs that do not depend on any component inputs. From the perspective of a model, they can be viewed as design variables or model parameters that are set by the user or driver, prior to running the model.In general, you no longer have to define these because OpenMDAO defines and uses them automatically for all unconnected inputs in your model. However, there are some special cases where an IndepVarComp is required (see [Distributed Variables](distributed_components.ipynb).)The *IndepVarComp* class is instantiated directly (without defining a subclass). The name, initial value, and other options of the independent variable(s) to be declared can be either passed in during instantiation, or declared via the `add_output` method. IndepVarComp Constructor```{eval-rst} .. automethod:: openmdao.core.indepvarcomp.IndepVarComp.__init__ :noindex:``` Method Signature```{eval-rst} .. automethod:: openmdao.core.indepvarcomp.IndepVarComp.add_output :noindex:``` Usage1\. Define one independent variable and set its value.
###Code
"""Define one independent variable and set its value."""
import openmdao.api as om
comp = om.IndepVarComp('indep_var')
prob = om.Problem(comp).setup()
print(prob.get_val('indep_var'))
prob.set_val('indep_var', 2.0)
print(prob.get_val('indep_var'))
from openmdao.utils.assert_utils import assert_near_equal
assert_near_equal(prob.get_val('indep_var'), 2.0)
###Output
_____no_output_____
###Markdown
2\. Define one independent variable with a default value.
###Code
"""Define one independent variable with a default value."""
import openmdao.api as om
comp = om.IndepVarComp('indep_var', val=2.0)
prob = om.Problem(comp).setup()
print(prob.get_val('indep_var'))
assert_near_equal(prob.get_val('indep_var'), 2.0)
###Output
_____no_output_____
###Markdown
3\. Define one independent variable with a default value and additional options.
###Code
"""Define one independent variable with a default value and additional options."""
import openmdao.api as om
comp = om.IndepVarComp('indep_var', val=2.0, units='m', lower=0, upper=10)
prob = om.Problem(comp).setup()
print(prob.get_val('indep_var'))
assert_near_equal(prob.get_val('indep_var'), 2.0)
###Output
_____no_output_____
###Markdown
4\. Define one independent array variable.
###Code
"""Define one independent array variable."""
import numpy as np
import openmdao.api as om
array = np.array([
[1., 2.],
[3., 4.],
])
comp = om.IndepVarComp('indep_var', val=array)
prob = om.Problem(comp).setup()
print(prob.get_val('indep_var'))
assert_near_equal(prob.get_val('indep_var'), array)
###Output
_____no_output_____
###Markdown
5\. Define two independent variables using the `add_output` method with additional options.
###Code
"""Define two independent variables using the add_output method."""
import openmdao.api as om
comp = om.IndepVarComp()
comp.add_output('indep_var_1', val=1.0)
comp.add_output('indep_var_2', val=2.0)
prob = om.Problem(comp).setup()
print(prob.get_val('indep_var_1'))
print(prob.get_val('indep_var_2'))
assert_near_equal(prob.get_val('indep_var_1'), 1.0)
assert_near_equal(prob.get_val('indep_var_2'), 2.0)
###Output
_____no_output_____
###Markdown
IndepVarCompAn *IndepVarComp* is used to define independent variables.Independent variables are those that are set externally to the model—therefore, they are called model inputs. From the perspective of a component, they are component outputs that do not depend on any component inputs. From the perspective of a model, they can be viewed as design variables or model parameters that are set by the user or driver, prior to running the model.In general, you no longer have to define these because OpenMDAO defines and uses them automatically for all unconnected inputs in your model. However, there are some special cases where an IndepVarComp is required (see [Distributed Components](distributed_components.ipynb).The *IndepVarComp* class is instantiated directly (without defining a subclass). The name, initial value, and other options of the independent variable(s) to be declared can be either passed in during instantiation, or declared via the `add_output` method. IndepVarComp Constructor```{eval-rst} .. automethod:: openmdao.core.indepvarcomp.IndepVarComp.__init__ :noindex:``` Method Signature```{eval-rst} .. automethod:: openmdao.core.indepvarcomp.IndepVarComp.add_output :noindex:``` Usage1\. Define one independent variable and set its value.
###Code
"""Define one independent variable and set its value."""
import openmdao.api as om
comp = om.IndepVarComp('indep_var')
prob = om.Problem(comp).setup()
print(prob.get_val('indep_var'))
prob.set_val('indep_var', 2.0)
print(prob.get_val('indep_var'))
from openmdao.utils.assert_utils import assert_near_equal
assert_near_equal(prob.get_val('indep_var'), 2.0)
###Output
_____no_output_____
###Markdown
2\. Define one independent variable with a default value.
###Code
"""Define one independent variable with a default value."""
import openmdao.api as om
comp = om.IndepVarComp('indep_var', val=2.0)
prob = om.Problem(comp).setup()
print(prob.get_val('indep_var'))
assert_near_equal(prob.get_val('indep_var'), 2.0)
###Output
_____no_output_____
###Markdown
3\. Define one independent variable with a default value and additional options.
###Code
"""Define one independent variable with a default value and additional options."""
import openmdao.api as om
comp = om.IndepVarComp('indep_var', val=2.0, units='m', lower=0, upper=10)
prob = om.Problem(comp).setup()
print(prob.get_val('indep_var'))
assert_near_equal(prob.get_val('indep_var'), 2.0)
###Output
_____no_output_____
###Markdown
4\. Define one independent array variable.
###Code
"""Define one independent array variable."""
import numpy as np
import openmdao.api as om
array = np.array([
[1., 2.],
[3., 4.],
])
comp = om.IndepVarComp('indep_var', val=array)
prob = om.Problem(comp).setup()
print(prob.get_val('indep_var'))
assert_near_equal(prob.get_val('indep_var'), array)
###Output
_____no_output_____
###Markdown
5\. Define two independent variables using the `add_output` method with additional options.
###Code
"""Define two independent variables using the add_output method."""
import openmdao.api as om
comp = om.IndepVarComp()
comp.add_output('indep_var_1', val=1.0)
comp.add_output('indep_var_2', val=2.0)
prob = om.Problem(comp).setup()
print(prob.get_val('indep_var_1'))
print(prob.get_val('indep_var_2'))
assert_near_equal(prob.get_val('indep_var_1'), 1.0)
assert_near_equal(prob.get_val('indep_var_2'), 2.0)
###Output
_____no_output_____ |
docs/source/Speed_Acceleration_Phasespace_Generation.ipynb | ###Markdown
Speed-Acceleration PhasespaceIn this notebook, we will plot the speed-acceleration phase space from a collection of drives.
###Code
from strym import strymread
import strym
import glob
import pandas as pd
import os
import matplotlib.pyplot as plt
import scipy.io as sio
import datetime
import time
import pickle
###Output
/home/ivory/anaconda3/envs/dbn/lib/python3.7/site-packages/statsmodels/tools/_testing.py:19: FutureWarning: pandas.util.testing is deprecated. Use the functions in the public API at pandas.testing instead.
import pandas.util.testing as tm
###Markdown
Set the data folder and dbc file location for Toyota
###Code
parentfolder = "../../PandaData/2020_03_05"
dbcfile = '../examples/newToyotacode.dbc'
csvlist = []
folderlist = glob.glob(parentfolder+"*")
speedlist = []
for datafolder in folderlist:
csvlisttmp = glob.glob(datafolder+"/*.csv")
for f in csvlisttmp:
if "CAN" not in f:
continue
if "_5F" in f:
continue
csvlist.append(f)
print(csvlist)
###Output
['../../PandaData/2020_03_05/2020-03-05-08-23-30-382135__CAN_Messages.csv', '../../PandaData/2020_03_05/2020-03-05-13-21-29-803650__CAN_Messages.csv', '../../PandaData/2020_03_05/2020-03-05-08-42-39-921531__CAN_Messages.csv', '../../PandaData/2020_03_05/2020-03-05-09-09-59-241536__CAN_Messages.csv', '../../PandaData/2020_03_05/2020-03-05-13-00-56-941071__CAN_Messages.csv', '../../PandaData/2020_03_05/2020-03-05-13-59-18-553197__CAN_Messages.csv', '../../PandaData/2020_03_05/2020-03-05-10-11-35-602492__CAN_Messages.csv', '../../PandaData/2020_03_05/2020-03-05-09-21-37-022653__CAN_Messages.csv']
###Markdown
Read all the CSV files
###Code
speed_list = []
accel_list = []
r_list = []
counter = 0
for csv in csvlist:
print("\nReading the CSV file {}".format(csv))
r = strymread(csvfile=csv, dbcfile=dbcfile)
    # Don't read the speed if the data came in bursts; basically, filter out files that were recorded with Python
if r.success == True :
if r.burst:
continue
r_list.append(r)
speed = r.speed()
speed['Message'] = speed['Message']*0.277778
accelx = r.accelx()
speed_list.append(speed)
accel_list.append(accelx)
###Output
Reading the CSV file ../../PandaData/2020_03_05/2020-03-05-08-23-30-382135__CAN_Messages.csv
Reading the CSV file ../../PandaData/2020_03_05/2020-03-05-13-21-29-803650__CAN_Messages.csv
Reading the CSV file ../../PandaData/2020_03_05/2020-03-05-08-42-39-921531__CAN_Messages.csv
Reading the CSV file ../../PandaData/2020_03_05/2020-03-05-09-09-59-241536__CAN_Messages.csv
Reading the CSV file ../../PandaData/2020_03_05/2020-03-05-13-00-56-941071__CAN_Messages.csv
CSVfile is empty.
Reading the CSV file ../../PandaData/2020_03_05/2020-03-05-13-59-18-553197__CAN_Messages.csv
Reading the CSV file ../../PandaData/2020_03_05/2020-03-05-10-11-35-602492__CAN_Messages.csv
Reading the CSV file ../../PandaData/2020_03_05/2020-03-05-09-21-37-022653__CAN_Messages.csv
###Markdown
Resample Speed to time-points of Accel Data
###Code
resampled_speed_list = []
for i, speed in enumerate(speed_list):
if speed.shape[0] == 0:
continue
speed_new, accel_new = strymread.ts_sync(speed_list[i], accel_list[i], rate="second")
resampled_speed_accel = pd.DataFrame()
resampled_speed_accel['Time'] = speed_new['Time']
resampled_speed_accel['Speed'] = speed_new['Message']
resampled_speed_accel['Accel'] = accel_new['Message']
resampled_speed_list.append(resampled_speed_accel)
###Output
_____no_output_____
###Markdown
Combine the list of resampled dataframes into a single dataframe
###Code
speed_accel = pd.concat(resampled_speed_list)
###Output
_____no_output_____
###Markdown
Make a phase-space plot
###Code
fig, ax = strymread.create_fig(1)
fig.set_size_inches(8, 8)
ax[0].scatter(x = 'Speed', y = 'Accel', data = speed_accel, s = 1, color = "#E32b2b")
ax[0].set_xlabel('Speed [m/s]')
ax[0].set_ylabel('Acceleration [m/s^2]')
ax[0].set_title('Speed[m/s] - Acceleration [m/s^2] Phase-space')
plt.show()
dt_object = datetime.datetime.fromtimestamp(time.time())
dt = dt_object.strftime('%Y-%m-%d-%H-%M-%S-%f')
description = "_2020_03_05_Acceleration_Speed_Data"
fig.savefig(dt+ description + ".pdf", dpi = 300)
fig.savefig(dt+ description + ".png", dpi = 300)
pickle.dump(fig, open(dt+ description +".pickle", 'wb'))
variable_dictionary = {}
variable_dictionary['speed_accel'] = speed_accel.to_numpy()
sio.savemat(dt+"_2020_03_05_Acceleration_Speed_Data.mat", variable_dictionary)
###Output
_____no_output_____ |
01_download_data.ipynb | ###Markdown
Loading and Visualizing Data> David Pinto (*Chief Data Scientist at Nexer Labs*)This notebook shows how to use the Jupyter notebook to download and visualize data.
###Code
# Source url: https://data.seattle.gov/api/views/65db-xm6k/rows.csv?accessType=DOWNLOAD
import pandas as pd
data = pd.read_csv("Fremont_Bridge.csv", index_col="Date", parse_dates=True)
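# (Added sketch, not in the original notebook) The same data can be read straight
# from the source URL listed above instead of a local copy, e.g.:
# data = pd.read_csv(
#     "https://data.seattle.gov/api/views/65db-xm6k/rows.csv?accessType=DOWNLOAD",
#     index_col="Date", parse_dates=True)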
data.head()
# Put all plots in the notebook
%matplotlib inline
# Resample weekly
data.resample("W").sum().plot();
###Output
_____no_output_____ |
notebooks/6.1_openfermion_basics.ipynb | ###Markdown
6-1. How to use OpenFermion In this section we introduce how to use OpenFermion [1], a Python library for quantum chemistry calculations, to convert the Hamiltonian of an interacting electron system into a form that is easy to handle on a quantum computer. OpenFermion provides interfaces to the open-source quantum chemistry libraries [Psi4](http://www.psicode.org) and [PySCF](https://github.com/pyscf/pyscf), so you can obtain the electronic Hamiltonian that appears in quantum chemistry calculations just by specifying the molecular structure, without having to understand the details of those libraries. Here we use PySCF.
###Code
## If running on Google Colaboratory, downgrade scipy to work around a bug
!pip install scipy==1.2.1
## Run this cell if the required libraries are not installed
## On Google Colaboratory the message 'You must restart the runtime in order to use newly installed versions.' may appear, but it can be ignored.
## Restarting the runtime will cause a crash.
!pip install qulacs pyscf openfermion openfermionpyscf
# Import the required libraries
from openfermion.hamiltonians import MolecularData
from openfermionpyscf import run_pyscf
from openfermion.transforms import get_fermion_operator, jordan_wigner, bravyi_kitaev
from openfermion.utils import eigenspectrum
from openfermion.transforms import get_sparse_operator
from openfermion.ops import FermionOperator
from pyscf import fci
import numpy as np
import matplotlib.pyplot as plt
###Output
_____no_output_____
###Markdown
Computing the hydrogen molecule In openfermion, the data describing a molecule is entered into a class called MolecularData.
###Code
#define constants
basis = "sto-3g" #basis set
multiplicity = 1 #spin multiplicity
charge = 0 #total charge for the molecule
distance = 0.65
geometry = [("H",(0,0,0)),("H", (0,0,distance))] #xyz coordinates for atoms
description = str(distance) #description for the psi4 output file
molecule = MolecularData(geometry, basis, multiplicity, charge, description)
###Output
_____no_output_____
###Markdown
Explanation of the variables Below we explain the meaning of the variables appearing in the code above. basis: basis set This sets the basis functions used to represent the molecular orbitals. There are various basis sets such as sto-3g and 6-31G. The sto-3g (Slater Type Orbital - 3 gaussian) used here is a basis in which each Slater type orbital is approximated by three Gaussians. A Slater type orbital is an orbital modeled on the hydrogen-atom solutions; it uses $$R_{nl}(r) = r^{n-l} \exp \left(-\frac{Z-s}{na_0}r\right),$$ as the radial function and the spherical harmonics $Y_{lm}(\theta,\phi)$ for the angular part. In sto-3g, this radial wavefunction $R_{nl}(r)$ is approximated by a function built from three Gaussians. multiplicity: spin multiplicity Since an electron has spin 1/2, the spin multiplicity of a single isolated electron is 2. In the hydrogen molecule, however, the electrons form a singlet in the ground state and the total spin is considered to be 0. Spin 0 corresponds to only one state, so the spin multiplicity is set to 1 in this case. charge: total charge Enter the total charge. When considering ions it can be + or −. geometry: nuclear configuration Specify the atomic species and their x, y, z coordinates. description The output computed by pyscf is saved in the directory where the openfermion library is installed; this variable determines the name of that file. Calculation with PySCF Let us pass the MolecularData set up above to the function `run_pyscf` and run the quantum chemistry calculation with PySCF. It should finish in a few seconds.
###Code
molecule = run_pyscf(molecule,run_scf=1,run_fci=1)
###Output
_____no_output_____
###Markdown
HF & Full-CI energy Let us look at the Hartree-Fock energy and the Full-CI energy (= the exact ground-state energy) obtained by the PySCF calculation. (1 Hartree = 27.2116 eV)
###Code
print("HF energy: {} (Hartree)".format(molecule.hf_energy))
print("FCI energy: {} (Hartree)".format(molecule.fci_energy))
###Output
HF energy: -1.1129965456691682 (Hartree)
FCI energy: -1.1299047843229137 (Hartree)
###Markdown
One-electron integrals $h_{ij}$ and two-electron integrals $h_{ijkl}$ Quantities such as the one-electron and two-electron integrals are also stored in the MolecularData class.
###Code
print(molecule.one_body_integrals)
print(molecule.two_body_integrals)
###Output
[[[[ 6.91904405e-01 -1.29088971e-16]
[-1.33947330e-16 1.76318452e-01]]
[[-1.33947330e-16 1.76318452e-01]
[ 6.79683914e-01 -2.19293917e-16]]]
[[[-1.29088971e-16 6.79683914e-01]
[ 1.76318452e-01 -2.28497801e-17]]
[[ 1.76318452e-01 -2.28497801e-17]
[-2.19293917e-16 7.14671111e-01]]]]
###Markdown
The Hamiltonian in second-quantized form From these integrals, openfermion computes the second-quantized Hamiltonian $$H = \sum_{ij} h_{ij}c_i^\dagger c_j + \sum_{ijkl} h_{ijkl} c_i^\dagger c_j^\dagger c_k c_l$$ (for second quantization see, for example, reference [2]). The Hamiltonian can be computed by calling the `get_molecular_hamiltonian` method. In the printout, (3,1) stands for $c_3^\dagger$, (1,0) for $c_1$, and so on.
###Code
print(molecule.get_molecular_hamiltonian())
###Output
() 0.8141187860307693
((0, 1), (0, 0)) -1.309509868464871
((1, 1), (1, 0)) -1.309509868464871
((2, 1), (2, 0)) -0.4100263808117837
((3, 1), (3, 0)) -0.4100263808117837
((0, 1), (0, 1), (0, 0), (0, 0)) 0.34595220261490217
((0, 1), (0, 1), (2, 0), (2, 0)) 0.0881592258051036
((0, 1), (1, 1), (1, 0), (0, 0)) 0.34595220261490217
((0, 1), (1, 1), (3, 0), (2, 0)) 0.0881592258051036
((0, 1), (2, 1), (0, 0), (2, 0)) 0.0881592258051036
((0, 1), (2, 1), (2, 0), (0, 0)) 0.33984195696523056
((0, 1), (3, 1), (1, 0), (2, 0)) 0.0881592258051036
((0, 1), (3, 1), (3, 0), (0, 0)) 0.33984195696523056
((1, 1), (0, 1), (0, 0), (1, 0)) 0.34595220261490217
((1, 1), (0, 1), (2, 0), (3, 0)) 0.0881592258051036
((1, 1), (1, 1), (1, 0), (1, 0)) 0.34595220261490217
((1, 1), (1, 1), (3, 0), (3, 0)) 0.0881592258051036
((1, 1), (2, 1), (0, 0), (3, 0)) 0.0881592258051036
((1, 1), (2, 1), (2, 0), (1, 0)) 0.33984195696523056
((1, 1), (3, 1), (1, 0), (3, 0)) 0.0881592258051036
((1, 1), (3, 1), (3, 0), (1, 0)) 0.33984195696523056
((2, 1), (0, 1), (0, 0), (2, 0)) 0.3398419569652304
((2, 1), (0, 1), (2, 0), (0, 0)) 0.0881592258051036
((2, 1), (1, 1), (1, 0), (2, 0)) 0.3398419569652304
((2, 1), (1, 1), (3, 0), (0, 0)) 0.0881592258051036
((2, 1), (2, 1), (0, 0), (0, 0)) 0.0881592258051036
((2, 1), (2, 1), (2, 0), (2, 0)) 0.3573355555190683
((2, 1), (3, 1), (1, 0), (0, 0)) 0.0881592258051036
((2, 1), (3, 1), (3, 0), (2, 0)) 0.3573355555190683
((3, 1), (0, 1), (0, 0), (3, 0)) 0.3398419569652304
((3, 1), (0, 1), (2, 0), (1, 0)) 0.0881592258051036
((3, 1), (1, 1), (1, 0), (3, 0)) 0.3398419569652304
((3, 1), (1, 1), (3, 0), (1, 0)) 0.0881592258051036
((3, 1), (2, 1), (0, 0), (1, 0)) 0.0881592258051036
((3, 1), (2, 1), (2, 0), (3, 0)) 0.3573355555190683
((3, 1), (3, 1), (1, 0), (1, 0)) 0.0881592258051036
((3, 1), (3, 1), (3, 0), (3, 0)) 0.3573355555190683
###Markdown
Converting to operators that are easy to handle on a quantum computer The operators that are easiest to handle on a quantum computer are the Pauli operators $I, X, Y, Z$ and their tensor products. Therefore, to treat the electronic Hamiltonian on a quantum computer, the second-quantized Hamiltonian $$H_{fermion} = \sum_{ij} h_{ij}c_i^\dagger c_j + \sum_{ijkl} h_{ijkl} c_i^\dagger c_j^\dagger c_k c_l$$ is usually converted into the form $$H_{qubit} = \sum_{P\in \{I,X,Y,Z\}^{\otimes n}} h_{P} P.$$ Various transformation methods have been proposed; here we use the simplest one, known as the Jordan-Wigner transformation. In the Jordan-Wigner transformation, molecular orbital $i$ is assigned to the $i$-th qubit, and the convention is that an orbital occupied by an electron is represented by $|1\rangle$ and an empty one by $|0\rangle$. Under this convention, constructing Pauli operators that satisfy the anticommutation relations of the fermionic creation and annihilation operators $$\{c^\dagger_i, c^\dagger_j\} = c^\dagger_i c^\dagger_j + c^\dagger_j c^\dagger_i = 0, \:\{c_i, c_j\} = 0, \:\{c^\dagger_i, c_j\} = \delta_{ij}$$ yields the correspondence $$a^{\dagger}_{j} \leftrightarrow \frac{X_j-iY_j}{2}\otimes Z_{j-1}\otimes Z_{j-2} \cdots Z_{1}.$$ For transformation schemes other than the Jordan-Wigner transformation, see for example [2]. The Jordan-Wigner transformation is implemented in openfermion. Passing a `FermionOperator` to the `jordan_wigner` function returns the `QubitOperator` corresponding to the Jordan-Wigner transform of that operator. Below, we create a `FermionOperator` from the hydrogen molecule `MolecularData` constructed above and apply the Jordan-Wigner transformation to convert the hydrogen molecule Hamiltonian into a form that is easy to handle on a quantum computer.
###Code
jw_hamiltonian = jordan_wigner(get_fermion_operator(molecule.get_molecular_hamiltonian()))
print(jw_hamiltonian)
###Output
(0.03775110394645719+0j) [] +
(-0.04407961290255181+0j) [X0 X1 Y2 Y3] +
(0.04407961290255181+0j) [X0 Y1 Y2 X3] +
(0.04407961290255181+0j) [Y0 X1 X2 Y3] +
(-0.04407961290255181+0j) [Y0 Y1 X2 X3] +
(0.1860164888623058+0j) [Z0] +
(0.17297610130745106+0j) [Z0 Z1] +
(0.12584136558006342+0j) [Z0 Z2] +
(0.16992097848261523+0j) [Z0 Z3] +
(0.18601648886230565+0j) [Z1] +
(0.16992097848261523+0j) [Z1 Z2] +
(0.12584136558006342+0j) [Z1 Z3] +
(-0.26941693141632106+0j) [Z2] +
(0.17866777775953419+0j) [Z2 Z3] +
(-0.26941693141632106+0j) [Z3]
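###Markdown
As a quick way to see the mapping on a tiny example, `jordan_wigner` can also be applied to a `FermionOperator` built by hand; a minimal sketch for a single hopping term is shown below.
###Code
# Minimal sketch: Jordan-Wigner transform of the hopping term c_0^dagger c_1 + c_1^dagger c_0.
from openfermion.ops import FermionOperator
from openfermion.transforms import jordan_wigner

hopping = FermionOperator('0^ 1') + FermionOperator('1^ 0')
print(jordan_wigner(hopping))
###Output
_____no_output_____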
###Markdown
Let us compute the Hartree-Fock (HF) energy from this Hamiltonian. In the Jordan-Wigner transformation the qubit states $\left|0\right\rangle, \left|1\right\rangle$ correspond one-to-one to the orbital occupation numbers, so the HF energy can be obtained by taking the expectation value with respect to $\left|1100\right\rangle$, in which the lowest orbitals are filled with as many electrons as there are.
###Code
# Function to compute the tensor product
def kron_N(*ops):
tmp = ops[0]
for op in ops[1:]:
tmp = np.kron(tmp,op)
return tmp
bra0 = np.array([[1,0]])
bra1 = np.array([[0,1]])
HFbra = kron_N(bra1, bra1, bra0, bra0)
HFket = HFbra.T
print(HFbra)
jw_matrix = get_sparse_operator(jw_hamiltonian)
print(np.real(HFbra.dot(jw_matrix.dot(HFket))), molecule.hf_energy)
###Output
[[0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0]]
[[-1.11299655]] -1.1129965456691682
###Markdown
We can confirm that this almost exactly matches the pyscf calculation. Next, let us diagonalize the Hamiltonian and check that the result agrees with the Full-CI (exact) energy.
###Code
from scipy.sparse.linalg import eigs
eigenenergies, eigenvecs = eigs(jw_matrix)
print(eigenenergies[0], molecule.fci_energy)
###Output
(-1.1299047843229122+8.66058178452165e-18j) -1.1299047843229137
###Markdown
We can confirm that this also agrees almost exactly. The ground-state wavefunction $\left|\psi_g\right\rangle$ is
###Code
print(eigenvecs[:,0])
###Output
[ 1.80390379e-16+3.83799405e-19j -1.44425855e-16-2.44300624e-16j
2.11047727e-17-5.26747266e-17j -8.90209542e-02+3.44604214e-02j
-6.87721816e-18+8.01380721e-18j -1.32249539e-16-9.52872766e-17j
1.31086550e-16+2.60415224e-16j -2.50025424e-17+1.94665148e-17j
2.96573140e-17-1.12134985e-19j -5.80783492e-17-1.38210176e-16j
9.90624805e-17-1.03376284e-16j -1.67301469e-17+4.96855838e-17j
9.28307030e-01-3.59351927e-01j -3.98483320e-17+1.69508813e-16j
1.44476911e-16-2.91500912e-16j 9.81968448e-17-4.33266880e-17j]
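###Markdown
As a cross-check, the `eigenspectrum` helper imported at the top of this notebook returns the sorted eigenvalues of the operator directly, so the ground-state energy can also be read off from its first element; a minimal sketch follows.
###Code
# Minimal sketch: ground-state energy via openfermion's eigenspectrum helper.
print(eigenspectrum(jw_hamiltonian)[0])
###Output
_____no_output_____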
###Markdown
6-1. How to use OpenFermion In this section we introduce how to use OpenFermion [1], a Python library for quantum chemistry calculations, to convert the Hamiltonian of an interacting electron system into a form that is easy to handle on a quantum computer. OpenFermion provides interfaces to the open-source quantum chemistry libraries [Psi4](http://www.psicode.org) and [PySCF](https://github.com/pyscf/pyscf), so you can obtain the electronic Hamiltonian that appears in quantum chemistry calculations just by specifying the molecular structure, without having to understand the details of those libraries. Here we use PySCF.
###Code
## Run this cell if the required libraries are not installed
## On Google Colaboratory the message 'You must restart the runtime in order to use newly installed versions.' may appear, but it can be ignored.
## Restarting the runtime will cause a crash.
!pip install qulacs pyscf openfermion openfermionpyscf
## Run the following only on Google Colaboratory or in a jupyter notebook environment on Linux or Mac.
## It makes Qulacs errors print correctly.
!pip3 install wurlitzer
%load_ext wurlitzer
# Import the required libraries
# If you get an error, try openfermion v.1.0.0 or later
import numpy as np
import matplotlib.pyplot as plt
from openfermion.chem import MolecularData
from openfermion.transforms import get_fermion_operator, jordan_wigner, bravyi_kitaev
from openfermion.linalg import get_sparse_operator
from openfermion.ops import FermionOperator
from openfermionpyscf import run_pyscf
from pyscf import fci
###Output
_____no_output_____
###Markdown
Computing the hydrogen molecule In openfermion, the data describing a molecule is entered into a class called MolecularData.
###Code
#define constants
basis = "sto-3g" #basis set
multiplicity = 1 #spin multiplicity
charge = 0 #total charge for the molecule
distance = 0.65
geometry = [("H",(0,0,0)),("H", (0,0,distance))] #xyz coordinates for atoms
description = str(distance) #description for the psi4 output file
molecule = MolecularData(geometry, basis, multiplicity, charge, description)
###Output
_____no_output_____
###Markdown
Explanation of the variables Below we explain the meaning of the variables appearing in the code above. basis: basis set This sets the basis functions used to represent the molecular orbitals. There are various basis sets such as sto-3g and 6-31G. The sto-3g (Slater Type Orbital - 3 gaussian) used here is a basis in which each Slater type orbital is approximated by three Gaussians. A Slater type orbital is an orbital modeled on the hydrogen-atom solutions; it uses $$R_{nl}(r) = r^{n-l} \exp \left(-\frac{Z-s}{na_0}r\right),$$ as the radial function and the spherical harmonics $Y_{lm}(\theta,\phi)$ for the angular part. In sto-3g, this radial wavefunction $R_{nl}(r)$ is approximated by a function built from three Gaussians. multiplicity: spin multiplicity Since an electron has spin 1/2, the spin multiplicity of a single isolated electron is 2. In the hydrogen molecule, however, the electrons form a singlet in the ground state and the total spin is considered to be 0. Spin 0 corresponds to only one state, so the spin multiplicity is set to 1 in this case. charge: total charge Enter the total charge. When considering ions it can be + or −. geometry: nuclear configuration Specify the atomic species and their x, y, z coordinates. description The output computed by pyscf is saved in the directory where the openfermion library is installed; this variable determines the name of that file. Calculation with PySCF Let us pass the MolecularData set up above to the function `run_pyscf` and run the quantum chemistry calculation with PySCF. It should finish in a few seconds.
###Code
molecule = run_pyscf(molecule,run_scf=1,run_fci=1)
###Output
_____no_output_____
###Markdown
HF & Full-CI energy Let us look at the Hartree-Fock energy and the Full-CI energy (= the exact ground-state energy) obtained by the PySCF calculation. (1 Hartree = 27.2116 eV)
###Code
print("HF energy: {} (Hartree)".format(molecule.hf_energy))
print("FCI energy: {} (Hartree)".format(molecule.fci_energy))
###Output
HF energy: -1.1129965456691684 (Hartree)
FCI energy: -1.1299047843229135 (Hartree)
###Markdown
One-electron integrals $h_{ij}$ and two-electron integrals $h_{ijkl}$ Quantities such as the one-electron and two-electron integrals are also stored in the MolecularData class.
###Code
print(molecule.one_body_integrals)
print(molecule.two_body_integrals)
###Output
[[[[ 6.91904405e-01 -4.16333634e-17]
[-2.77555756e-17 1.76318452e-01]]
[[-2.77555756e-17 1.76318452e-01]
[ 6.79683914e-01 0.00000000e+00]]]
[[[-4.16333634e-17 6.79683914e-01]
[ 1.76318452e-01 8.32667268e-17]]
[[ 1.76318452e-01 8.32667268e-17]
[ 0.00000000e+00 7.14671111e-01]]]]
###Markdown
The Hamiltonian in second-quantized form From these integrals, openfermion computes the second-quantized Hamiltonian $$H = \sum_{ij} h_{ij}c_i^\dagger c_j + \sum_{ijkl} h_{ijkl} c_i^\dagger c_j^\dagger c_k c_l$$ (for second quantization see, for example, reference [2]). The Hamiltonian can be computed by calling the `get_molecular_hamiltonian` method. In the printout, (3,1) stands for $c_3^\dagger$, (1,0) for $c_1$, and so on.
###Code
print(molecule.get_molecular_hamiltonian())
###Output
() 0.8141187860307693
((0, 1), (0, 0)) -1.309509868464871
((1, 1), (1, 0)) -1.309509868464871
((2, 1), (2, 0)) -0.4100263808117837
((3, 1), (3, 0)) -0.4100263808117837
((0, 1), (0, 1), (0, 0), (0, 0)) 0.3459522026149022
((0, 1), (0, 1), (2, 0), (2, 0)) 0.0881592258051036
((0, 1), (1, 1), (1, 0), (0, 0)) 0.3459522026149022
((0, 1), (1, 1), (3, 0), (2, 0)) 0.0881592258051036
((0, 1), (2, 1), (0, 0), (2, 0)) 0.0881592258051036
((0, 1), (2, 1), (2, 0), (0, 0)) 0.3398419569652305
((0, 1), (3, 1), (1, 0), (2, 0)) 0.0881592258051036
((0, 1), (3, 1), (3, 0), (0, 0)) 0.3398419569652305
((1, 1), (0, 1), (0, 0), (1, 0)) 0.3459522026149022
((1, 1), (0, 1), (2, 0), (3, 0)) 0.0881592258051036
((1, 1), (1, 1), (1, 0), (1, 0)) 0.3459522026149022
((1, 1), (1, 1), (3, 0), (3, 0)) 0.0881592258051036
((1, 1), (2, 1), (0, 0), (3, 0)) 0.0881592258051036
((1, 1), (2, 1), (2, 0), (1, 0)) 0.3398419569652305
((1, 1), (3, 1), (1, 0), (3, 0)) 0.0881592258051036
((1, 1), (3, 1), (3, 0), (1, 0)) 0.3398419569652305
((2, 1), (0, 1), (0, 0), (2, 0)) 0.33984195696523034
((2, 1), (0, 1), (2, 0), (0, 0)) 0.0881592258051036
((2, 1), (1, 1), (1, 0), (2, 0)) 0.33984195696523034
((2, 1), (1, 1), (3, 0), (0, 0)) 0.0881592258051036
((2, 1), (2, 1), (0, 0), (0, 0)) 0.0881592258051036
((2, 1), (2, 1), (2, 0), (2, 0)) 0.35733555551906837
((2, 1), (3, 1), (1, 0), (0, 0)) 0.0881592258051036
((2, 1), (3, 1), (3, 0), (2, 0)) 0.35733555551906837
((3, 1), (0, 1), (0, 0), (3, 0)) 0.33984195696523034
((3, 1), (0, 1), (2, 0), (1, 0)) 0.0881592258051036
((3, 1), (1, 1), (1, 0), (3, 0)) 0.33984195696523034
((3, 1), (1, 1), (3, 0), (1, 0)) 0.0881592258051036
((3, 1), (2, 1), (0, 0), (1, 0)) 0.0881592258051036
((3, 1), (2, 1), (2, 0), (3, 0)) 0.35733555551906837
((3, 1), (3, 1), (1, 0), (1, 0)) 0.0881592258051036
((3, 1), (3, 1), (3, 0), (3, 0)) 0.35733555551906837
###Markdown
Converting to operators that are easy to handle on a quantum computer The operators that are easiest to handle on a quantum computer are the Pauli operators $I, X, Y, Z$ and their tensor products. Therefore, to treat the electronic Hamiltonian on a quantum computer, the second-quantized Hamiltonian $$H_{fermion} = \sum_{ij} h_{ij}c_i^\dagger c_j + \sum_{ijkl} h_{ijkl} c_i^\dagger c_j^\dagger c_k c_l$$ is usually converted into the form $$H_{qubit} = \sum_{P\in \{I,X,Y,Z\}^{\otimes n}} h_{P} P.$$ Various transformation methods have been proposed; here we use the simplest one, known as the Jordan-Wigner transformation. In the Jordan-Wigner transformation, molecular orbital $i$ is assigned to the $i$-th qubit, and the convention is that an orbital occupied by an electron is represented by $|1\rangle$ and an empty one by $|0\rangle$. Under this convention, constructing Pauli operators that satisfy the anticommutation relations of the fermionic creation and annihilation operators $$\{c^\dagger_i, c^\dagger_j\} = c^\dagger_i c^\dagger_j + c^\dagger_j c^\dagger_i = 0, \:\{c_i, c_j\} = 0, \:\{c^\dagger_i, c_j\} = \delta_{ij}$$ yields the correspondence $$a^{\dagger}_{j} \leftrightarrow \frac{X_j-iY_j}{2}\otimes Z_{j-1}\otimes Z_{j-2} \cdots Z_{1}.$$ For transformation schemes other than the Jordan-Wigner transformation, see for example [2][3]. The Jordan-Wigner transformation is implemented in openfermion. Passing a `FermionOperator` to the `jordan_wigner` function returns the `QubitOperator` corresponding to the Jordan-Wigner transform of that operator. Below, we create a `FermionOperator` from the hydrogen molecule `MolecularData` constructed above and apply the Jordan-Wigner transformation to convert the hydrogen molecule Hamiltonian into a form that is easy to handle on a quantum computer.
###Code
jw_hamiltonian = jordan_wigner(get_fermion_operator(molecule.get_molecular_hamiltonian()))
print(jw_hamiltonian)
###Output
(0.0377511039464572+0j) [] +
(-0.0440796129025518+0j) [X0 X1 Y2 Y3] +
(0.0440796129025518+0j) [X0 Y1 Y2 X3] +
(0.0440796129025518+0j) [Y0 X1 X2 Y3] +
(-0.0440796129025518+0j) [Y0 Y1 X2 X3] +
(0.18601648886230573+0j) [Z0] +
(0.1729761013074511+0j) [Z0 Z1] +
(0.12584136558006342+0j) [Z0 Z2] +
(0.16992097848261523+0j) [Z0 Z3] +
(0.18601648886230576+0j) [Z1] +
(0.16992097848261523+0j) [Z1 Z2] +
(0.12584136558006342+0j) [Z1 Z3] +
(-0.26941693141632095+0j) [Z2] +
(0.17866777775953419+0j) [Z2 Z3] +
(-0.26941693141632095+0j) [Z3]
###Markdown
Let us compute the Hartree-Fock (HF) energy from this Hamiltonian. Since in the Jordan-Wigner transformation the qubit states $\left|0\right\rangle, \left|1\right\rangle$ correspond one-to-one to the orbital occupation numbers, the HF energy is obtained by taking the expectation value with respect to $\left|1100\right\rangle$, the state obtained by filling the orbitals from the bottom with the available electrons.
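A short side note (added illustration, assuming the qubit ordering used in the cell below, with the leftmost qubit as the most significant bit): the bit string 1100 corresponds to computational-basis index 12, which is why the HF vector printed below has its single 1 in position 12.
###Code
# Hypothetical sanity check: |1100> sits at basis index int('1100', 2) == 12
print(int('1100', 2))
###Output
_____no_output_____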
###Code
#function that computes the tensor product of its arguments
def kron_N(*ops):
tmp = ops[0]
for op in ops[1:]:
tmp = np.kron(tmp,op)
return tmp
bra0 = np.array([[1,0]])
bra1 = np.array([[0,1]])
HFbra = kron_N(bra1, bra1, bra0, bra0)
HFket = HFbra.T
print(HFbra)
jw_matrix = get_sparse_operator(jw_hamiltonian)
print(np.real(HFbra.dot(jw_matrix.dot(HFket))), molecule.hf_energy)
###Output
[[0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0]]
[[-1.11299655]] -1.1129965456691684
###Markdown
We can confirm that this agrees almost exactly with the pyscf calculation. Next, let us diagonalize the Hamiltonian and check that the result matches the Full-CI (exact) energy.
###Code
eigenenergies, eigenvecs = np.linalg.eigh(jw_matrix.toarray())
print(eigenenergies[0], molecule.fci_energy)
###Output
-1.129904784322913 -1.1299047843229135
###Markdown
We can confirm that this also agrees almost exactly. The ground-state wavefunction $\left|\psi_g\right\rangle$ is
###Code
print(eigenvecs[:,0])
###Output
[ 0. +0.j 0. +0.j 0. +0.j 0.09545811+0.j
0. +0.j 0. +0.j 0. +0.j 0. +0.j
0. +0.j 0. +0.j 0. +0.j 0. +0.j
-0.99543345+0.j 0. +0.j 0. +0.j 0. +0.j]
###Markdown
6-1. How to use OpenFermion In this section we use OpenFermion [1], a Python library for quantum chemistry calculations, to show how to convert the Hamiltonian of an interacting electron system into a form that is easy to handle on a quantum computer. OpenFermion provides interfaces to the open-source quantum chemistry libraries [Psi4](http://www.psicode.org) and [PySCF](https://github.com/pyscf/pyscf), so that one can obtain the electronic Hamiltonian appearing in quantum chemistry simply by entering the molecular structure, without having to understand the details of those libraries. Here we use PySCF.
###Code
## When running on Google Colaboratory, downgrade scipy to work around a bug
!pip install scipy==1.2.1
## Run this cell if the required libraries are not installed
## On Google Colaboratory the message 'You must restart the runtime in order to use newly installed versions.' appears, but you can ignore it.
## Restarting the runtime will cause a crash.
!pip install qulacs pyscf openfermion openfermionpyscf
#import the required libraries
from openfermion.hamiltonians import MolecularData
from openfermionpyscf import run_pyscf
from openfermion.transforms import get_fermion_operator, jordan_wigner, bravyi_kitaev
from openfermion.utils import eigenspectrum
from openfermion.transforms import get_sparse_operator
from openfermion.ops import FermionOperator
from pyscf import fci
import numpy as np
import matplotlib.pyplot as plt
###Output
_____no_output_____
###Markdown
Let's calculate the hydrogen molecule In openfermion, the data describing a molecule is entered into a class called MolecularData.
###Code
#define constants
basis = "sto-3g" #basis set
multiplicity = 1 #spin multiplicity
charge = 0 #total charge for the molecule
distance = 0.65
geometry = [("H",(0,0,0)),("H", (0,0,distance))] #xyz coordinates for atoms
description = str(distance) #description for the psi4 output file
molecule = MolecularData(geometry, basis, multiplicity, charge, description)
###Output
_____no_output_____
###Markdown
Explanation of the variables The meaning of the variables appearing in the code above is explained below. basis: basis set This sets the basis functions used to represent the molecular orbitals. There are many basis sets, such as sto-3g and 6-31G. The sto-3g (Slater Type Orbital - 3 gaussian) basis used here approximates a Slater-type orbital by three Gaussians. A Slater-type orbital is an orbital modeled on the hydrogen-atom solutions: its radial part is $$R_{nl}(r) = r^{n-l} \exp \left(-\frac{Z-s}{na_0}r\right),$$ and its angular part is the spherical harmonic $Y_{lm}(\theta,\phi)$. In sto-3g, this radial wavefunction $R_{nl}(r)$ is approximated by three Gaussians. multiplicity: spin multiplicity Since an electron carries spin 1/2, a single isolated electron has spin multiplicity 2. In the hydrogen molecule, however, the two electrons are expected to form a singlet in the ground state, so the total spin is 0. Spin 0 corresponds to a single state, so the multiplicity is set to 1 in this case. charge: total charge Enter the total charge of the system; it becomes + or − when ions are considered. geometry: nuclear configuration Specify the atomic species and their x, y, z coordinates. description The output of the pyscf calculation is saved inside the directory where the openfermion library is installed; this variable determines the name of that file. Calculation with PySCF Let us pass the MolecularData defined above to the function `run_pyscf` and run the quantum chemistry calculation with PySCF. It should finish within a few seconds.
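In other words (an added note), sto-3g replaces each Slater-type radial function by a fixed contraction of three Gaussians, $$R_{1s}(r) \approx \sum_{k=1}^{3} d_k\, e^{-\alpha_k r^2},$$ where the contraction coefficients $d_k$ and exponents $\alpha_k$ are tabulated constants of the basis set rather than variational parameters.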
###Code
molecule = run_pyscf(molecule,run_scf=1,run_fci=1)
###Output
_____no_output_____
###Markdown
HF & Full-CI energy Let us look at the Hartree-Fock energy and the Full-CI energy (= the exact ground-state energy) obtained by the PySCF calculation. (1 Hartree = 27.2116 eV)
###Code
print("HF energy: {} (Hartree)".format(molecule.hf_energy))
print("FCI energy: {} (Hartree)".format(molecule.fci_energy))
###Output
HF energy: -1.1129965456691682 (Hartree)
FCI energy: -1.1299047843229137 (Hartree)
###Markdown
One-electron integrals $h_{ij}$ and two-electron integrals $h_{ijkl}$ Quantities such as the one-electron and two-electron integrals are also stored in the MolecularData class.
###Code
print(molecule.one_body_integrals)
print(molecule.two_body_integrals)
###Output
[[[[ 6.91904405e-01 -1.29088971e-16]
[-1.33947330e-16 1.76318452e-01]]
[[-1.33947330e-16 1.76318452e-01]
[ 6.79683914e-01 -2.19293917e-16]]]
[[[-1.29088971e-16 6.79683914e-01]
[ 1.76318452e-01 -2.28497801e-17]]
[[ 1.76318452e-01 -2.28497801e-17]
[-2.19293917e-16 7.14671111e-01]]]]
###Markdown
Second-quantized Hamiltonian From these integral values, openfermion computes the second-quantized Hamiltonian $$H = \sum_{ij} h_{ij}c_i^\dagger c_j + \sum_{ijkl} h_{ijkl} c_i^\dagger c_j^\dagger c_k c_l$$ (for second quantization, see e.g. reference [2]). Calling the `get_molecular_hamiltonian` method computes this Hamiltonian. In the printed output, (3, 1) stands for $c_3^\dagger$, (1, 0) stands for $c_1$, and so on.
###Code
print(molecule.get_molecular_hamiltonian())
###Output
() 0.8141187860307693
((0, 1), (0, 0)) -1.309509868464871
((1, 1), (1, 0)) -1.309509868464871
((2, 1), (2, 0)) -0.4100263808117837
((3, 1), (3, 0)) -0.4100263808117837
((0, 1), (0, 1), (0, 0), (0, 0)) 0.34595220261490217
((0, 1), (0, 1), (2, 0), (2, 0)) 0.0881592258051036
((0, 1), (1, 1), (1, 0), (0, 0)) 0.34595220261490217
((0, 1), (1, 1), (3, 0), (2, 0)) 0.0881592258051036
((0, 1), (2, 1), (0, 0), (2, 0)) 0.0881592258051036
((0, 1), (2, 1), (2, 0), (0, 0)) 0.33984195696523056
((0, 1), (3, 1), (1, 0), (2, 0)) 0.0881592258051036
((0, 1), (3, 1), (3, 0), (0, 0)) 0.33984195696523056
((1, 1), (0, 1), (0, 0), (1, 0)) 0.34595220261490217
((1, 1), (0, 1), (2, 0), (3, 0)) 0.0881592258051036
((1, 1), (1, 1), (1, 0), (1, 0)) 0.34595220261490217
((1, 1), (1, 1), (3, 0), (3, 0)) 0.0881592258051036
((1, 1), (2, 1), (0, 0), (3, 0)) 0.0881592258051036
((1, 1), (2, 1), (2, 0), (1, 0)) 0.33984195696523056
((1, 1), (3, 1), (1, 0), (3, 0)) 0.0881592258051036
((1, 1), (3, 1), (3, 0), (1, 0)) 0.33984195696523056
((2, 1), (0, 1), (0, 0), (2, 0)) 0.3398419569652304
((2, 1), (0, 1), (2, 0), (0, 0)) 0.0881592258051036
((2, 1), (1, 1), (1, 0), (2, 0)) 0.3398419569652304
((2, 1), (1, 1), (3, 0), (0, 0)) 0.0881592258051036
((2, 1), (2, 1), (0, 0), (0, 0)) 0.0881592258051036
((2, 1), (2, 1), (2, 0), (2, 0)) 0.3573355555190683
((2, 1), (3, 1), (1, 0), (0, 0)) 0.0881592258051036
((2, 1), (3, 1), (3, 0), (2, 0)) 0.3573355555190683
((3, 1), (0, 1), (0, 0), (3, 0)) 0.3398419569652304
((3, 1), (0, 1), (2, 0), (1, 0)) 0.0881592258051036
((3, 1), (1, 1), (1, 0), (3, 0)) 0.3398419569652304
((3, 1), (1, 1), (3, 0), (1, 0)) 0.0881592258051036
((3, 1), (2, 1), (0, 0), (1, 0)) 0.0881592258051036
((3, 1), (2, 1), (2, 0), (3, 0)) 0.3573355555190683
((3, 1), (3, 1), (1, 0), (1, 0)) 0.0881592258051036
((3, 1), (3, 1), (3, 0), (3, 0)) 0.3573355555190683
###Markdown
Converting to operators that a quantum computer can handle easily The operators that are easiest to handle on a quantum computer are the Pauli operators $I, X, Y, Z$ and their tensor products. Therefore, to treat the electronic Hamiltonian on a quantum computer, the second-quantized Hamiltonian $$H_{fermion} = \sum_{ij} h_{ij}c_i^\dagger c_j + \sum_{ijkl} h_{ijkl} c_i^\dagger c_j^\dagger c_k c_l$$ is usually converted into the form $$H_{qubit} = \sum_{P\in \{I,X,Y,Z\}^{\otimes n}} h_{P} P.$$ Various conversion schemes have been proposed; here we use the simplest one, known as the Jordan-Wigner transformation. In the Jordan-Wigner transformation, molecular orbital $i$ is assigned to the $i$-th qubit, and the situation in which an electron occupies that orbital is represented by $|1\rangle$, otherwise by $|0\rangle$. Under this convention, constructing Pauli operators that satisfy the fermionic anticommutation relations $$\{c^\dagger_i, c^\dagger_j\} = c^\dagger_i c^\dagger_j + c^\dagger_j c^\dagger_i = 0, \:\{c_i, c_j\} = 0, \:\{c^\dagger_i, c_j\} = \delta_{ij}$$ yields the correspondence $$c^{\dagger}_{j} \leftrightarrow \frac{X_j-iY_j}{2}\otimes Z_{j-1}\otimes Z_{j-2} \cdots Z_{1}.$$ For transformation schemes other than Jordan-Wigner, see e.g. [2][3]. openfermion implements the Jordan-Wigner transformation: passing a `FermionOperator` to the `jordan_wigner` function returns the `QubitOperator` corresponding to its Jordan-Wigner transform. Below, a `FermionOperator` is built from the hydrogen-molecule `MolecularData` created above and Jordan-Wigner transformed, converting the hydrogen-molecule Hamiltonian into a form that a quantum computer can handle easily.
###Code
jw_hamiltonian = jordan_wigner(get_fermion_operator(molecule.get_molecular_hamiltonian()))
print(jw_hamiltonian)
###Output
(0.03775110394645719+0j) [] +
(-0.04407961290255181+0j) [X0 X1 Y2 Y3] +
(0.04407961290255181+0j) [X0 Y1 Y2 X3] +
(0.04407961290255181+0j) [Y0 X1 X2 Y3] +
(-0.04407961290255181+0j) [Y0 Y1 X2 X3] +
(0.1860164888623058+0j) [Z0] +
(0.17297610130745106+0j) [Z0 Z1] +
(0.12584136558006342+0j) [Z0 Z2] +
(0.16992097848261523+0j) [Z0 Z3] +
(0.18601648886230565+0j) [Z1] +
(0.16992097848261523+0j) [Z1 Z2] +
(0.12584136558006342+0j) [Z1 Z3] +
(-0.26941693141632106+0j) [Z2] +
(0.17866777775953419+0j) [Z2 Z3] +
(-0.26941693141632106+0j) [Z3]
###Markdown
Let us compute the Hartree-Fock (HF) energy from this Hamiltonian. Since in the Jordan-Wigner transformation the qubit states $\left|0\right\rangle, \left|1\right\rangle$ correspond one-to-one to the orbital occupation numbers, the HF energy is obtained by taking the expectation value with respect to $\left|1100\right\rangle$, the state obtained by filling the orbitals from the bottom with the available electrons.
###Code
#function that computes the tensor product of its arguments
def kron_N(*ops):
tmp = ops[0]
for op in ops[1:]:
tmp = np.kron(tmp,op)
return tmp
bra0 = np.array([[1,0]])
bra1 = np.array([[0,1]])
HFbra = kron_N(bra1, bra1, bra0, bra0)
HFket = HFbra.T
print(HFbra)
jw_matrix = get_sparse_operator(jw_hamiltonian)
print(np.real(HFbra.dot(jw_matrix.dot(HFket))), molecule.hf_energy)
###Output
[[0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0]]
[[-1.11299655]] -1.1129965456691682
###Markdown
We can confirm that this agrees almost exactly with the pyscf calculation. Next, let us diagonalize the Hamiltonian and check that the result matches the Full-CI (exact) energy.
###Code
from scipy.sparse.linalg import eigs
eigenenergies, eigenvecs = eigs(jw_matrix)
print(eigenenergies[0], molecule.fci_energy)
###Output
(-1.1299047843229122+8.66058178452165e-18j) -1.1299047843229137
###Markdown
We can confirm that this also agrees almost exactly. The ground-state wavefunction $\left|\psi_g\right\rangle$ is
###Code
print(eigenvecs[:,0])
###Output
[ 1.80390379e-16+3.83799405e-19j -1.44425855e-16-2.44300624e-16j
2.11047727e-17-5.26747266e-17j -8.90209542e-02+3.44604214e-02j
-6.87721816e-18+8.01380721e-18j -1.32249539e-16-9.52872766e-17j
1.31086550e-16+2.60415224e-16j -2.50025424e-17+1.94665148e-17j
2.96573140e-17-1.12134985e-19j -5.80783492e-17-1.38210176e-16j
9.90624805e-17-1.03376284e-16j -1.67301469e-17+4.96855838e-17j
9.28307030e-01-3.59351927e-01j -3.98483320e-17+1.69508813e-16j
1.44476911e-16-2.91500912e-16j 9.81968448e-17-4.33266880e-17j]
###Markdown
6-1. How to use OpenFermion In this section we use OpenFermion [1], a Python library for quantum chemistry calculations, to show how to convert the Hamiltonian of an interacting electron system into a form that is easy to handle on a quantum computer. OpenFermion provides interfaces to the open-source quantum chemistry libraries [Psi4](http://www.psicode.org) and [PySCF](https://github.com/pyscf/pyscf), so that one can obtain the electronic Hamiltonian appearing in quantum chemistry simply by entering the molecular structure, without having to understand the details of those libraries. Here we use PySCF.
###Code
## When running on Google Colaboratory, downgrade scipy to work around a bug
!pip install scipy==1.2.1
## Run this cell if the required libraries are not installed
## On Google Colaboratory the message 'You must restart the runtime in order to use newly installed versions.' appears, but you can ignore it.
## Restarting the runtime will cause a crash.
!pip install qulacs pyscf openfermion openfermionpyscf
## Run this only on Google Colaboratory or in a Jupyter notebook environment on Linux or Mac.
## It makes Qulacs error messages display correctly.
!pip3 install wurlitzer
%load_ext wurlitzer
#import the required libraries
from openfermion.hamiltonians import MolecularData
from openfermionpyscf import run_pyscf
from openfermion.transforms import get_fermion_operator, jordan_wigner, bravyi_kitaev
from openfermion.utils import eigenspectrum
from openfermion.transforms import get_sparse_operator
from openfermion.ops import FermionOperator
from pyscf import fci
import numpy as np
import matplotlib.pyplot as plt
###Output
_____no_output_____
###Markdown
Let's calculate the hydrogen molecule In openfermion, the data describing a molecule is entered into a class called MolecularData.
###Code
#define constants
basis = "sto-3g" #basis set
multiplicity = 1 #spin multiplicity
charge = 0 #total charge for the molecule
distance = 0.65
geometry = [("H",(0,0,0)),("H", (0,0,distance))] #xyz coordinates for atoms
description = str(distance) #description for the psi4 output file
molecule = MolecularData(geometry, basis, multiplicity, charge, description)
###Output
_____no_output_____
###Markdown
Explanation of the variables The meaning of the variables appearing in the code above is explained below. basis: basis set This sets the basis functions used to represent the molecular orbitals. There are many basis sets, such as sto-3g and 6-31G. The sto-3g (Slater Type Orbital - 3 gaussian) basis used here approximates a Slater-type orbital by three Gaussians. A Slater-type orbital is an orbital modeled on the hydrogen-atom solutions: its radial part is $$R_{nl}(r) = r^{n-l} \exp \left(-\frac{Z-s}{na_0}r\right),$$ and its angular part is the spherical harmonic $Y_{lm}(\theta,\phi)$. In sto-3g, this radial wavefunction $R_{nl}(r)$ is approximated by three Gaussians. multiplicity: spin multiplicity Since an electron carries spin 1/2, a single isolated electron has spin multiplicity 2. In the hydrogen molecule, however, the two electrons are expected to form a singlet in the ground state, so the total spin is 0. Spin 0 corresponds to a single state, so the multiplicity is set to 1 in this case. charge: total charge Enter the total charge of the system; it becomes + or − when ions are considered. geometry: nuclear configuration Specify the atomic species and their x, y, z coordinates. description The output of the pyscf calculation is saved inside the directory where the openfermion library is installed; this variable determines the name of that file. Calculation with PySCF Let us pass the MolecularData defined above to the function `run_pyscf` and run the quantum chemistry calculation with PySCF. It should finish within a few seconds.
###Code
molecule = run_pyscf(molecule,run_scf=1,run_fci=1)
###Output
_____no_output_____
###Markdown
HF & Full-CI energy Let us look at the Hartree-Fock energy and the Full-CI energy (= the exact ground-state energy) obtained by the PySCF calculation. (1 Hartree = 27.2116 eV)
###Code
print("HF energy: {} (Hartree)".format(molecule.hf_energy))
print("FCI energy: {} (Hartree)".format(molecule.fci_energy))
###Output
HF energy: -1.1129965456691682 (Hartree)
FCI energy: -1.1299047843229137 (Hartree)
###Markdown
One-electron integrals $h_{ij}$ and two-electron integrals $h_{ijkl}$ Quantities such as the one-electron and two-electron integrals are also stored in the MolecularData class.
###Code
print(molecule.one_body_integrals)
print(molecule.two_body_integrals)
###Output
[[[[ 6.91904405e-01 -1.29088971e-16]
[-1.33947330e-16 1.76318452e-01]]
[[-1.33947330e-16 1.76318452e-01]
[ 6.79683914e-01 -2.19293917e-16]]]
[[[-1.29088971e-16 6.79683914e-01]
[ 1.76318452e-01 -2.28497801e-17]]
[[ 1.76318452e-01 -2.28497801e-17]
[-2.19293917e-16 7.14671111e-01]]]]
###Markdown
Second-quantized Hamiltonian From these integral values, openfermion computes the second-quantized Hamiltonian $$H = \sum_{ij} h_{ij}c_i^\dagger c_j + \sum_{ijkl} h_{ijkl} c_i^\dagger c_j^\dagger c_k c_l$$ (for second quantization, see e.g. reference [2]). Calling the `get_molecular_hamiltonian` method computes this Hamiltonian. In the printed output, (3, 1) stands for $c_3^\dagger$, (1, 0) stands for $c_1$, and so on.
###Code
print(molecule.get_molecular_hamiltonian())
###Output
() 0.8141187860307693
((0, 1), (0, 0)) -1.309509868464871
((1, 1), (1, 0)) -1.309509868464871
((2, 1), (2, 0)) -0.4100263808117837
((3, 1), (3, 0)) -0.4100263808117837
((0, 1), (0, 1), (0, 0), (0, 0)) 0.34595220261490217
((0, 1), (0, 1), (2, 0), (2, 0)) 0.0881592258051036
((0, 1), (1, 1), (1, 0), (0, 0)) 0.34595220261490217
((0, 1), (1, 1), (3, 0), (2, 0)) 0.0881592258051036
((0, 1), (2, 1), (0, 0), (2, 0)) 0.0881592258051036
((0, 1), (2, 1), (2, 0), (0, 0)) 0.33984195696523056
((0, 1), (3, 1), (1, 0), (2, 0)) 0.0881592258051036
((0, 1), (3, 1), (3, 0), (0, 0)) 0.33984195696523056
((1, 1), (0, 1), (0, 0), (1, 0)) 0.34595220261490217
((1, 1), (0, 1), (2, 0), (3, 0)) 0.0881592258051036
((1, 1), (1, 1), (1, 0), (1, 0)) 0.34595220261490217
((1, 1), (1, 1), (3, 0), (3, 0)) 0.0881592258051036
((1, 1), (2, 1), (0, 0), (3, 0)) 0.0881592258051036
((1, 1), (2, 1), (2, 0), (1, 0)) 0.33984195696523056
((1, 1), (3, 1), (1, 0), (3, 0)) 0.0881592258051036
((1, 1), (3, 1), (3, 0), (1, 0)) 0.33984195696523056
((2, 1), (0, 1), (0, 0), (2, 0)) 0.3398419569652304
((2, 1), (0, 1), (2, 0), (0, 0)) 0.0881592258051036
((2, 1), (1, 1), (1, 0), (2, 0)) 0.3398419569652304
((2, 1), (1, 1), (3, 0), (0, 0)) 0.0881592258051036
((2, 1), (2, 1), (0, 0), (0, 0)) 0.0881592258051036
((2, 1), (2, 1), (2, 0), (2, 0)) 0.3573355555190683
((2, 1), (3, 1), (1, 0), (0, 0)) 0.0881592258051036
((2, 1), (3, 1), (3, 0), (2, 0)) 0.3573355555190683
((3, 1), (0, 1), (0, 0), (3, 0)) 0.3398419569652304
((3, 1), (0, 1), (2, 0), (1, 0)) 0.0881592258051036
((3, 1), (1, 1), (1, 0), (3, 0)) 0.3398419569652304
((3, 1), (1, 1), (3, 0), (1, 0)) 0.0881592258051036
((3, 1), (2, 1), (0, 0), (1, 0)) 0.0881592258051036
((3, 1), (2, 1), (2, 0), (3, 0)) 0.3573355555190683
((3, 1), (3, 1), (1, 0), (1, 0)) 0.0881592258051036
((3, 1), (3, 1), (3, 0), (3, 0)) 0.3573355555190683
###Markdown
Converting to operators that a quantum computer can handle easily The operators that are easiest to handle on a quantum computer are the Pauli operators $I, X, Y, Z$ and their tensor products. Therefore, to treat the electronic Hamiltonian on a quantum computer, the second-quantized Hamiltonian $$H_{fermion} = \sum_{ij} h_{ij}c_i^\dagger c_j + \sum_{ijkl} h_{ijkl} c_i^\dagger c_j^\dagger c_k c_l$$ is usually converted into the form $$H_{qubit} = \sum_{P\in \{I,X,Y,Z\}^{\otimes n}} h_{P} P.$$ Various conversion schemes have been proposed; here we use the simplest one, known as the Jordan-Wigner transformation. In the Jordan-Wigner transformation, molecular orbital $i$ is assigned to the $i$-th qubit, and the situation in which an electron occupies that orbital is represented by $|1\rangle$, otherwise by $|0\rangle$. Under this convention, constructing Pauli operators that satisfy the fermionic anticommutation relations $$\{c^\dagger_i, c^\dagger_j\} = c^\dagger_i c^\dagger_j + c^\dagger_j c^\dagger_i = 0, \:\{c_i, c_j\} = 0, \:\{c^\dagger_i, c_j\} = \delta_{ij}$$ yields the correspondence $$c^{\dagger}_{j} \leftrightarrow \frac{X_j-iY_j}{2}\otimes Z_{j-1}\otimes Z_{j-2} \cdots Z_{1}.$$ For transformation schemes other than Jordan-Wigner, see e.g. [2][3]. openfermion implements the Jordan-Wigner transformation: passing a `FermionOperator` to the `jordan_wigner` function returns the `QubitOperator` corresponding to its Jordan-Wigner transform. Below, a `FermionOperator` is built from the hydrogen-molecule `MolecularData` created above and Jordan-Wigner transformed, converting the hydrogen-molecule Hamiltonian into a form that a quantum computer can handle easily.
###Code
jw_hamiltonian = jordan_wigner(get_fermion_operator(molecule.get_molecular_hamiltonian()))
print(jw_hamiltonian)
###Output
(0.03775110394645719+0j) [] +
(-0.04407961290255181+0j) [X0 X1 Y2 Y3] +
(0.04407961290255181+0j) [X0 Y1 Y2 X3] +
(0.04407961290255181+0j) [Y0 X1 X2 Y3] +
(-0.04407961290255181+0j) [Y0 Y1 X2 X3] +
(0.1860164888623058+0j) [Z0] +
(0.17297610130745106+0j) [Z0 Z1] +
(0.12584136558006342+0j) [Z0 Z2] +
(0.16992097848261523+0j) [Z0 Z3] +
(0.18601648886230565+0j) [Z1] +
(0.16992097848261523+0j) [Z1 Z2] +
(0.12584136558006342+0j) [Z1 Z3] +
(-0.26941693141632106+0j) [Z2] +
(0.17866777775953419+0j) [Z2 Z3] +
(-0.26941693141632106+0j) [Z3]
###Markdown
Let us compute the Hartree-Fock (HF) energy from this Hamiltonian. Since in the Jordan-Wigner transformation the qubit states $\left|0\right\rangle, \left|1\right\rangle$ correspond one-to-one to the orbital occupation numbers, the HF energy is obtained by taking the expectation value with respect to $\left|1100\right\rangle$, the state obtained by filling the orbitals from the bottom with the available electrons.
###Code
#function that computes the tensor product of its arguments
def kron_N(*ops):
tmp = ops[0]
for op in ops[1:]:
tmp = np.kron(tmp,op)
return tmp
bra0 = np.array([[1,0]])
bra1 = np.array([[0,1]])
HFbra = kron_N(bra1, bra1, bra0, bra0)
HFket = HFbra.T
print(HFbra)
jw_matrix = get_sparse_operator(jw_hamiltonian)
print(np.real(HFbra.dot(jw_matrix.dot(HFket))), molecule.hf_energy)
###Output
[[0 0 0 0 0 0 0 0 0 0 0 0 1 0 0 0]]
[[-1.11299655]] -1.1129965456691682
###Markdown
We can confirm that this agrees almost exactly with the pyscf calculation. Next, let us diagonalize the Hamiltonian and check that the result matches the Full-CI (exact) energy.
###Code
from scipy.sparse.linalg import eigs
eigenenergies, eigenvecs = eigs(jw_matrix)
print(eigenenergies[0], molecule.fci_energy)
###Output
(-1.1299047843229122+8.66058178452165e-18j) -1.1299047843229137
###Markdown
We can confirm that this also agrees almost exactly. The ground-state wavefunction $\left|\psi_g\right\rangle$ is
###Code
print(eigenvecs[:,0])
###Output
[ 1.80390379e-16+3.83799405e-19j -1.44425855e-16-2.44300624e-16j
2.11047727e-17-5.26747266e-17j -8.90209542e-02+3.44604214e-02j
-6.87721816e-18+8.01380721e-18j -1.32249539e-16-9.52872766e-17j
1.31086550e-16+2.60415224e-16j -2.50025424e-17+1.94665148e-17j
2.96573140e-17-1.12134985e-19j -5.80783492e-17-1.38210176e-16j
9.90624805e-17-1.03376284e-16j -1.67301469e-17+4.96855838e-17j
9.28307030e-01-3.59351927e-01j -3.98483320e-17+1.69508813e-16j
1.44476911e-16-2.91500912e-16j 9.81968448e-17-4.33266880e-17j]
|
src/notebooks/pre_process.ipynb | ###Markdown
Pre-processing Reads the (currently) muscle and thyroid expression matrices, cleans them, performs an 80-20 train-test split, and selects the top genes (k=517 per tissue by univariate F-test correlation on the training set; the union of the two sets, roughly 1000 genes) together with the age label.
###Code
import numpy as np
import pandas as pd
import sys
import argparse
import matplotlib.pyplot as plt
%matplotlib inline
plt.style.use('ggplot')
from sklearn import linear_model
from sklearn.model_selection import train_test_split
import os
from sklearn.feature_selection import SelectKBest, f_regression, VarianceThreshold
from sklearn.metrics import mean_squared_error
import math
cur_path = os.getcwd()
cur_path += "/../.."
parser = argparse.ArgumentParser(description='Process display arguments')
parser.add_argument("-f", "--jupyter-json")
parser.add_argument("-muscle-file", "--muscle-file", default=cur_path+"/data/GTEx_Analysis_v7_eQTL_expression_matrices/Muscle_Skeletal.v7.normalized_expression.bed")
parser.add_argument("-thyroid-file", "--thyroid-file", default=cur_path+"/data/GTEx_Analysis_v7_eQTL_expression_matrices/Thyroid.v7.normalized_expression.bed")
parser.add_argument("-label-file", "--label-file", default=cur_path+"/data/GTEx_v7_Annotations_SubjectPhenotypesDS.txt")
parser.add_argument("-output-dir", "--output-dir", default=cur_path+"/processed_data")
parser.add_argument("-phenotypes-file", "--phenotypes-file", default=cur_path+"/data/GTEx_v7_Annotations_SubjectPhenotypesDS.txt")
args = parser.parse_args()
if not os.path.exists(args.output_dir):
os.mkdir(args.output_dir)
def gen_matrix(file_path):
df = pd.read_csv(file_path, header=0, sep='\t', dtype=str)
df.drop(["#chr", "start", "end"], axis=1, inplace=True)
df.set_index("gene_id", inplace=True)
df = df.transpose()
df.columns.name = None
return df.apply(pd.to_numeric)
muscle_df = gen_matrix(args.muscle_file)
thyroid_df = gen_matrix(args.thyroid_file)
inter_instances = list(set(muscle_df.index) & set(thyroid_df.index))
inter_features = list(set(muscle_df.columns) & set(thyroid_df.columns))
muscle_df = muscle_df[inter_features].loc[inter_instances]
thyroid_df = thyroid_df[inter_features].loc[inter_instances]
train_muscle, test_muscle = train_test_split(muscle_df, test_size=0.2, shuffle=False)
train_thyroid, test_thyroid = train_test_split(thyroid_df, test_size=0.2, shuffle=False)
labels = pd.read_csv(args.phenotypes_file, header=0, sep='\t', dtype=str)
labels = labels.set_index("SUBJID").drop(["SEX", "DTHHRDY"], axis=1)
labels.index.name = None
labels["AGE"] = labels["AGE"].apply(lambda x: int(x[0:1]))
labels = labels.loc[inter_instances]
def select_features(train_x, train_y):
selector = SelectKBest(f_regression, k=517)
selector.fit(train_x, train_y.values.ravel())
col_indices = selector.get_support(indices=True)
return col_indices
muscle_features = select_features(train_muscle, labels.loc[train_muscle.index])
thyroid_features = select_features(train_thyroid, labels.loc[train_thyroid.index])
f_features = list(set(muscle_features) | set(thyroid_features))
muscle_df[muscle_df.columns[f_features]].to_csv(args.output_dir + "/full_muscle.csv")
train_muscle[train_muscle.columns[f_features]].to_csv(args.output_dir + "/train_muscle.csv")
test_muscle[test_muscle.columns[f_features]].to_csv(args.output_dir + "/test_muscle.csv")
thyroid_df[thyroid_df.columns[f_features]].to_csv(args.output_dir + "/full_thyroid.csv")
train_thyroid[train_thyroid.columns[f_features]].to_csv(args.output_dir + "/train_thyroid.csv")
test_thyroid[test_thyroid.columns[f_features]].to_csv(args.output_dir + "/test_thyroid.csv")
labels.to_csv(args.output_dir + "/labels.csv")
###Output
_____no_output_____ |
content/notebook/intro_to_python.ipynb | ###Markdown
https://github.com/ipython/ipython/wiki/A-gallery-of-interesting-IPython-Notebooks
###Code
for 索引 in range(6):
print(索引)
from IPython.lib import passwd
passwd()  # prompts for a password and returns its hash (used when configuring a notebook server)
字串資料 = "這是一個字串資料"
print("字串資料的長度為:", len(字串資料))
print("-"*30)
位址 = 0
for 索引 in 字串資料:
print(索引)
print("--- 以上為列印索引 ---")
print(字串資料[位址])
print("--- 以上為列印字串資料數列 ---")
位址 = 位址 + 1
# This program demos:
# print()
# def
# range()
# list()
# mutating list contents
# reversed()
# zip()
# if elif else
def 列印星號(輸入變數):
數列1 = list(range(輸入變數))
數列2 = [x+輸入變數-1 for x in 數列1]
反數列2 = reversed(數列1)
集合 = zip(數列2, 反數列2)
for 索引 in 集合:
for 數 in range(輸入變數*2):
if 數 == 索引[0] or 數 == 索引[1]:
print("*", end="")
else:
print(" ", end="")
print()
def 列印星號2(輸入變數):
數列1 = list(range(輸入變數))
數列2 = [x+輸入變數 for x in 數列1]
數列3 = [x+1 for x in 數列1]
反數列2 = reversed(數列2)
集合 = zip(數列3, 反數列2)
for 索引 in 集合:
for 數 in range(輸入變數*2):
if 數 == 索引[0] or 數 == 索引[1]:
print("*", end="")
else:
print(" ", end="")
print()
def 列印星號3(輸入變數):
數列1 = list(range(輸入變數))
數列2 = [x+輸入變數-1 for x in 數列1]
反數列2 = reversed(數列1)
集合 = zip(數列2, 反數列2)
for 索引 in 集合:
for 數 in range(輸入變數*2-1):
if 數 <= 索引[0] and 數 >= 索引[1]:
print("*", end="")
else:
print(" ", end="")
print()
def 列印星號4(輸入變數):
數列1 = list(range(輸入變數))
數列2 = [x+輸入變數 for x in 數列1]
數列3 = [x+1 for x in 數列1]
反數列2 = reversed(數列2)
集合 = zip(數列3, 反數列2)
for 索引 in 集合:
for 數 in range(輸入變數*2+1):
if 數 >= 索引[0] and 數 <= 索引[1]:
print("2", end="")
else:
print("1", end="")
print()
def 列印菱形(輸入變數):
列印星號3(輸入變數)
列印星號4(輸入變數-1)
列印菱形(11)
# Exercise: modify this program so that the interior of the diamond is also filled with stars
def 增量列印(行數):
    # basic printing with the print() function
    # changing 4 to 11 below repeats the printing for 10 lines
#for 列印行數 in range(1, 4):
for 列印行數 in range(1, 行數+1):
print("Welcome to Python3 ", end="")
for 列印個數 in range(列印行數):
print("*", end="")
        # after printing the stars for this row, an extra newline is required, otherwise every row ends up on the same line
print()
增量列印(10)
def 菱形(n):
數列1 = [x+n for x in range(0, n)]
數列2 = list(range(n, 0, -1))
數列3 = zip(數列1, 數列2)
for i in 數列3:
for j in range(2*n):
if j == i[0] or j == i[1]:
print("*", end="")
            else:
print(" ", end="")
print()
數列4 = [x for x in range(2, n+1)]
數列5 = [x+n-2 for x in range(n, 0, -1)]
數列6 = zip(數列4, 數列5)
for i in 數列6:
for j in range(2*n):
if j == i[0] or j == i[1]:
print("*", end="")
else:
print(" ", end="")
print()
n = 20
菱形(n)
from mpl_toolkits.mplot3d import Axes3D
from matplotlib import cm
from matplotlib.colors import LogNorm
import matplotlib.pyplot as plt
try:
import numpy as np
except:
exit()
from deap import benchmarks
def bohachevsky_arg0(sol):
return benchmarks.bohachevsky(sol)[0]
fig = plt.figure()
ax = Axes3D(fig, azim = -29, elev = 50)
# ax = Axes3D(fig)
X = np.arange(-15, 15, 0.5)
Y = np.arange(-15, 15, 0.5)
X, Y = np.meshgrid(X, Y)
Z = np.zeros(X.shape)
for i in range(X.shape[0]):
for j in range(X.shape[1]):
Z[i,j] = bohachevsky_arg0((X[i,j],Y[i,j]))
ax.plot_surface(X, Y, Z, rstride=1, cstride=1, norm=LogNorm(), cmap=cm.jet, linewidth=0.2)
plt.xlabel("x")
plt.ylabel("y")
plt.show()
from mpl_toolkits.mplot3d import Axes3D
from matplotlib import cm
from matplotlib.colors import LogNorm
import matplotlib.pyplot as plt
try:
import numpy as np
except:
exit()
from deap import benchmarks
def untuple(sol):
return benchmarks.himmelblau(sol)[0]
fig = plt.figure()
ax = Axes3D(fig, azim = -29, elev = 49)
X = np.arange(-6, 6, 0.1)
Y = np.arange(-6, 6, 0.1)
X, Y = np.meshgrid(X, Y)
Z = np.array(list(map(untuple, zip(X,Y))))
ax.plot_surface(X, Y, Z, rstride=1, cstride=1, norm=LogNorm(), cmap=cm.jet, linewidth=0.2)
plt.xlabel("x")
plt.ylabel("y")
plt.show()
from NewTurtle import Turtle
t=Turtle()
t.forward(100)
###Output
WARNING: ip.register_post_execute is deprecated, use ip.events.register('post_run_cell', func) instead.
|
MRM.ipynb | ###Markdown
2020-12-15: solve for the Magia Record mental (spirit enhancement) tree weights
###Code
import numpy as np
import time
#from itertools import combinations, permutations
import random
LL = lambda x: [[i] for i in x]#wrap each element of x in its own list
def II(result,iil,Nbr):
'''
    Recursively build all non-decreasing index sequences of length Nbr from the values in iil.
    iil: a list of single-element lists (see LL); Nbr: target length of each sequence.
    Typical call: II([[]],LL(range(1,point+2)),Nbr-1)
'''
result2=[]
for x in iil:
#print(result,iil)
result2 += [subset + x for subset in result]
if len(result2[0])==Nbr: return result2
#collecting
ALL=[]
for the in result2:
a=II([the],LL(range(max(the),iil[-1][0]+1)),Nbr)
for a2 in a:
ALL.append(a2)
return ALL
def DP(point,Nbr):
'''
    Enumerate every way to distribute `point` points over `Nbr` branches
    (each branch receives between 0 and point, and the totals sum to point). Requires Nbr >= 2 and point >= 2.
'''
c=[]
for i2 in II([[]],LL(range(1,point+2)),Nbr-1):
d=[i2[0]-1]+list(np.diff(i2))+[point-i2[-1]+1]
#f=d.copy()
#f.reverse()
c.append(d)
#c.append(f)
#temp=[]#remove repeated. https://blog.csdn.net/Jerry_1126/article/details/79843751
#[temp.append(i) for i in c if not i in temp]
return c#temp
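# Illustrative check (not in the original notebook): DP enumerates every split of
# `point` points over `Nbr` branches, e.g. DP(3, 2) -> [[0, 3], [1, 2], [2, 1], [3, 0]].
# print(DP(3, 2))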
data=np.loadtxt('HoMaMS.csv','str',delimiter=',')
#data
#data[0,0]=data[0,0][1:]
data
###Output
_____no_output_____
###Markdown
This cell is for girls with 6 branches (e.g. Iroha). Don't just run it — check whether it is needed first.
###Code
# collapse
for i in range(len(data)):
data[i,0]=str(int(data[i,0][0])%3+1)+data[i,0]
data = np.concatenate((data, [['1',9999],['2',9999],['3',9999]]), axis=0)
data
BM={}#branch memory weight
BMt={}# 's text
#DPd={}#distribution dictionary
def SB(data,Vl,point,weights,skipNbr=9999):
'''
    data: the (node label, weight) table; Vl: the label of the node just confirmed; point: points still unspent before this node;
    weights: the weight gained from taking this node; the comment string returned records the path of nodes taken.
    skipNbr: if weights equals skipNbr (default 9999), the node is treated as weight 0 and costs no point.
    return: (total_weight, path_text, waste_flag); when waste_flag is 1 the caller memoizes the result in BM/BMt to speed up the search.
    Example top-level call: SB(data, '', 60, 9999)
'''
global BM, BMt
if point>=4:
if (Vl,point) in list(BM.keys()):
return BM[(Vl,point)],BMt[(Vl,point)], 0
#print(weights)
if point==0: return 0, '', 0
point -= 1
if weights==skipNbr:#skip this plot
point += 1
weights = 0
Vls=data[:,0]#verify letter list
PSBL=[]#search branch list prepare (for distribution)
SBL=[]#search branch list
for i in range(len(Vls)):
iV=Vls[i][:-1]#to compare
if iV==Vl:#record weight distribution
#iweights=
#SBL.append(SB(data,Vls[i],point,weights+float(data[i,1]),co+','+Vls[i]))
PSBL.append(i)
#print(Vl,PSBL)
if len(PSBL)==0:#no subplot
        return weights, ', '+Vl, 0#waste number (0 here) occurs when an unnecessary SB() call occurs
elif len(PSBL)==1:#have 1 subplot
#print(SB(data,Vls[PSBL[0]],point,float(data[PSBL[0],1])))
[PSBLw, PSBLco, ifw]=SB(data,Vls[PSBL[0]],point,float(data[PSBL[0],1]))#if waste
if ifw:
if point>=3:
BM[(Vls[PSBL[0]],point)]=PSBLw
BMt[(Vls[PSBL[0]],point)]=PSBLco
return weights+PSBLw, ', '+Vl+PSBLco, 0
else:#start distribution
if point==0: return weights, ', '+Vl,0#can not distribute
if point==1:#simple distribute
for i2 in PSBL:
#print('i2')
[i2a, i2b, i2c]=SB(data,Vls[i2],1,float(data[i2,1]))
#if i2c:
# if point>=9:
# BM[Vls[i2]]=i2a
# BMt[Vls[i2]]=i2b one point and multiple branch is not waste
SBL.append([i2a, i2b])#deal with SBL with point>1
ifw=0
else:
#print(point,PSBL)
PL=DP(round(point),len(PSBL))#prepare the point list # this is the reason of waste
for thePL in PL:
thePLw=0
thePLco=''
for i4, i5 in zip(thePL, PSBL):
[i4a, i4b, i4c]=SB(data,Vls[i5],i4,float(data[i5,1]))
if i4c:
if point>=3:
BM[(Vls[i5],i4)]=i4a
BMt[(Vls[i5],i4)]=i4b
thePLw += i4a
thePLco = thePLco + i4b
#print('i4')
SBL.append([thePLw, thePLco])
ifw=1
#print(SBL)
Cmax=max([choose[0] for choose in SBL])#check highest weight
for i3 in SBL:
if i3[0]==Cmax:
i3a=weights+Cmax
i3b=', '+Vl+i3[1]
return i3a, i3b, 1
#SB(data,'1',11,0.004200672)
BM={}#branch memory weight
BMt={}# 's text
result=SB(data,'',60,9999)
#result=SB(data,'',60,9999)
print(result)
#or
#print(result.replace(' 1',' ').replace(' 2',' ').replace(' 3',' '))#if uses collapse
#BM
###Output
_____no_output_____
###Markdown
My HoMa:(3.4826342250000004, ', , 1, 11, 112, 1122, 11220, 12, 120, 1200, 12000, 120000, 1200000, 13, 130, 1300, 3, 31, 310, 3100, 31000, 310000, 3100000, 31000000, 310000000, 32, 321, 3212, 32120, 321200, 3212000, 322, 3222, 32220, 322200, 3222000, 32220000, 322200000, 3222000000, 32220000000, 2, 21, 212, 2120, 21200, 212000, 2120000, 21200000, 22, 222, 2220, 22200, 222000, 2220000, 22200000, 222000000, 221, 2210, 22100, 221000, 2210000, 22100000', 1)My Kyouko:(1.9911924290000003, ', , 1, 11, 112, 1121, 11210, 112100, 1121000, 113, 1131, 11310, 113100, 1131000, 11310000, 1132, 11320, 113200, 1132000, 111, 1110, 12, 120, 1200, 12000, 120000, 1200000, 2, 22, 221, 2210, 22100, 21, 212, 2121, 2122, 21222, 211, 2110, 3, 31, 310, 3100, 31000, 32, 320, 3200, 32000, 320000, 33, 331, 3310, 33100, 331000, 3310000, 332, 3322, 3321, 33211, 332110, 3321100, 33212', 1)My Kaede:(1.9040555929999998, ', , 1, 11, 112, 1120, 11201, 112010, 1120100, 11201000, 11202, 112020, 1120200, 11202000, 112020000, 111, 1111, 11110, 111100, 1111000, 11110000, 12, 120, 1200, 12000, 120000, 1200000, 12000000, 120000000, 13, 130, 1300, 13000, 2, 22, 221, 2211, 22110, 221100, 2211000, 22110000, 221100000, 222, 2220, 22201, 222010, 2220100, 22201000, 22202, 222020, 2220200, 22202000, 23, 230, 2300, 23000, 230000, 2300000, 23000000, 230000000, 21, 210', 1)My HomuraG3:(3.254315663,6, 62, 620, 6200, 62000, 620000, 6200000, 61, 610, 6100, 61000, 610000,1, 10, 103, 1030, 10300,4, 40, 401, 4010, 403, 4030, 40300, 403000, 402, 4020,5, 51, 511, 5110, 51100, 511000, 5110000, 52, 520, 5201, 52010, 520100, 5201000, 52010000, 5203, 52030, 520300, 5203000',2, 23, 230, 2300, 23001, 230012, 2300120, 1) My Sayaka:(3.080416398, ', , 1, 11, 112, 1120, 11200, 112000, 1120000, 111, 1110, 11100, 111000, 1110000, 11100000, 111000000, 12, 121, 1210, 12100, 121000, 1210000, 122, 1220, 12200, 122000, 2, 22, 21, 211, 2111, 21110, 211100, 2111000, 21110000, 2112, 21120, 212, 2120, 21200, 212000, 2120000, 21200000, 3, 31, 310, 3100, 32, 322, 3220, 32200, 322000, 3220000, 4, 42, 422, 41, 410, 4100, 41000, 410000, 4100000', 1) uncertained: (from old version)My Homura Glass2: 4, 4, 8, 8, 10, 71 (0.063097892, ', , 1, 10, 101, 1010, 10100, 102')2 (0.016822634, ', , 2, 23, 230, 2300, 23001, 230012, 2300120')3 (0.049430203, ', , 3, 31, 310, 3100')4 (0.056259655, ', , 4, 40, 403, 4030, 40300, 403000, 402, 4020')5 (0.108498016, ', , 5, 51, 512, 5120, 511, 5110, 51100, 511000, 5110000, 52, 520, 5201, 5202, 52020, 520200, 5202000')6 (0.103726234, ', , 6, 61, 610, 6100, 61000, 610000, 6100000, 62, 620, 6200, 62000, 620000')My Homura Glass: 3, 6, 3, 8, 10, 121 0.067570727, ', , 1, 10, 102'2 (0.045710023, ', , 2, 23, 230, 2300, 23001, 230012, 2300120, 23002, 230020, 2300200, 23002000')3 (0.04688377, ', , 3, 31, 310')4 (0.073082289, ', , 4, 40, 403, 4030, 40300, 403000, 402, 4020, 40200, 402000')5 (0.09554173600000002, ', , 5, 51, 511, 5110, 51100, 511000, 5110000, 52, 520, 5201, 5202, 52020, 520200, 5202000')6 (0.14263226, ', , 6, 61, 610, 6100, 61000, 610000, 6100000, 62, 620, 6200, 62000, 620000') (active all)My Iroha: 9, 8, 3, 9, 9, 71 (0.070781878, ', , 1, 10, 101, 1010, 10100, 101000, 1010000, 102, 1020')2 (0.059969647,', , 2, 21, 210, 2101, 21010, 23, 230, 2300, 23002, 230020, 2300200, 23002000')3 (0.022994264, ', , 3, 31, 310')4 (0.064607266, ', , 4, 40, 403, 4030, 40300, 403000, 402, 4020, 40200, 402000, 4020000') (active all)5 (1.062347081, ', , 5, 52, 520, 5202, 52020, 520200, 5202000, 51, 512, 5120, 511, 5110, 51100, 511000, 5110000')6 (0.046014549, ', , 
6, 61, 610, 6100, 61000, 610000, 6100000') Use below to do some random and statistic: 4 Branches and 20 points, need 4 min.
###Code
Faster=np.array([range(1,5),[25, 25, 25, 25]])
Dic={}
for i in Faster[0]:
Dic[i]=[]
data2=[]
for i3 in data:
if i3[0][0]==str(i):
data2.append(i3)
for i2 in range(1,Faster[1][i-1]+1):
t=time.time()
Dic[i].append(SB(np.array(data2),'',i2,9999))
print('Dictionary of branch '+str(i)+' in point '+str(i2)+' finishes uses '+str(time.time()-t)+' sec.\n')
Dic
Faster[1]
BG=np.array([sum(Faster[1][:i]) for i in range(4)])#background
BG
for i in range(1,5):
Dic[i].append([0])#a little skill
def R1():#random 1
a=random.random()*100#the number here is the last of BG + the last of faster[1]
return sum((a-BG)>0)
def R2():
a=[]
    for i in range(60):#the number here is the points available excluding the 9999 marker (total minus 9999)
a.append(R1())
a=np.array(a)
if sum(a==1)>25 or sum(a==2)>25 or sum(a==3)>25 or sum(a==4)>25:# or sum(a==5)>16 or sum(a==6)>12:
return -9999,[1,1,1,1]
CL=[sum(a==1), sum(a==2), sum(a==3),sum(a==4)]#check list
W=0
for i2 in range(1,5):
W += Dic[i2][CL[i2-1]-1][0]
return W,CL
###Output
_____no_output_____
###Markdown
This takes about 15 minutes.
###Code
%timeit for i in range(100): R2()
WL=[]
CLL=[]
for i in range(700000):
a,b=R2()
WL.append(a)
CLL.append(b)
max(WL)
WL=np.array(WL)
CLL=np.array(CLL)
CLL[WL==max(WL)]
len(WL)
WL2=WL[WL!=-9999]
len(DP(60,3))
import matplotlib.pyplot as plt
plt.plot(np.sort(WL2,-1)[30:])
plt.title('Homura Glass: Mental Strength Efficient')
plt.xlabel('Random Times')
plt.ylabel('Weight')
a=', , 1, 16, 162, 1620, 16200, 162000, 1620000, 16200000, 161, 1610, 16100, 161000, 1610000, 2, 21, 210, 2103, 21030, 210300, 24, 240, 2401, 24010, 2403, 24030, 240300, 2403000, 2402, 24020, 3, 32, 323, 3230, 32300, 323001, 3230012, 32300120, 35, 351, 3511, 35110, 351100, 3511000, 35110000, 352, 3520, 35201, 352010, 3520100, 35201000, 352010000, 35203, 352030, 3520300, 35203000'
np.save('mgBMt.txt',BMt)
a.replace(' 1',' ').replace(' 2',' ').replace(' 3',' ')
###Output
_____no_output_____ |
COVID19-Cuba.ipynb | ###Markdown
Age distribution
###Code
plt.figure(figsize=(10,6))
plt.title("Distribución de edad de los diagnosticados")
sns.kdeplot(data=data['edad'], shade=True).set(xlim=(0))
plt.show()
###Output
_____no_output_____
###Markdown
Gender distribution
###Code
plt.figure(figsize=(10, 6))
plt.title('Género')
data.sexo.value_counts().plot.bar();
###Output
_____no_output_____
###Markdown
Age and gender distribution
###Code
male_age = data[data.sexo=='hombre']
female_age = data[data.sexo=='mujer']
plt.figure(figsize=(10,6))
plt.title("Distribución por edad y género")
sns.kdeplot(data=female_age['edad'], label="Mujer", shade=True).set(xlim=(0))
sns.kdeplot(data=male_age['edad'],label="Hombre", shade=True).set(xlim=(0))
plt.show()
###Output
_____no_output_____
###Markdown
Number of confirmed cases and of people flagged as at risk
###Code
cant_diagnosticados = []
cant_riesgo = []
for k in range(1, len(data_json['casos']['dias'].keys())+1):
try:
cant_diagnosticados.append(len(data_json['casos']['dias'][str(k)]['diagnosticados']))
except:
cant_diagnosticados.append(0)
try:
cant_riesgo.append(data_json['casos']['dias'][str(k)]['sujetos_riesgo'])
except:
cant_riesgo.append(0)
# At-risk cases
plt.figure(figsize=(20, 6))
plt.bar([str(i) for i in range(1,len(cant_riesgo)+1)], cant_riesgo,label="Casos de riesgo")
plt.xlabel('Day')
plt.ylabel("Count")
plt.legend(frameon=True, fontsize=12)
plt.title('Casos de riesgo',fontsize = 35)
plt.show()
# Confirmed cases
plt.figure(figsize=(20, 6))
plt.bar([str(i) for i in range(1,len(cant_diagnosticados)+1)], cant_diagnosticados,label="Casos confirmados", color='red')
plt.xlabel('Day')
plt.ylabel("Count")
plt.legend(frameon=True, fontsize=12)
plt.title('Casos confirmados',fontsize = 35)
plt.show()
# Plot Compare
_, (ax1, ax2) = plt.subplots(1, 2, figsize=(20, 6))
ax1.plot([str(i) for i in range(1,len(cant_riesgo)+1)], cant_riesgo, zorder=1,color="blue")
ax2.plot([str(i) for i in range(1,len(cant_diagnosticados)+1)], cant_diagnosticados, zorder=1,color="red")
plt.show()
###Output
_____no_output_____
###Markdown
Cases by province
###Code
from collections import defaultdict
locations = defaultdict(int)
for l in data.provincia_detección:
locations[l] += 1
# Detected cases by province
plt.figure(figsize=(20, 6))
plt.bar([str(l) for l in locations], [locations[l] for l in locations], color='orange')
plt.xlabel('Province')
plt.ylabel("Count")
plt.title('Casos detectados por provincias',fontsize = 35)
plt.show()
###Output
_____no_output_____
###Markdown
Type of contagion
###Code
contagio = defaultdict(int)
for c in data.contagio:
contagio[c] += 1
plt.figure(figsize=(10, 6))
plt.bar([str(c) for c in contagio], [contagio[c] for c in contagio])
plt.title('Tipos de contagios',fontsize = 35)
plt.show()
procedencia = defaultdict(int)
for pl in data.posible_procedencia_contagio:
for p in pl:
procedencia[p] += 1
plt.figure(figsize=(20,6))
plt.bar([str(p) for p in procedencia], [procedencia[p] for p in procedencia])
plt.title('Posible procedencia de contagio',fontsize = 35)
plt.show()
###Output
_____no_output_____
###Markdown
Detected cases (cumulative frequency)
###Code
acDetectados = []
for i, c in enumerate(cant_diagnosticados):
if i == 0:
acDetectados.append(c)
else:
acDetectados.append(c+acDetectados[-1])
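# Equivalent one-liner (hypothetical alternative, assuming numpy is imported as np):
# acDetectados = list(np.cumsum(cant_diagnosticados))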
_, (ax1, ax2) = plt.subplots(1, 2, figsize=(20,6))
ax1.bar([str(i) for i in range(1, len(acDetectados)+1)],acDetectados)
ax2.plot([str(i) for i in range(1, len(acDetectados)+1)],acDetectados)
plt.show()
###Output
_____no_output_____
###Markdown
At-risk cases (cumulative frequency)
###Code
acRiesgo = []
for i, c in enumerate(cant_riesgo):
if i == 0:
acRiesgo.append(c)
else:
acRiesgo.append(c+acRiesgo[-1])
_, (ax1, ax2) = plt.subplots(1, 2, figsize=(20,6))
ax1.bar([str(i) for i in range(1, len(acRiesgo)+1)],acRiesgo)
ax2.plot([str(i) for i in range(1, len(acRiesgo)+1)],acRiesgo)
plt.show()
###Output
_____no_output_____
###Markdown
Tests performed vs. detected cases (from day 12 onward)
###Code
cant_tests = []
for k in range(12, len(data_json['casos']['dias'].keys())+1):
cant_tests.append(data_json['casos']['dias'][str(k)]['tests_total'])
prop_test_vs_detected = []
detected_acc = []
for i, c in enumerate(cant_tests):
detected_acc.append(sum(cant_diagnosticados[:11+i]))
prop_test_vs_detected.append(round(detected_acc[-1] / c, 2))
_, (ax1, ax2) = plt.subplots(1, 2, figsize=(20,6))
# Tests performed
ax1.bar([str(k) for k in range(12, len(data_json['casos']['dias'].keys())+1)], cant_tests)
ax1.bar([str(k) for k in range(12, len(data_json['casos']['dias'].keys())+1)], detected_acc)
# Ratio of confirmed cases to tests performed
ax2.bar([str(k) for k in range(12, len(data_json['casos']['dias'].keys())+1)], prop_test_vs_detected)
plt.title('Test acumulados por días && Proporción Detectados/Test')
plt.show()
###Output
_____no_output_____ |
Fraucheck.ipynb | ###Markdown
**Importing libraries**
###Code
import pandas as pd
import numpy as np
#data loading
df=pd.read_csv("C:\\Users\\ACER\\Downloads\\Fraud_check(2).csv")
df.head()
#checking null values
df.info()
df.isnull().sum()
#convert categorical data into numeric
from sklearn.preprocessing import LabelEncoder
Encoder=LabelEncoder()
df['Undergrad']=Encoder.fit_transform(df["Undergrad"])
df['Marital.Status']=Encoder.fit_transform(df["Marital.Status"])
df['Urban']=Encoder.fit_transform(df["Urban"])
df1=df.copy()
df1.head(2)
###Output
_____no_output_____
###Markdown
Treating those with taxable_income <= 30000 as "Risky" and the others as "Good" (0 = Good, 1 = Risky)
###Code
#creating target variable in 0s & 1s 0=GOOD 1=Risky
df1['Taxable.Income']=(df1['Taxable.Income']<=30000)*1
df1['Taxable.Income'].unique()
df1.head()
#dividing data into x & y variable
x=df.iloc[:,[0,1,3,4,5]]
y=df1['Taxable.Income']
#spliting data into training and testing
from sklearn.model_selection import train_test_split
x_train,x_test,y_train,y_test=train_test_split(x,y,test_size=0.3,random_state=23)
###Output
_____no_output_____
###Markdown
**Building a Decision Tree Classifier using the entropy criterion (C5.0)**
###Code
from sklearn.tree import DecisionTreeClassifier
model=DecisionTreeClassifier(criterion="entropy",max_depth=3)
model.fit(x_train,y_train)
#plot the decision tree
from sklearn import tree
import matplotlib.pyplot as plt
tree.plot_tree(model);
model.feature_importances_
#important features are City.Population and Work.Experience
# predicting on test data set
preds = model.predict(x_test)
# getting the count of each category
pd.Series(preds).value_counts()
pd.crosstab(y_test,preds)
# Accuracy
np.mean(preds==y_test)*100
###Output
_____no_output_____
###Markdown
With max_depth=3, the C5.0 (entropy) decision tree model gives an accuracy of 81.11%; it correctly predicts Good for 145 out of 177 cases and Risky for 1 out of 3. Building a Decision Tree Classifier (CART) using the Gini criterion
###Code
model_gini=DecisionTreeClassifier(criterion="gini",max_depth=2)
model_gini.fit(x_train,y_train)
pred=model_gini.predict(x_test)
pred
# Accuracy
np.mean(pred==y_test)
pd.crosstab(y_test,pred)
###Output
_____no_output_____ |
Machine_Learning/Weighted Least Squares Notebook.ipynb | ###Markdown
Weighted Least Squares By: Jonathan JohannemannBelow is just a notebook of me taking a look at weighted least squares. I've decided to combine commentary from the actual notebook with my own explanation for what weighted least squares achieves.
###Code
import numpy as np
from scipy import stats
import statsmodels.api as sm
import matplotlib.pyplot as plt
from IPython.display import Image
from statsmodels.sandbox.regression.predstd import wls_prediction_std
from statsmodels.iolib.table import (SimpleTable, default_txt_fmt)
np.random.seed(1024)
%matplotlib inline
###Output
_____no_output_____
###Markdown
In the statsmodels notebook, they don't really talk too much about what Weighted Least Squares actually is. Therefore, I decided to take some of the things I learned from reading https://onlinecourses.science.psu.edu/stat501/node/352. First, let's take a look at how WLS differs from OLS
###Code
Image(filename='Pictures/ols_formula.png')
###Output
_____no_output_____
###Markdown
Above we see the equation for OLS. Following this, we want to find a beta as shown below.
###Code
Image(filename='Pictures/beta_formula.png')
###Output
_____no_output_____
###Markdown
Okay, now we are going to consider the error term e. What weighted least squares basically says is that observations with lower error variance carry more information than observations with higher error variance. When the variance of the linear regression residuals is not constant, this phenomenon is known as heteroscedasticity. For the error term, we still assume that the errors are normally distributed with mean vector 0, but now we have a nonconstant variance-covariance matrix. This is shown below.
###Code
Image(filename='Pictures/nonconstant_var_cov_mat.png')
###Output
_____no_output_____
###Markdown
The next part is the essence of weighted least squares.
###Code
Image(filename='Pictures/w_mat.png')
###Output
_____no_output_____
###Markdown
The w values shown above are equal to 1/[variance value in that column in the matrix above].
###Code
Image(filename='Pictures/wls.png')
###Output
_____no_output_____
###Markdown
As shown above, weighted least squares uses the inverse of the variance to weight values. We can see that weighted least squares simply incorporates the variance of the corresponding error values.The observations below now follow and this is taken directly from Penn Stat 501:* Since each weight is inversely proportional to the error variance, it reflects the information in that observation. So, an observation with small error variance has a large weight since it contains relatively more information than an observation with large error variance (small weight).* The weights have to be known (or more usually estimated) up to a proportionality constant. WLS Estimation Artificial data: Heteroscedasticity 2 groups Model assumptions: * Misspecification: true model is quadratic, estimate only linear * Independent noise/error term * Two groups for error variance, low and high variance groups
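Before turning to the statsmodels example, here is a minimal sketch of the closed-form WLS estimator $(X^T W X)^{-1} X^T W y$ with a diagonal weight matrix. This is my own illustration rather than part of the statsmodels notebook, and every variable name in it is made up for the example.
###Code
import numpy as np
# Minimal closed-form WLS sketch (illustrative only): weights are 1 / error variance
rng = np.random.RandomState(0)
X_demo = np.column_stack([np.ones(100), np.linspace(0, 10, 100)])
sigma2 = np.where(np.arange(100) < 60, 0.25, 2.25)  # two error-variance groups
y_demo = X_demo.dot(np.array([1.0, 2.0])) + rng.normal(scale=np.sqrt(sigma2))
W = np.diag(1.0 / sigma2)
beta_wls = np.linalg.solve(X_demo.T.dot(W).dot(X_demo), X_demo.T.dot(W).dot(y_demo))
print(beta_wls)  # should land close to the true coefficients [1.0, 2.0]
###Output
_____no_output_____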
###Code
nsample = 50
x = np.linspace(0, 20, nsample)
X = np.column_stack((x, (x - 5)**2))
X = sm.add_constant(X)
beta = [5., 0.5, -0.01]
sig = 0.5
w = np.ones(nsample)
w[nsample * 6//10:] = 3
y_true = np.dot(X, beta)
e = np.random.normal(size=nsample)
y = y_true + sig * w * e
X = X[:,[0,1]]
###Output
_____no_output_____
###Markdown
So the guys at statsmodels decided to introduce a situation where WLS is better than OLS. We can see that the real function is not linear and the error terms get larger towards the end.Okay, let's start with OLS.
###Code
res_ols = sm.OLS(y, X).fit()
print(res_ols.summary())
###Output
OLS Regression Results
==============================================================================
Dep. Variable: y R-squared: 0.879
Model: OLS Adj. R-squared: 0.876
Method: Least Squares F-statistic: 347.7
Date: Sun, 25 Dec 2016 Prob (F-statistic): 1.25e-23
Time: 22:12:36 Log-Likelihood: -68.470
No. Observations: 50 AIC: 140.9
Df Residuals: 48 BIC: 144.8
Df Model: 1
Covariance Type: nonrobust
==============================================================================
coef std err t P>|t| [95.0% Conf. Int.]
------------------------------------------------------------------------------
const 5.2426 0.271 19.370 0.000 4.698 5.787
x1 0.4349 0.023 18.647 0.000 0.388 0.482
==============================================================================
Omnibus: 10.697 Durbin-Watson: 2.200
Prob(Omnibus): 0.005 Jarque-Bera (JB): 29.315
Skew: 0.153 Prob(JB): 4.31e-07
Kurtosis: 6.739 Cond. No. 23.0
==============================================================================
Warnings:
[1] Standard Errors assume that the covariance matrix of the errors is correctly specified.
###Markdown
We can see that the x1 variable has a significant t value and seems to do a fairly good job with a good Adj. R-squared value of 0.876. Now let's see how WLS compares to OLS when WLS knows the true variance ratio of heteroscedasticity.
###Code
mod_wls = sm.WLS(y, X, weights=1./w)
res_wls = mod_wls.fit()
print(res_wls.summary())
###Output
WLS Regression Results
==============================================================================
Dep. Variable: y R-squared: 0.910
Model: WLS Adj. R-squared: 0.909
Method: Least Squares F-statistic: 487.9
Date: Sun, 25 Dec 2016 Prob (F-statistic): 8.52e-27
Time: 22:12:37 Log-Likelihood: -57.048
No. Observations: 50 AIC: 118.1
Df Residuals: 48 BIC: 121.9
Df Model: 1
Covariance Type: nonrobust
==============================================================================
coef std err t P>|t| [95.0% Conf. Int.]
------------------------------------------------------------------------------
const 5.2726 0.185 28.488 0.000 4.900 5.645
x1 0.4379 0.020 22.088 0.000 0.398 0.478
==============================================================================
Omnibus: 5.040 Durbin-Watson: 2.242
Prob(Omnibus): 0.080 Jarque-Bera (JB): 6.431
Skew: 0.024 Prob(JB): 0.0401
Kurtosis: 4.756 Cond. No. 17.0
==============================================================================
Warnings:
[1] Standard Errors assume that the covariance matrix of the errors is correctly specified.
###Markdown
We can clearly see a difference between the two models. The t statistic for x1 is more significant with WLS and the Adj. R-squared is even higher at 0.909.
###Code
print(res_ols.params)
print(res_wls.params)
###Output
[ 5.24256099 0.43486879]
[ 5.27260714 0.43794441]
###Markdown
As a quick model comparison, we can see that WLS does not change the model too dramatically in this example. But, it is clear that the WLS model is pushing more emphasis to the x1 value as opposed to the intercept which can be observed in the OLS model.
###Code
covb = res_ols.cov_params()
prediction_var = res_ols.mse_resid + (X * np.dot(covb,X.T).T).sum(1)
prediction_std = np.sqrt(prediction_var)
tppf = stats.t.ppf(0.975, res_ols.df_resid)
prstd_ols, iv_l_ols, iv_u_ols = wls_prediction_std(res_ols)
prstd, iv_l, iv_u = wls_prediction_std(res_wls)
fig, ax = plt.subplots(figsize=(8,6))
ax.plot(x, y, 'o', label="Data")
ax.plot(x, y_true, 'b-', label="True")
# OLS
ax.plot(x, res_ols.fittedvalues, 'r--')
ax.plot(x, iv_u_ols, 'r--', label="OLS")
ax.plot(x, iv_l_ols, 'r--')
# WLS
ax.plot(x, res_wls.fittedvalues, 'g--.')
ax.plot(x, iv_u, 'g--', label="WLS")
ax.plot(x, iv_l, 'g--')
ax.legend(loc="best");
###Output
_____no_output_____
###Markdown
Above, we can see how the confidence intervals change with WLS once we hit the portion of data that has a higher variance. WLS posits that the first segment of data points offers more information than the latter. However, when we know that the assumption of homoscedasticity in OLS has been violated, we need to make sure to double check WLS when estimates are being made. Clearly, we have a higher confidence in our predictions in the first segment as opposed to the later segment that has more variance. What is Feasible Weighted Least Squares? (2-stage FWLS) Instead of minimizing with respect to a known variance-covariance matrix W whose off-diagonal values are 0, Feasible Weighted Least Squares — closely related to Generalized Least Squares (GLS), from what I've seen so far — first estimates the weights from the data and then optimizes with respect to the corresponding Mahalanobis distance. The reason that GLS is often helpful is because it is very difficult to come up with the W matrix in practice.
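As a hedged sketch of the two-stage idea (my own illustration, reusing the y, X, and w defined earlier in this notebook, and not the code the notebook itself uses next): fit OLS first, estimate the two group variances from its residuals, and then refit with those estimated weights. statsmodels also exposes `sm.GLS(y, X, sigma=...)` for the case where a full covariance matrix is available.
###Code
# Illustrative 2-stage FWLS sketch (not the notebook's own code below)
ols_fit = sm.OLS(y, X).fit()
est_var = np.where(w == 1., ols_fit.resid[w == 1.].var(), ols_fit.resid[w != 1.].var())
fwls_fit = sm.WLS(y, X, weights=1. / est_var).fit()
print(fwls_fit.params)
###Output
_____no_output_____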
###Code
resid1 = res_ols.resid[w==1.]
var1 = resid1.var(ddof=int(res_ols.df_model)+1)
resid2 = res_ols.resid[w!=1.]
var2 = resid2.var(ddof=int(res_ols.df_model)+1)
w_est = w.copy()
w_est[w!=1.] = np.sqrt(var2) / np.sqrt(var1)
res_fwls = sm.WLS(y, X, 1./w_est).fit()
print(res_fwls.summary())
###Output
WLS Regression Results
==============================================================================
Dep. Variable: y R-squared: 0.914
Model: WLS Adj. R-squared: 0.912
Method: Least Squares F-statistic: 507.1
Date: Sun, 25 Dec 2016 Prob (F-statistic): 3.65e-27
Time: 22:21:17 Log-Likelihood: -55.777
No. Observations: 50 AIC: 115.6
Df Residuals: 48 BIC: 119.4
Df Model: 1
Covariance Type: nonrobust
==============================================================================
coef std err t P>|t| [95.0% Conf. Int.]
------------------------------------------------------------------------------
const 5.2710 0.177 29.828 0.000 4.916 5.626
x1 0.4390 0.019 22.520 0.000 0.400 0.478
==============================================================================
Omnibus: 4.076 Durbin-Watson: 2.251
Prob(Omnibus): 0.130 Jarque-Bera (JB): 4.336
Skew: 0.003 Prob(JB): 0.114
Kurtosis: 4.443 Cond. No. 16.5
==============================================================================
Warnings:
[1] Standard Errors assume that the covariance matrix of the errors is correctly specified.
###Markdown
As we can see, the performance is even better than that of ordinary WLS (the scenario with known weights, which is difficult to realize in practice). The Feasible WLS model provides an Adj. R-squared of 0.912, which is the best so far. Of course, these are just preliminary model evaluations and would require cross-validation sets; but it does appear that Feasible WLS does the best out of these three models. So, what does the beta look like? (Assuming that we are right that this is GLS) Like this:
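For reference, the standard GLS estimator (which is presumably what the figure below shows) is

$$\hat{\beta}_{GLS} = (X^\top \Sigma^{-1} X)^{-1} X^\top \Sigma^{-1} y,$$

which reduces to the WLS estimator when $\Sigma$ is diagonal.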
###Code
Image(filename="Pictures/gls.png")
###Output
_____no_output_____ |
transit_lc.ipynb | ###Markdown
https://nssdc.gsfc.nasa.gov/planetary/factsheet/
http://www.met.rdg.ac.uk/~ross/Astronomy/Planets.html
https://www.princeton.edu/~willman/planetary_systems/Sol/
###Code
# Imports assumed from usage below (batman-package, astropy, numpy, matplotlib);
# adjust if this notebook defines them elsewhere.
import numpy as np
import matplotlib.pyplot as plt
import astropy.units as u
from astropy.constants import R_sun
from astropy.time import Time
from astropy.coordinates import get_body, HeliocentricTrueEcliptic
from batman import TransitParams

observer_inclination = -0.1
u_ld = [0.24, 0.36]
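# batman-style TransitParams for each planet (units assumed to follow the batman convention:
# per and t0 in days, a and rp in units of the stellar radius, inc and w in degrees)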
mercury = TransitParams()
mercury.per = 0.3870993 * 365.25
mercury.a = float(0.3870993*u.AU/R_sun)
mercury.rp = float(4879/2*u.km/R_sun)
mercury.inc = 90 + observer_inclination
mercury.ecc = 0.205
mercury.w = 77.45645
mercury.limb_dark = 'quadratic'
mercury.u = u_ld
venus = TransitParams()
venus.per = 0.61519726 * 365.25
venus.a = float(0.723336*u.AU/R_sun)
venus.rp = float(12104/2*u.km/R_sun)
venus.inc = 90 + observer_inclination
venus.ecc = 0.00678
venus.w = 131.53298
venus.limb_dark = 'quadratic'
venus.u = u_ld
earth = TransitParams()
earth.per = 365.25
earth.a = float(1.000003*u.AU/R_sun)
earth.rp = float(12756/2*u.km/R_sun)
earth.inc = 90 + observer_inclination
earth.ecc = 0.01671
earth.w = 102.94719
earth.limb_dark = 'quadratic'
earth.u = u_ld
ref_time = Time('2000-01-01')
mercury_coord = get_body('mercury', ref_time).transform_to(HeliocentricTrueEcliptic)
venus_coord = get_body('venus', ref_time).transform_to(HeliocentricTrueEcliptic)
earth_coord = get_body('earth', ref_time).transform_to(HeliocentricTrueEcliptic)
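# Rough transit epoch (t0) estimate: the fraction of each orbit between the planet's and Earth's
# heliocentric ecliptic longitudes at the reference time, scaled by the orbital period.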
mercury.t0 = float((earth_coord.lon - mercury_coord.lon) / (2*np.pi * u.rad)) * mercury.per
venus.t0 = float((earth_coord.lon - venus_coord.lon) / (2*np.pi * u.rad)) * venus.per
earth.t0 = float((earth_coord.lon - earth_coord.lon) / (2*np.pi * u.rad)) * earth.per
mercury.t0
from fleck import Star, generate_spots
times = np.linspace(0, 4*365.25, 100000)
n_spots = 10
lons, lats, rads, inc_stellar = generate_spots(min_latitude=-5, max_latitude=-30,
spot_radius=0.03, n_spots=n_spots,
inclinations=np.array([90])*u.deg)
import sys
sys.path.insert(0, '/Users/bmmorris/git/shocksgo')
from shocksgo import generate_solar_fluxes
sun = Star(spot_contrast=0.7, u_ld=u_ld, rotation_period=26.2)
sun.plot(lons, lats, rads, inc_stellar, time=0, time_ref=0, planet=[mercury, venus, earth])
min_time = -100
max_time = min_time + 4*365.25
window = 1.5
duration = 1
for params, name in zip([mercury, venus, earth], 'Mercury Venus Earth'.split()):
next_100_transits = np.arange(100)*params.per + params.t0
midtransit_times = next_100_transits[next_100_transits < max_time]
fig, ax = plt.subplots(1, 2, figsize=(14, 4))
fig.suptitle(name)
for midtransit_time in midtransit_times:
t = np.arange(midtransit_time-window, midtransit_time+window, 45/60/60/24)
oot = (t < midtransit_time - duration) | (t > midtransit_time + duration)
transit = sun.light_curve(lons, lats, rads, inc_stellar[0], times=t, planet=params, time_ref=0, fast=True)[:, 0]
noise_t, noise_f, noise_kernel = generate_solar_fluxes((len(t) * 45/60/60/24)*u.day, cadence=45*u.s)
fit = np.polyval(np.polyfit(t[oot] - t.mean(), transit[oot], 3), t-t.mean())
ax[0].plot(t - t.mean(), transit/fit + noise_f)
plt.show()
###Output
_____no_output_____ |
docs/tutorials/cluster/MVKMeans/MultiviewKMeans_Tutorial.ipynb | ###Markdown
Multi-view KMeans
###Code
from mvlearn.datasets import load_UCImultifeature
from mvlearn.cluster import MultiviewKMeans
from sklearn.cluster import KMeans
import numpy as np
from sklearn.manifold import TSNE
from sklearn.metrics import normalized_mutual_info_score as nmi_score
import matplotlib.pyplot as plt
%matplotlib inline
import warnings
warnings.filterwarnings("ignore")
RANDOM_SEED=5
###Output
_____no_output_____
###Markdown
Load in UCI digits multiple feature data set as an example
###Code
# Load dataset along with labels for digits 0 through 4
n_class = 5
data, labels = load_UCImultifeature(select_labeled = list(range(n_class)))
# Just get the first two views of data
m_data = data[:2]
# Helper function to display data and the results of clustering
def display_plots(pre_title, data, labels):
# plot the views
plt.figure()
fig, ax = plt.subplots(1,2, figsize=(14,5))
dot_size=10
ax[0].scatter(data[0][:, 0], data[0][:, 1],c=labels,s=dot_size)
ax[0].set_title(pre_title + ' View 1')
ax[0].axes.get_xaxis().set_visible(False)
ax[0].axes.get_yaxis().set_visible(False)
ax[1].scatter(data[1][:, 0], data[1][:, 1],c=labels,s=dot_size)
ax[1].set_title(pre_title + ' View 2')
ax[1].axes.get_xaxis().set_visible(False)
ax[1].axes.get_yaxis().set_visible(False)
plt.show()
###Output
_____no_output_____
###Markdown
Single-view and multi-view clustering of the data with 2 viewsHere we will compare the performance of the Multi-view and Single-view versions of kmeans clustering. We will evaluate the purity of the resulting clusters from each algorithm with respect to the class labels using the normalized mutual information metric. As we can see, Multi-view clustering produces clusters with higher purity compared to those produced by clustering on just a single view or by clustering the two views concatenated together.
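As a quick illustration of the metric itself (a toy sketch, separate from the tutorial's data): NMI is 1 when two labelings describe the same partition, even if the label names differ, and close to 0 when the partitions are unrelated.
###Code
# Toy NMI check (illustrative only; the label arrays below are made up)
from sklearn.metrics import normalized_mutual_info_score as nmi_score
print(nmi_score([0, 0, 1, 1], [1, 1, 0, 0]))  # 1.0 - same partition, label names swapped
print(nmi_score([0, 0, 1, 1], [0, 1, 0, 1]))  # 0.0 - the two partitions are independent
###Output
_____no_output_____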
###Code
#################Single-view kmeans clustering#####################
# Cluster each view separately
s_kmeans = KMeans(n_clusters=n_class, random_state=RANDOM_SEED)
s_clusters_v1 = s_kmeans.fit_predict(m_data[0])
s_clusters_v2 = s_kmeans.fit_predict(m_data[1])
# Concatenate the multiple views into a single view
s_data = np.hstack(m_data)
s_clusters = s_kmeans.fit_predict(s_data)
# Compute nmi between true class labels and single-view cluster labels
s_nmi_v1 = nmi_score(labels, s_clusters_v1)
s_nmi_v2 = nmi_score(labels, s_clusters_v2)
s_nmi = nmi_score(labels, s_clusters)
print('Single-view View 1 NMI Score: {0:.3f}\n'.format(s_nmi_v1))
print('Single-view View 2 NMI Score: {0:.3f}\n'.format(s_nmi_v2))
print('Single-view Concatenated NMI Score: {0:.3f}\n'.format(s_nmi))
#################Multi-view kmeans clustering######################
# Use the MultiviewKMeans instance to cluster the data
m_kmeans = MultiviewKMeans(n_clusters=n_class, random_state=RANDOM_SEED)
m_clusters = m_kmeans.fit_predict(m_data)
# Compute nmi between true class labels and multi-view cluster labels
m_nmi = nmi_score(labels, m_clusters)
print('Multi-view NMI Score: {0:.3f}\n'.format(m_nmi))
###Output
Single-view View 1 NMI Score: 0.635
Single-view View 2 NMI Score: 0.746
Single-view Concatenated NMI Score: 0.746
Multi-view NMI Score: 0.770
###Markdown
Plot clusters produced by multi-view KMeans clustering and the true clusters. We will display the clustering results of the Multi-view KMeans clustering algorithm below, along with the true class labels.
###Code
# Running TSNE to display clustering results via low dimensional embedding
tsne = TSNE()
new_data_1 = tsne.fit_transform(m_data[0])
new_data_2 = tsne.fit_transform(m_data[1])
display_plots('Multi-view KMeans Clusters', m_data, m_clusters)
display_plots('True Labels', m_data, labels)
###Output
_____no_output_____
###Markdown
KMeans clustering with different parameters. Here we will again compare the performance of the Multi-view and Single-view versions of KMeans clustering on data with 2 views. We will follow a similar procedure as before, but we will be using a different configuration of parameters for Multi-view KMeans clustering. Again, we can see that Multi-view clustering produces clusters with higher purity compared to those produced by clustering on just a single view or by clustering the two views concatenated together.
###Code
#################Single-view kmeans clustering#####################
# Cluster each view separately
s_kmeans = KMeans(n_clusters=n_class, random_state=RANDOM_SEED)
s_clusters_v1 = s_kmeans.fit_predict(m_data[0])
s_clusters_v2 = s_kmeans.fit_predict(m_data[1])
# Concatenate the multiple views into a single view
s_data = np.hstack(m_data)
s_clusters = s_kmeans.fit_predict(s_data)
# Compute nmi between true class labels and single-view cluster labels
s_nmi_v1 = nmi_score(labels, s_clusters_v1)
s_nmi_v2 = nmi_score(labels, s_clusters_v2)
s_nmi = nmi_score(labels, s_clusters)
print('Single-view View 1 NMI Score: {0:.3f}\n'.format(s_nmi_v1))
print('Single-view View 2 NMI Score: {0:.3f}\n'.format(s_nmi_v2))
print('Single-view Concatenated NMI Score: {0:.3f}\n'.format(s_nmi))
#################Multi-view kmeans clustering######################
# Use the MultiviewKMeans instance to cluster the data
m_kmeans = MultiviewKMeans(n_clusters=n_class,
n_init=10, max_iter=6, patience=2, random_state=RANDOM_SEED)
m_clusters = m_kmeans.fit_predict(m_data)
# Compute nmi between true class labels and multi-view cluster labels
m_nmi = nmi_score(labels, m_clusters)
print('Multi-view NMI Score: {0:.3f}\n'.format(m_nmi))
###Output
Single-view View 1 NMI Score: 0.635
Single-view View 2 NMI Score: 0.746
Single-view Concatenated NMI Score: 0.746
Multi-view NMI Score: 0.747
###Markdown
Using the Multi-view KMeans Clustering Algorithm to Cluster Data with Multiple Views
###Code
from mvlearn.datasets.base import load_UCImultifeature
from mvlearn.cluster.mv_k_means import MultiviewKMeans
from sklearn.cluster import KMeans
import numpy as np
from sklearn.manifold import TSNE
from sklearn.metrics import normalized_mutual_info_score as nmi_score
import matplotlib.pyplot as plt
RANDOM_SEED=5
###Output
_____no_output_____
###Markdown
Load in UCI digits multiple feature dataset as an example
###Code
# Load dataset along with labels for digits 0 through 4
n_class = 5
data, labels = load_UCImultifeature(select_labeled = list(range(n_class)))
# Just get the first two views of data
m_data = data[:2]
###Output
_____no_output_____
###Markdown
Creating a function to display data and the results of clustering
###Code
def display_plots(pre_title, data, labels):
# plot the views
plt.figure()
fig, ax = plt.subplots(1,2, figsize=(14,5))
dot_size=10
ax[0].scatter(data[0][:, 0], data[0][:, 1],c=labels,s=dot_size)
ax[0].set_title(pre_title + ' View 1')
ax[0].axes.get_xaxis().set_visible(False)
ax[0].axes.get_yaxis().set_visible(False)
ax[1].scatter(data[1][:, 0], data[1][:, 1],c=labels,s=dot_size)
ax[1].set_title(pre_title + ' View 2')
ax[1].axes.get_xaxis().set_visible(False)
ax[1].axes.get_yaxis().set_visible(False)
plt.show()
###Output
_____no_output_____
###Markdown
Running Multi-view KMeans Clustering on the Data with 2 ViewsHere we will compare the performance of the Multi-view and Single-view versions of kmeans clustering. We will evaluate the purity of the resulting clusters from each algorithm with respect to the class labels using the normalized mutual information metric. As we can see, Multi-view clustering produces clusters with higher purity compared to those produced by clustering on just a single view or by clustering the two views concatenated together.
###Code
#################Single-view kmeans clustering#####################
# Cluster each view separately
s_kmeans = KMeans(n_clusters=n_class, random_state=RANDOM_SEED)
s_clusters_v1 = s_kmeans.fit_predict(m_data[0])
s_clusters_v2 = s_kmeans.fit_predict(m_data[1])
# Concatenate the multiple views into a single view
s_data = np.hstack(m_data)
s_clusters = s_kmeans.fit_predict(s_data)
# Compute nmi between true class labels and single-view cluster labels
s_nmi_v1 = nmi_score(labels, s_clusters_v1)
s_nmi_v2 = nmi_score(labels, s_clusters_v2)
s_nmi = nmi_score(labels, s_clusters)
print('Single-view View 1 NMI Score: {0:.3f}\n'.format(s_nmi_v1))
print('Single-view View 2 NMI Score: {0:.3f}\n'.format(s_nmi_v2))
print('Single-view Concatenated NMI Score: {0:.3f}\n'.format(s_nmi))
#################Multi-view kmeans clustering######################
# Use the MultiviewKMeans instance to cluster the data
m_kmeans = MultiviewKMeans(n_clusters=n_class, random_state=RANDOM_SEED)
m_clusters = m_kmeans.fit_predict(m_data)
# Compute nmi between true class labels and multi-view cluster labels
m_nmi = nmi_score(labels, m_clusters)
print('Multi-view NMI Score: {0:.3f}\n'.format(m_nmi))
###Output
Single-view View 1 NMI Score: 0.633
Single-view View 2 NMI Score: 0.753
Single-view Concatenated NMI Score: 0.753
Multi-view NMI Score: 0.778
###Markdown
Plots of clusters produced by multi-view KMeans clustering and the true clusters. We will display the clustering results of the Multi-view KMeans clustering algorithm below, along with the true class labels.
###Code
# Running TSNE to display clustering results via low dimensional embedding
tsne = TSNE()
new_data_1 = tsne.fit_transform(m_data[0])
new_data_2 = tsne.fit_transform(m_data[1])
display_plots('Multi-view KMeans Clusters', m_data, m_clusters)
display_plots('True Labels', m_data, labels)
###Output
_____no_output_____
###Markdown
Running Multi-view KMeans Clustering on the Data with Different Parameters. Here we will again compare the performance of the Multi-view and Single-view versions of KMeans clustering on data with 2 views. We will follow a similar procedure as before, but we will be using a different configuration of parameters for Multi-view KMeans clustering. Again, we can see that Multi-view clustering produces clusters with higher purity compared to those produced by clustering on just a single view or by clustering the two views concatenated together.
###Code
#################Single-view kmeans clustering#####################
# Cluster each view separately
s_kmeans = KMeans(n_clusters=n_class, random_state=RANDOM_SEED)
s_clusters_v1 = s_kmeans.fit_predict(m_data[0])
s_clusters_v2 = s_kmeans.fit_predict(m_data[1])
# Concatenate the multiple views into a single view
s_data = np.hstack(m_data)
s_clusters = s_kmeans.fit_predict(s_data)
# Compute nmi between true class labels and single-view cluster labels
s_nmi_v1 = nmi_score(labels, s_clusters_v1)
s_nmi_v2 = nmi_score(labels, s_clusters_v2)
s_nmi = nmi_score(labels, s_clusters)
print('Single-view View 1 NMI Score: {0:.3f}\n'.format(s_nmi_v1))
print('Single-view View 2 NMI Score: {0:.3f}\n'.format(s_nmi_v2))
print('Single-view Concatenated NMI Score: {0:.3f}\n'.format(s_nmi))
#################Multi-view kmeans clustering######################
# Use the MultiviewKMeans instance to cluster the data
m_kmeans = MultiviewKMeans(n_clusters=n_class,
n_init=10, max_iter=6, patience=2, random_state=RANDOM_SEED)
m_clusters = m_kmeans.fit_predict(m_data)
# Compute nmi between true class labels and multi-view cluster labels
m_nmi = nmi_score(labels, m_clusters)
print('Multi-view NMI Score: {0:.3f}\n'.format(m_nmi))
###Output
Single-view View 1 NMI Score: 0.633
Single-view View 2 NMI Score: 0.753
Single-view Concatenated NMI Score: 0.753
Multi-view NMI Score: 0.778
|
scripts/.ipynb_checkpoints/results-checkpoint.ipynb | ###Markdown
Real-world results occupancy on-street (NPR)
###Code
# Imports assumed from usage in this notebook (pandas, numpy, matplotlib)
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
from matplotlib.pyplot import figure

filename = 'NPR_PRC_Stacked_Occupation.csv'
folder = r'/Users/miloubisseling/Documents/UvA/DataScience/Thesis/datascience-thesis/data/npr/'
path = folder + filename
npr_occ = pd.read_csv(path,sep=';')
npr_occ['B_TYD_V_RECHT'] = pd.to_datetime(npr_occ['B_TYD_V_RECHT'])
npr_occ['E_TYD_V_RECHT'] = pd.to_datetime(npr_occ['E_TYD_V_RECHT'])
npr_occ = npr_occ.loc[npr_occ['buurtcode'] == 'K24c']
hours = []
for i in range(23+1):
hours.append('hour_'+str(i))
hour_columns = hours
npr_occ[hours] = npr_occ[hours].astype(int)
npr_occ['hour_17'].max()
npr_fhbuurt = npr_occ
npr_fhbuurt.head()
dates = (npr_fhbuurt['B_TYD_V_RECHT'] > '2018-01-01')
npr_fhbuurt = npr_fhbuurt.loc[dates]
may = ((npr_fhbuurt['B_TYD_V_RECHT'] > '2018-03-01') & (npr_fhbuurt['B_TYD_V_RECHT'] < '2018-05-15'))
npr_fhbuurt_may = npr_fhbuurt.loc[may]
npr_fhbuurt_may['day_of_week'] = npr_fhbuurt_may['B_TYD_V_RECHT'].dt.dayofweek
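# pandas dt.dayofweek runs Monday=0 ... Sunday=6, so isin([1, 2, 3, 4, 5]) keeps Tuesday-Saturday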
npr_fhbuurt_may = npr_fhbuurt_may.loc[npr_fhbuurt_may['day_of_week'].isin([1,2,3,4,5])]
oct = ((npr_fhbuurt['B_TYD_V_RECHT'] > '2018-05-15') & (npr_fhbuurt['B_TYD_V_RECHT'] < '2018-10-01'))
npr_fhbuurt_oct = npr_fhbuurt.loc[oct]
npr_fhbuurt_oct['day_of_week'] = npr_fhbuurt_oct['B_TYD_V_RECHT'].dt.dayofweek
npr_fhbuurt_oct = npr_fhbuurt_oct.loc[npr_fhbuurt_oct['day_of_week'].isin([1,2,3,4,5])]
feb = ((npr_fhbuurt['B_TYD_V_RECHT'] > '2019-02-01'))
npr_fhbuurt_feb = npr_fhbuurt.loc[feb]
npr_fhbuurt_feb['day_of_week'] = npr_fhbuurt_feb['B_TYD_V_RECHT'].dt.dayofweek
npr_fhbuurt_feb = npr_fhbuurt_feb.loc[npr_fhbuurt_feb['day_of_week'].isin([1,2,3,4,5])]
npr_fhbuurt['total'] = npr_fhbuurt.sum(axis=1)
npr_fhbuurt = npr_fhbuurt.loc[npr_fhbuurt['total'] > 0]
npr_fhbuurt['total'].mean()
may_mean = npr_fhbuurt_may.mean().to_frame(name = 'may')
oct_mean = npr_fhbuurt_oct.mean().to_frame(name = 'oct')
feb_mean = npr_fhbuurt_feb.mean().to_frame(name = 'feb')
may_mean.head()
npr_fhbuurt_feb.head()
mean_occ = pd.concat([may_mean, oct_mean,feb_mean], axis = 1)
mean_occ.index =mean_occ.index.str[5:]
mean_occ.head()
print(mean_occ.may.max())
print(mean_occ.oct.max())
print(mean_occ.feb.max())
figure(figsize=(7,3))
x = mean_occ.index[:24]
y1 = mean_occ.may[:24]
y2 = mean_occ.oct[:24]
y3 = mean_occ.feb[:24]
plt.plot(x,y1,alpha=0.6,linewidth=3, label='may 2018')
plt.plot(x,y2,alpha=0.6,linewidth=3, label='oct 2018')
plt.plot(x,y3,alpha=0.6,linewidth=3,c='C3', label='feb 2019')
# plt.xticks(np.arange(min(x), max(x), 1.0))
plt.margins(x=0.01)
plt.xlabel('Hour')
plt.ylabel('Delta occupancy on-street')
plt.legend()
plt.savefig('delta_occupancy_onstreet_real.png')
###Output
_____no_output_____
###Markdown
occupancy off-street
###Code
filename = 'milou_AC.xlsx'
folder = r'/Users/miloubisseling/Documents/UvA/DataScience/Thesis/'
path = folder + filename
raw_garagedata = pd.read_excel(path,sheet_name='ParkingTransaction')
raw_garagedata['Entry_DT_UTC'] = pd.to_datetime(raw_garagedata['Entry_DT_UTC'])
raw_garagedata['Exit_DT_UTC'] = pd.to_datetime(raw_garagedata['Exit_DT_UTC'])
oct_scenario = ['oct','2018-09-01','2018-09-30']
feb_scenario = ['feb','2019-02-01','2019-02-28']
def get_occupancy_offstreet_real(df,scenario):
month = scenario[0]
start = scenario[1]
stop = scenario[2]
raw_data = df[(df['Entry_DT_UTC'] > start) & (df['Entry_DT_UTC'] < stop)]
raw_data['PermitHolder'] = np.where(raw_data['SubscriptionModel'] == 'SHPV', 1,0)
raw_data['Visitor'] = np.where((raw_data['SubscriptionModel'] == 'Passanten') | (raw_data['SubscriptionModel'] == 'Passanten Dip In Uit'), 1,0)
raw_data_entry = raw_data.set_index(['Entry_DT_UTC'])
raw_data_exit = raw_data.set_index(['Exit_DT_UTC'])
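    # hourly counts of permit-holder and visitor movements, resampled on the entry/exit timestamps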
aggs_raw_permitholders_entry = raw_data_entry.resample('H').apply({'PermitHolder':'sum'})
aggs_raw_visitors_entry = raw_data_entry.resample('H').apply({'Visitor':'sum'})
aggs_raw_permitholders_exit = raw_data_exit.resample('H').apply({'PermitHolder':'sum'})
aggs_raw_visitors_exit = raw_data_exit.resample('H').apply({'Visitor':'sum'})
aggs_raw_garage_entry=pd.merge(aggs_raw_visitors_entry, aggs_raw_permitholders_entry, left_index=True, right_index=True)
aggs_raw_garage_entry['PermitHolder AND Visitor'] = aggs_raw_garage_entry['PermitHolder']+aggs_raw_garage_entry['Visitor']
    aggs_raw_garage_exit = pd.merge(aggs_raw_visitors_exit, aggs_raw_permitholders_exit, left_index=True, right_index=True)
aggs_raw_garage_exit['PermitHolder AND Visitor'] = aggs_raw_garage_exit['PermitHolder']+aggs_raw_garage_exit['Visitor']
garage_data_entry = aggs_raw_garage_entry
garage_data_entry = garage_data_entry.reset_index()
garage_data_entry['hour']=garage_data_entry['Entry_DT_UTC'].dt.hour
garage_data_grouped = garage_data_entry.groupby(garage_data_entry.hour).mean()
entries_cum_garage = garage_data_grouped.cumsum(axis = 0)
entries_cum_garage = entries_cum_garage[['PermitHolder AND Visitor','PermitHolder','Visitor']]
entries_cum_garage.columns.values[0]= 'all_entries'
entries_cum_garage.columns.values[1]= 'permitholder_entries'
entries_cum_garage.columns.values[2]= 'nonpermitholder_entries'
    garage_data_exit = aggs_raw_garage_exit
    garage_data_exit.index.name = 'Exit_DT_UTC'
    garage_data_exit = garage_data_exit.reset_index()
    garage_data_exit['hour'] = garage_data_exit['Exit_DT_UTC'].dt.hour
garage_data_grouped = garage_data_exit.groupby(garage_data_exit.hour).mean()
exits_cum_garage = garage_data_grouped.cumsum(axis = 0)
exits_cum_garage = exits_cum_garage[['PermitHolder AND Visitor','PermitHolder','Visitor']]
exits_cum_garage.columns.values[0]= 'all_exits'
exits_cum_garage.columns.values[1]= 'permitholder_exits'
exits_cum_garage.columns.values[2]= 'nonpermitholder_exits'
all_entries_exits = pd.concat([entries_cum_garage,exits_cum_garage],axis=1)
all_entries_exits['diff'+ '_'+month] = all_entries_exits['all_entries']-all_entries_exits['all_exits']
return all_entries_exits[['diff'+ '_'+month]]
occ_offstreet_real_oct = get_occupancy_offstreet_real(raw_garagedata,oct_scenario)
occ_offstreet_real_feb = get_occupancy_offstreet_real(raw_garagedata,feb_scenario)
figure(figsize=(7,3))
x2 = mean_occ.index[:24]
y4 = mean_occ.may[:24]
y5 = mean_occ.oct[:24]
y6 = mean_occ.feb[:24]
# plt.plot(x2,y4,alpha=0.6,linewidth=3,label='may onstreet')
# plt.plot(x2,y5,alpha=0.6,linewidth=3,label='oct onstreet')
# plt.plot(x2,y6,alpha=0.6,linewidth=3,label='feb onstreet',c='C3')
x1 = occ_offstreet_real_oct.index
x2 = occ_offstreet_real_feb.index
y1 = occ_offstreet_real_oct.diff_oct
y2 = occ_offstreet_real_feb.diff_feb
plt.plot(x1,y1,'--',alpha=0.6,linewidth=3,label='oct 2018',c='C1')
plt.plot(x2,y2,'--',alpha=0.6,linewidth=3,label='feb 2019',c='C3')
plt.xticks(np.arange(min(x1), max(x1)+1, 1.0))
plt.margins(x=0.01)
plt.xlabel('Hour')
plt.ylabel('Delta occupancy off-street')
plt.legend()
plt.savefig('delta_occupancy_offstreet_real.png')
###Output
_____no_output_____
###Markdown
Parking garage occupancy calculated from the Excel occupancy file
###Code
parkinggarage_sept = 'parking_garage_occ_sept.csv'
parkinggarage_feb = 'parking_garage_occ_feb.csv'
def get_occupancy_parkinggarage(filename,month):
# extract csv
filename = filename
folder = r'/Users/miloubisseling/Documents/UvA/DataScience/Thesis/'
path = folder + filename
occ = pd.read_csv(path,sep=';',decimal=',')
# Group by hour
occ = occ.groupby(occ.hour).mean()
# # Drop mean time column
# occ = occ.drop(['Time'], axis = 1)
occ.columns.values[0] = 'occupation_' + month
occ.columns.values[1] = 'permitholders_' + month
occ.columns.values[2] = 'nonpermitholders_' + month
occ.columns.values[3] = 'occ_rate_' + month
return occ
occ_parkinggarage_sept = get_occupancy_parkinggarage(parkinggarage_sept,'sept')
occ_parkinggarage_feb = get_occupancy_parkinggarage(parkinggarage_feb,'feb')
all_real_parkinggarage = pd.concat([occ_parkinggarage_sept,occ_parkinggarage_feb],axis=1)
figure(figsize=(7,6))
x = all_real_parkinggarage.index
y2 = all_real_parkinggarage.occ_rate_sept
y3 = all_real_parkinggarage.occ_rate_feb
y4 = all_real_parkinggarage.permitholders_sept/600
y5 = all_real_parkinggarage.permitholders_feb/600
plt.plot(x,y2,alpha=0.6,linewidth=3,color='C1',label='oct all')
plt.plot(x,y3,alpha=0.6,linewidth=3,color='C3',label='feb all')
plt.plot(x,y4,'x',alpha=0.6,linewidth=3,color='C1',label='oct permitholders')
plt.plot(x,y5,'x',alpha=0.6,linewidth=3,color='C3',label='feb permitholders')
plt.xticks(np.arange(min(x), max(x), 1.0))
plt.margins(x=0.01)
plt.xlabel('Hour')
plt.ylabel('Occupancy rate')
plt.legend()
plt.savefig('occupancy_offstreet_real.png')
###Output
_____no_output_____
###Markdown
SimPark results
###Code
occ_onstreet_may_csv = 'publication/may/onStreetParking.csv'
occ_onstreet_oct_csv = 'publication/oct/onStreetParking.csv'
occ_onstreet_oct2_csv = 'publication/oct2/onStreetParking.csv'
occ_onstreet_feb_csv = 'publication/feb/onStreetParking.csv'
def get_occupancy(filename,month):
# extract csv
filename = filename
folder = r'/Users/miloubisseling/Documents/UvA/DataScience/Thesis/datascience-thesis/data/results/'
path = folder + filename
occ = pd.read_csv(path)
# 60 seconds in one minute
N = 60
# Group by hour
occ = occ.groupby(occ.Time // N).mean()
# Group by minute
occ = occ.groupby(occ.Time // N).mean()
# Drop mean time column
occ = occ.drop(['Time'], axis = 1)
occ.columns.values[0] = month
return occ
###Output
_____no_output_____
###Markdown
occupancy on-street relative
###Code
occ_onstreet_may = get_occupancy(occ_onstreet_may_csv,'may')
occ_onstreet_oct = get_occupancy(occ_onstreet_oct_csv,'oct')
occ_onstreet_oct2 = get_occupancy(occ_onstreet_oct2_csv,'oct2')
occ_onstreet_feb = get_occupancy(occ_onstreet_feb_csv,'feb')
occ_onstreet_all = pd.concat([occ_onstreet_may,occ_onstreet_oct,occ_onstreet_oct2,occ_onstreet_feb],axis=1)
occ_onstreet_all = occ_onstreet_all.iloc[1:]
occ_onstreet_all
figure(figsize=(7,6))
x = occ_onstreet_all.index
y1 = occ_onstreet_all.may
y2 = occ_onstreet_all.oct
y3 = occ_onstreet_all.oct2
y4 = occ_onstreet_all.feb
plt.plot(x,y1,alpha=0.6,linewidth=3)
plt.plot(x,y2,alpha=0.6,linewidth=3)
plt.plot(x,y3,':',linewidth=3)
plt.plot(x,y4,alpha=0.6,linewidth=3)
plt.xticks(np.arange(min(x), max(x), 1.0))
plt.margins(x=0.01)
plt.xlim(xmin=1)
plt.xlim(xmax=23)
plt.xlabel('Hour')
plt.ylabel('Occupancy rate')
plt.legend()
plt.savefig('occupancy_onstreet.png')
###Output
_____no_output_____
###Markdown
occupancy off-street relative
###Code
occ_offstreet_oct_csv = 'publication/oct/parkingGarage_albertCuypGarage.csv'
occ_offstreet_oct2_csv = 'publication/oct2/parkingGarage_albertCuypGarage.csv'
occ_offstreet_feb_csv = 'publication/feb/parkingGarage_albertCuypGarage.csv'
occ_offstreet_oct = get_occupancy(occ_offstreet_oct_csv,'oct')
occ_offstreet_oct2 = get_occupancy(occ_offstreet_oct2_csv,'oct2')
occ_offstreet_feb = get_occupancy(occ_offstreet_feb_csv,'feb')
occ_offstreet_all = pd.concat([occ_offstreet_oct,occ_offstreet_oct2,occ_offstreet_feb],axis=1)
figure(figsize=(7,6))
x = occ_offstreet_all.index
y2 = occ_offstreet_all.oct
y3 = occ_offstreet_all.oct2
y4 = occ_offstreet_all.feb
plt.plot(x,y2,alpha=0.6,linewidth=3,color='C1')
plt.plot(x,y3,':',linewidth=3,color='C2')
plt.plot(x,y4,alpha=0.6,linewidth=3,color='C3')
plt.xticks(np.arange(min(x), max(x), 1.0))
plt.margins(x=0.01)
plt.xlim(xmin=1)
plt.xlim(xmax=23)
plt.xlabel('Hour')
plt.ylabel('Occupancy rate')
plt.legend()
plt.savefig('occupancy_offstreet.png')
###Output
_____no_output_____
###Markdown
all occupancy relative
###Code
figure(figsize=(7,6))
x0 = occ_onstreet_all.index
y0 = occ_onstreet_all.may
x1 = occ_onstreet_all.index
x2 = occ_offstreet_all.index
y1 = occ_onstreet_all.oct
y2 = occ_offstreet_all.oct
x3 = occ_onstreet_all.index
x4 = occ_offstreet_all.index
y3 = occ_onstreet_all.oct2
y4 = occ_offstreet_all.oct2
x5 = occ_onstreet_all.index
x6 = occ_offstreet_all.index
y5 = occ_onstreet_all.feb
y6 = occ_offstreet_all.feb
plt.plot(x0,y0,alpha=0.6,linewidth=3,label='may 2018 on-street')
plt.plot(x1,y1,alpha=0.6,linewidth=3, label='oct 2018 on-street',color='C1')
plt.plot(x2,y2,'--',alpha=0.6,linewidth=3, label='oct 2018 off-street',color='C1')
plt.plot(x3,y3,alpha=0.6,linewidth=3, label='oct2 2018 on-street',color='C2')
plt.plot(x4,y4,'--',alpha=0.6,linewidth=3, label='oct2 2018 off-street',color='C2')
plt.plot(x5,y5,alpha=0.6,linewidth=3, label='feb 2019 on-street',color='C3')
plt.plot(x6,y6,'--',alpha=0.6,linewidth=3, label='feb 2019 off-street',color='C3')
plt.xticks(np.arange(min(x), max(x)+3, 1.0))
plt.margins(x=0.01)
plt.xlim(xmin=1)
plt.xlim(xmax=23)
plt.xlabel('Hour')
plt.ylabel('Occupancy rate')
plt.legend()
plt.savefig('rel_occupancy_on_offstreet.png')
###Output
_____no_output_____
###Markdown
occupancy on-street absolute
###Code
figure(figsize=(7,6))
may_offstreet_parking_places = 0
oct_offstreet_parking_places = 600
oct2_offstreet_parking_places = 600
feb_offstreet_parking_places = 600
may_onstreet_parking_places = 567
oct_onstreet_parking_places = 567
oct2_onstreet_parking_places = 567
feb_onstreet_parking_places = 81
#may
x0 = occ_onstreet_all.index
y0 = occ_onstreet_all.may*may_onstreet_parking_places
#oct
x1 = occ_onstreet_all.index
x2 = occ_offstreet_all.index
y1 = occ_onstreet_all.oct2*oct_onstreet_parking_places
y2 = occ_offstreet_all.oct2*oct_offstreet_parking_places
#oct2
x3 = occ_onstreet_all.index
x4 = occ_offstreet_all.index
y3 = occ_onstreet_all.oct*oct2_onstreet_parking_places
y4 = occ_offstreet_all.oct* oct2_offstreet_parking_places
#feb
x5 = occ_onstreet_all.index
x6 = occ_offstreet_all.index
y5 = occ_onstreet_all.feb*feb_onstreet_parking_places
y6 = occ_offstreet_all.feb*feb_offstreet_parking_places
plt.plot(x0,y0,alpha=0.6,linewidth=3,label='may onstreet')
plt.plot(x1,y1,alpha=0.6,linewidth=3, label='oct onstreet',color='C1')
plt.plot(x2,y2,'--',alpha=0.6,linewidth=3, label='oct offstreet',color='C1')
plt.plot(x3,y3,alpha=0.6,linewidth=3, label='oct2 onstreet',color='C2')
plt.plot(x4,y4,'--',alpha=0.6,linewidth=3, label='oct2 offstreet',color='C2')
plt.plot(x5,y5,alpha=0.6,linewidth=3, label='feb onstreet',color='C3')
plt.plot(x6,y6,'--',alpha=0.6,linewidth=3, label='feb offstreet',color='C3')
plt.xticks(np.arange(min(x), max(x)+3, 1.0))
plt.margins(x=0.01)
plt.xlim(xmin=1)
plt.xlim(xmax=23)
plt.xlabel('Hour')
plt.ylabel('Occupied places')
plt.legend()
plt.savefig('abs_occupancy_on_offstreet.png')
###Output
_____no_output_____
###Markdown
all occupancy absolute
###Code
figure(figsize=(7,6))
may_offstreet_parking_places = 0
oct_offstreet_parking_places = 600
oct2_offstreet_parking_places = 600
feb_offstreet_parking_places = 600
may_onstreet_parking_places = 567
oct_onstreet_parking_places = 567
oct2_onstreet_parking_places = 567
feb_onstreet_parking_places = 81
#may
x0 = occ_onstreet_all.index
y0 = occ_onstreet_all.may*may_onstreet_parking_places
#oct2
x1 = occ_onstreet_all.index
x2 = occ_offstreet_all.index
y1 = occ_onstreet_all.oct2*oct2_onstreet_parking_places
y2 = occ_offstreet_all.oct2*oct2_offstreet_parking_places
#oct
x3 = occ_onstreet_all.index
x4 = occ_offstreet_all.index
y3 = occ_onstreet_all.oct*oct_onstreet_parking_places
y4 = occ_offstreet_all.oct* oct_offstreet_parking_places
#feb
x5 = occ_onstreet_all.index
x6 = occ_offstreet_all.index
y5 = occ_onstreet_all.feb*feb_onstreet_parking_places
y6 = occ_offstreet_all.feb*feb_offstreet_parking_places
plt.plot(x0,y0,alpha=0.6,linewidth=3,label='may onstreet')
plt.plot(x1,y1,alpha=0.6,linewidth=3, label='oct onstreet',color='C1')
plt.plot(x2,y2,'--',alpha=0.6,linewidth=3, label='oct offstreet',color='C1')
plt.plot(x3,y3,alpha=0.6,linewidth=3, label='oct2 onstreet',color='C2')
plt.plot(x4,y4,'--',alpha=0.6,linewidth=3, label='oct2 offstreet',color='C2')
plt.plot(x5,y5,alpha=0.6,linewidth=3, label='feb onstreet',color='C3')
plt.plot(x6,y6,'--',alpha=0.6,linewidth=3, label='feb offstreet',color='C3')
plt.xticks(np.arange(1, max(x)+3, 1.0))
plt.margins(x=0.0001)
plt.xlim(xmin=1)
plt.xlim(xmax=23)
plt.xlabel('Hour')
plt.ylabel('Occupied places')
plt.legend()
plt.savefig('abs_occupancy_on_offstreet.png')
###Output
_____no_output_____
###Markdown
avg cruising time started in hour
###Code
search_time_start_may_csv = 'publication/may/averageParkingCruisingTime.csv'
search_time_start_oct_csv = 'publication/oct/averageParkingCruisingTime.csv'
search_time_start_oct2_csv = 'publication/oct2/averageParkingCruisingTime.csv'
search_time_start_feb_csv = 'publication/feb/averageParkingCruisingTime.csv'
def get_cruising_time(filename,month):
# extract csv
filename = filename
folder = r'/Users/miloubisseling/Documents/UvA/DataScience/Thesis/datascience-thesis/data/results/'
path = folder + filename
cruising = pd.read_csv(path,sep='\t')
# change minutes column to month name
cruising.columns.values[1] = month
# drop mean time column
cruising = cruising.drop(['Hour'], axis = 1)
return cruising
cruising_started_in_hour_may = get_cruising_time(search_time_start_may_csv,'may')
cruising_started_in_hour_oct = get_cruising_time(search_time_start_oct_csv,'oct')
cruising_started_in_hour_oct2 = get_cruising_time(search_time_start_oct2_csv,'oct2')
cruising_started_in_hour_feb = get_cruising_time(search_time_start_feb_csv,'feb')
cruising_started_in_hour_all = pd.concat([cruising_started_in_hour_may,cruising_started_in_hour_oct,cruising_started_in_hour_oct2,cruising_started_in_hour_feb],axis=1)
figure(figsize=(7,6))
x = cruising_started_in_hour_all.index
y1 = cruising_started_in_hour_all.may/60
y2 = cruising_started_in_hour_all.oct/60
y3 = cruising_started_in_hour_all.oct2/60
y4 = cruising_started_in_hour_all.feb/60
plt.plot(x,y1,alpha=0.6,linewidth=3, label='may 2018')
plt.plot(x,y2,alpha=0.6,linewidth=3, label='oct 2018')
plt.plot(x,y3,alpha=0.6,linewidth=3, label='oct2 2018')
plt.plot(x,y4,alpha=0.6,linewidth=3, label='feb 2019')
plt.xticks(np.arange(min(x), max(x)+2, 1.0))
plt.margins(x=0.01)
plt.xlim(xmin=1)
plt.xlim(xmax=23)
plt.xlabel('Hour')
plt.ylabel('Search time in minutes')
plt.legend()
plt.savefig('cruising_start_in_hour.png')
###Output
_____no_output_____
###Markdown
SimPark results vs. real-world occupancy off-street absolute thesis
###Code
figure(figsize=(7,6))
may_offstreet_parking_places = 0
oct_offstreet_parking_places = 600
oct2_offstreet_parking_places = 600
feb_offstreet_parking_places = 600
may_onstreet_parking_places = 567
oct_onstreet_parking_places = 567
oct2_onstreet_parking_places = 567
feb_onstreet_parking_places = 81
#oct
x2 = occ_offstreet_all.index
y2 = occ_offstreet_all.oct*oct_offstreet_parking_places
#oct2
x4 = occ_offstreet_all.index
y4 = occ_offstreet_all.oct2* oct2_offstreet_parking_places
#feb
x6 = occ_offstreet_all.index
y6 = occ_offstreet_all.feb*feb_offstreet_parking_places
plt.plot(x2,y2,'--',alpha=0.6,linewidth=3, label='oct 2018 simulation',color='C1')
plt.plot(x4,y4,'--',alpha=0.6,linewidth=3, label='oct2 2018 simulation',color='C2')
plt.plot(x6,y6,'--',alpha=0.6,linewidth=3, label='feb 2019 simulation',color='C3')
x1 = occ_offstreet_real_oct.index
x2 = occ_offstreet_real_feb.index
y1 = (occ_offstreet_real_oct.diff_oct+160)
y2 = (occ_offstreet_real_feb.diff_feb+432)
plt.plot(x1,y1,':',alpha=0.6,linewidth=3,label='oct 2018 real',c='C1')
plt.plot(x2,y2,':',alpha=0.6,linewidth=3,label='feb 2019 real',c='C3')
plt.xticks(np.arange(min(x2), max(x2)+1, 1.0))
plt.yticks(np.arange(50, 500, 50))
plt.margins(x=0.01)
plt.xlim(xmin=1)
plt.xlim(xmax=23)
plt.ylim(ymin=50)
plt.xlabel('Hour')
plt.ylabel('Occupied places off-street')
plt.legend(loc='lower left')
plt.savefig('real_vs_simulation_offstreet_abs_occupancy.png')
###Output
_____no_output_____
###Markdown
occupancy on-street absolute thesis
###Code
figure(figsize=(7,6))
may_offstreet_parking_places = 0
oct_offstreet_parking_places = 600
oct2_offstreet_parking_places = 600
feb_offstreet_parking_places = 600
may_onstreet_parking_places = 567
oct_onstreet_parking_places = 567
oct2_onstreet_parking_places = 567
feb_onstreet_parking_places = 81
#may
x0 = occ_onstreet_all.index
y0 = occ_onstreet_all.may*may_onstreet_parking_places
#oct
x1 = occ_onstreet_all.index
y1 = occ_onstreet_all.oct*oct_onstreet_parking_places
#feb
x5 = occ_onstreet_all.index
y5 = occ_onstreet_all.feb*feb_onstreet_parking_places
plt.plot(x0,y0,alpha=0.6,linewidth=3,label='may onstreet simulation')
plt.plot(x1,y1,alpha=0.6,linewidth=3, label='oct onstreet simulation',color='C1')
plt.plot(x5,y5,alpha=0.6,linewidth=3, label='feb onstreet simulation',color='C3')
a1 = mean_occ.index[:24]
b1 = (mean_occ.may[:24]+(0.9*may_onstreet_parking_places))
b2 = (mean_occ.oct[:24]+(0.45*oct_onstreet_parking_places))
b3 = (mean_occ.feb[:24]+(0.9*feb_onstreet_parking_places))
plt.plot(a1,b1,':',alpha=0.6,linewidth=3, label='may onstreet real',color='C0')
plt.plot(a1,b2,':',alpha=0.6,linewidth=3, label='oct onstreet real')
plt.plot(a1,b3,':',alpha=0.6,linewidth=3,c='C3', label='feb onstreet real')
plt.xlim(xmin=1)
plt.xlim(xmax=23)
plt.xlabel('Hour')
plt.ylabel('Occupied places')
plt.legend(loc='lower left')
plt.savefig('real_vs_simulation_onstreet_non-permitholder_abs_occupancy_wrong_comparison.png')
###Output
_____no_output_____
###Markdown
publication
###Code
sim_abs_may = 'publication/may/averageParkingOccupancyPerCombinationCategories.csv'
sim_abs_oct = 'publication/oct/averageParkingOccupancyPerCombinationCategories.csv'
sim_abs_oct2 = 'publication/oct2/averageParkingOccupancyPerCombinationCategories.csv'
sim_abs_feb = 'publication/feb/averageParkingOccupancyPerCombinationCategories.csv'
def get_occupancy_simulation_abs(filename,month):
# extract csv
filename = filename
folder = r'/Users/miloubisseling/Documents/UvA/DataScience/Thesis/datascience-thesis/data/results/'
path = folder + filename
occ = pd.read_csv(path)
# 60 seconds in one minute
N = 60
# Group by hour
occ = occ.groupby(occ.time // N).mean()
# Group by minute
occ = occ.groupby(occ.time // N).mean()
# Drop mean time column
occ = occ.drop(['time'], axis = 1)
# occ.columns.values[0] = month
return occ
df_sim_abs_may = get_occupancy_simulation_abs(sim_abs_may,'may')
df_sim_abs_oct = get_occupancy_simulation_abs(sim_abs_oct,'oct')
df_sim_abs_oct2 = get_occupancy_simulation_abs(sim_abs_oct2,'oct2')
df_sim_abs_feb = get_occupancy_simulation_abs(sim_abs_feb,'feb')
figure(figsize=(7,6))
may_offstreet_parking_places = 0
oct_offstreet_parking_places = 600
oct2_offstreet_parking_places = 600
feb_offstreet_parking_places = 600
may_onstreet_parking_places = 567
oct_onstreet_parking_places = 567
oct2_onstreet_parking_places = 567
feb_onstreet_parking_places = 81
# on street data simulation
x0 = df_sim_abs_may.index
y0 = df_sim_abs_may['occupancy_no-category_on-street']
x1 = df_sim_abs_oct.index
y1 = df_sim_abs_oct['occupancy_no-category_on-street']
x5 = df_sim_abs_feb.index
y5 = df_sim_abs_feb['occupancy_no-category_on-street']
# # all data simulation
# x0 = occ_onstreet_all.index
# y0 = occ_onstreet_all.may*may_onstreet_parking_places
# x1 = occ_onstreet_all.index
# y1 = occ_onstreet_all.oct*oct_onstreet_parking_places
# x5 = occ_onstreet_all.index
# y5 = occ_onstreet_all.feb*feb_onstreet_parking_places
# plot simulation
plt.plot(x0,y0,alpha=0.6,linewidth=3,label='may onstreet simulation')
plt.plot(x1,y1,alpha=0.6,linewidth=3, label='oct onstreet simulation',color='C1')
plt.plot(x5,y5,alpha=0.6,linewidth=3, label='feb onstreet simulation',color='C3')
# data real-world
a1 = mean_occ.index[:24]
b1 = (mean_occ.may[:24]+(0.9*may_onstreet_parking_places))
b2 = (mean_occ.oct[:24]+(0.45*oct_onstreet_parking_places))
b3 = (mean_occ.feb[:24]+(0.9*feb_onstreet_parking_places))
# plot real-world
plt.plot(a1,b1,':',alpha=0.6,linewidth=3, label='may onstreet real',color='C0')
plt.plot(a1,b2,':',alpha=0.6,linewidth=3, label='oct onstreet real')
plt.plot(a1,b3,':',alpha=0.6,linewidth=3,c='C3', label='feb onstreet real')
# set axes
plt.xlim(xmin=1)
plt.xlim(xmax=23)
# set axes names
plt.xlabel('Hour')
plt.ylabel('Occupied places')
# put legend in the corner
plt.legend(loc='lower left')
# save plot
plt.savefig('real_vs_simulation_onstreet_non-permitholder_abs_occupancy_correct_comparison.png')
figure(figsize=(7,6))
#may
x0 = df_sim_abs_may.index
y0 = df_sim_abs_may['occupancy_no-category_on-street']
#oct
x1 = df_sim_abs_oct.index
y1 = df_sim_abs_oct['occupancy_no-category_on-street']
#feb
x5 = df_sim_abs_feb.index
y5 = df_sim_abs_feb['occupancy_no-category_on-street']
plt.plot(x0,y0,alpha=0.6,linewidth=3,label='may onstreet simulation')
plt.plot(x1,y1,alpha=0.6,linewidth=3, label='oct onstreet simulation',color='C1')
plt.plot(x5,y5,alpha=0.6,linewidth=3, label='feb onstreet simulation',color='C3')
plt.legend()
###Output
_____no_output_____
###Markdown
occupancy on-street absolute derived from the scan-car distribution
###Code
figure(figsize=(7,6))
may_offstreet_parking_places = 0
oct_offstreet_parking_places = 600
oct2_offstreet_parking_places = 600
feb_offstreet_parking_places = 600
may_onstreet_parking_places = 567
oct_onstreet_parking_places = 567
oct2_onstreet_parking_places = 567
feb_onstreet_parking_places = 81
# #may
# x0 = occ_onstreet_all.index
# y0 = occ_onstreet_all.may*may_onstreet_parking_places
# #oct
# x1 = occ_onstreet_all.index
# y1 = occ_onstreet_all.oct*oct_onstreet_parking_places
# #oct2
# x3 = occ_onstreet_all.index
# y3 = occ_onstreet_all.oct2*oct2_onstreet_parking_places
# #feb
# x5 = occ_onstreet_all.index
# y5 = occ_onstreet_all.feb*feb_onstreet_parking_places
# plt.plot(x0,y0,alpha=0.6,linewidth=3,label='may onstreet simulation')
# plt.plot(x1,y1,alpha=0.6,linewidth=3, label='oct onstreet simulation',color='C1')
# # plt.plot(x3,y3,alpha=0.6,linewidth=3, label='oct2 onstreet simulation',color='C2')
# plt.plot(x5,y5,alpha=0.6,linewidth=3, label='feb onstreet simulation',color='C3')
#may
x0 = df_sim_abs_may.index
y0 = df_sim_abs_may['occupancy_no-category_on-street']
#oct
x1 = df_sim_abs_oct.index
y1 = df_sim_abs_oct['occupancy_no-category_on-street']
#feb
x5 = df_sim_abs_feb.index
y5 = df_sim_abs_feb['occupancy_no-category_on-street']
plt.plot(x0,y0,alpha=0.6,linewidth=3,label='may 2018 simulation')
plt.plot(x1,y1,alpha=0.6,linewidth=3, label='oct 2018 simulation',color='C1')
plt.plot(x5,y5,alpha=0.6,linewidth=3, label='feb 2019 simulation',color='C3')
# data real-world
a1 = mean_occ.index[:24]
b1 = mean_occ.may[:24]+291
b2 = mean_occ.oct[:24]+414
b3 = mean_occ.feb[:24]+77
# plot real-world
plt.plot(a1,b1,':',alpha=0.6,linewidth=3, label='may 2018 real',color='C0')
plt.plot(a1,b2,':',alpha=0.6,linewidth=3, label='oct 2018 real')
plt.plot(a1,b3,':',alpha=0.6,linewidth=3,c='C3', label='feb 2019 real')
plt.xticks(np.arange(min(x), max(x)+3, 1.0))
plt.margins(x=0.01)
plt.xlim(xmin=1)
plt.xlim(xmax=23)
plt.xlabel('Hour')
plt.ylabel('Occupied places on-street')
plt.legend()
plt.savefig('real_vs_simulation_onstreet_non-permitholder_abs_occupancy_init_value_jeroen.png')
###Output
_____no_output_____
###Markdown
latex table utilities
###Code
filename = 'parking_utility_calculation_short.csv'
folder = r'/Users/miloubisseling/Documents/UvA/DataScience/Thesis/datascience-thesis/data/parking choice model/'
path = folder + filename
khaliq_parameters = pd.read_csv(path,sep=';')
table = khaliq_parameters.to_latex(index=False,na_rep='',buf='hoi.tex')
table
###Output
_____no_output_____ |
how-to-use-azureml/track-and-monitor-experiments/tensorboard/tensorboard.ipynb | ###Markdown
Copyright (c) Microsoft Corporation. All rights reserved.Licensed under the MIT License.  Tensorboard Integration with Run History1. Run a TensorFlow job locally and view its TB output live.2. The same, for a DSVM.3. And once more, with an AmlCompute cluster.4. Finally, we'll collect all of these historical runs together into a single Tensorboard graph. Prerequisites* Understand the [architecture and terms](https://docs.microsoft.com/azure/machine-learning/service/concept-azure-machine-learning-architecture) introduced by Azure Machine Learning* If you are using an Azure Machine Learning Notebook VM, you are all set. Otherwise, go through the [configuration notebook](../../../configuration.ipynb) notebook to: * install the AML SDK * create a workspace and its configuration file (`config.json`)
###Code
# Check core SDK version number
import azureml.core
print("SDK version:", azureml.core.VERSION)
###Output
_____no_output_____
###Markdown
DiagnosticsOpt-in diagnostics for better experience, quality, and security of future releases.
###Code
from azureml.telemetry import set_diagnostics_collection
set_diagnostics_collection(send_diagnostics=True)
###Output
_____no_output_____
###Markdown
Initialize WorkspaceInitialize a workspace object from persisted configuration.
###Code
from azureml.core import Workspace
ws = Workspace.from_config()
print('Workspace name: ' + ws.name,
'Azure region: ' + ws.location,
'Subscription id: ' + ws.subscription_id,
'Resource group: ' + ws.resource_group, sep='\n')
###Output
_____no_output_____
###Markdown
Set experiment name and create projectChoose a name for your run history container in the workspace, and create a folder for the project.
###Code
from os import path, makedirs
experiment_name = 'tensorboard-demo'
# experiment folder
exp_dir = './sample_projects/' + experiment_name
if not path.exists(exp_dir):
makedirs(exp_dir)
# runs we started in this session, for the finale
runs = []
###Output
_____no_output_____
###Markdown
Download Tensorflow Tensorboard demo codeTensorflow's repository has an MNIST demo with extensive Tensorboard instrumentation. We'll use it here for our purposes.Note that we don't need to make any code changes at all - the code works without modification from the Tensorflow repository.
###Code
import requests
import os
tf_code = requests.get("https://raw.githubusercontent.com/tensorflow/tensorflow/r1.8/tensorflow/examples/tutorials/mnist/mnist_with_summaries.py")
with open(os.path.join(exp_dir, "mnist_with_summaries.py"), "w") as file:
file.write(tf_code.text)
###Output
_____no_output_____
###Markdown
Configure and run locallyWe'll start by running this locally. While it might not initially seem that useful to use this for a local run - why not just run TB against the files generated locally? - even in this case there is some value to using this feature. Your local run will be registered in the run history, and your Tensorboard logs will be uploaded to the artifact store associated with this run. Later, you'll be able to restore the logs from any run, regardless of where it happened.Note that for this run, you will need to install Tensorflow on your local machine by yourself. Further, the Tensorboard module (that is, the one included with Tensorflow) must be accessible to this notebook's kernel, as the local machine is what runs Tensorboard.
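As a rough sketch of that "restore later" workflow (not part of this notebook's flow — the experiment name and the choice of run below are assumptions), you could fetch a past run from the experiment's run history and point Tensorboard at it:
###Code
# Hypothetical sketch: attach Tensorboard to a previously completed run from run history
from azureml.core import Workspace, Experiment
from azureml.tensorboard import Tensorboard

ws = Workspace.from_config()
past_runs = list(Experiment(ws, 'tensorboard-demo').get_runs())
tb = Tensorboard(past_runs[:1])  # any historical run works, not only live ones
tb.start()
# ... inspect the logs, then shut the instance down ...
tb.stop()
###Output
_____no_output_____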
###Code
from azureml.core.runconfig import RunConfiguration
# Create a run configuration.
run_config = RunConfiguration()
run_config.environment.python.user_managed_dependencies = True
# You can choose a specific Python environment by pointing to a Python path
#run_config.environment.python.interpreter_path = '/home/ninghai/miniconda3/envs/sdk2/bin/python'
from azureml.core import Experiment
from azureml.core.script_run_config import ScriptRunConfig
logs_dir = os.path.join(os.curdir, "logs")
data_dir = os.path.abspath(os.path.join(os.curdir, "mnist_data"))
if not path.exists(data_dir):
makedirs(data_dir)
os.environ["TEST_TMPDIR"] = data_dir
# Writing logs to ./logs results in their being uploaded to Artifact Service,
# and thus, made accessible to our Tensorboard instance.
arguments_list = ["--log_dir", logs_dir]
# Create an experiment
exp = Experiment(ws, experiment_name)
# If you would like the run to go for longer, add --max_steps 5000 to the arguments list:
# arguments_list += ["--max_steps", "5000"]
script = ScriptRunConfig(exp_dir,
script="mnist_with_summaries.py",
run_config=run_config,
arguments=arguments_list)
run = exp.submit(script)
# You can also wait for the run to complete
# run.wait_for_completion(show_output=True)
runs.append(run)
###Output
_____no_output_____
###Markdown
Start TensorboardNow, while the run is in progress, we just need to start Tensorboard with the run as its target, and it will begin streaming logs.
###Code
from azureml.tensorboard import Tensorboard
# The Tensorboard constructor takes an array of runs, so be sure and pass it in as a single-element array here
tb = Tensorboard([run])
# If successful, start() returns a string with the URI of the instance.
tb.start()
###Output
_____no_output_____
###Markdown
Stop TensorboardWhen you're done, make sure to call the `stop()` method of the Tensorboard object, or it will stay running even after your job completes.
###Code
tb.stop()
###Output
_____no_output_____
###Markdown
Now, with a DSVMTensorboard uploading works with all compute targets. Here we demonstrate it from a DSVM.Note that the Tensorboard instance itself will be run by the notebook kernel. Again, this means this notebook's kernel must have access to the Tensorboard module.If you are unfamiliar with DSVM configuration, check [Train in a remote VM](../../training/train-on-remote-vm/train-on-remote-vm.ipynb) for a more detailed breakdown.**Note**: To streamline the compute that Azure Machine Learning creates, we are making updates to support creating only single to multi-node `AmlCompute`. The `DSVMCompute` class will be deprecated in a later release, but the DSVM can be created using the below single line command and then attached(like any VM) using the sample code below. Also note, that we only support Linux VMs for remote execution from AML and the commands below will spin a Linux VM only.```shell create a DSVM in your resource group note you need to be at least a contributor to the resource group in order to execute this command successfully.(myenv) $ az vm create --resource-group <resource_group_name> --name <vm_name> --image microsoft-dsvm:linux-data-science-vm-ubuntu:linuxdsvmubuntu:latest --admin-username <username> --admin-password <password> --generate-ssh-keys --authentication-type password```You can also use [this url](https://portal.azure.com/create/microsoft-dsvm.linux-data-science-vm-ubuntulinuxdsvmubuntu) to create the VM using the Azure Portal.
###Code
from azureml.core.compute import ComputeTarget, RemoteCompute
from azureml.core.compute_target import ComputeTargetException
username = os.getenv('AZUREML_DSVM_USERNAME', default='<my_username>')
address = os.getenv('AZUREML_DSVM_ADDRESS', default='<ip_address_or_fqdn>')
compute_target_name = 'cpudsvm'
# if you want to connect using SSH key instead of username/password you can provide parameters private_key_file and private_key_passphrase
try:
attached_dsvm_compute = RemoteCompute(workspace=ws, name=compute_target_name)
print('found existing:', attached_dsvm_compute.name)
except ComputeTargetException:
config = RemoteCompute.attach_configuration(username=username,
address=address,
ssh_port=22,
private_key_file='./.ssh/id_rsa')
attached_dsvm_compute = ComputeTarget.attach(ws, compute_target_name, config)
attached_dsvm_compute.wait_for_completion(show_output=True)
###Output
_____no_output_____
###Markdown
Submit run using TensorFlow estimatorInstead of manually configuring the DSVM environment, we can use the TensorFlow estimator and everything is set up automatically.
###Code
from azureml.train.dnn import TensorFlow
script_params = {"--log_dir": "./logs"}
# If you want the run to go longer, set --max-steps to a higher number.
# script_params["--max_steps"] = "5000"
tf_estimator = TensorFlow(source_directory=exp_dir,
compute_target=attached_dsvm_compute,
entry_script='mnist_with_summaries.py',
script_params=script_params)
run = exp.submit(tf_estimator)
runs.append(run)
###Output
_____no_output_____
###Markdown
Start Tensorboard with this runJust like before.
###Code
# The Tensorboard constructor takes an array of runs, so be sure and pass it in as a single-element array here
tb = Tensorboard([run])
# If successful, start() returns a string with the URI of the instance.
tb.start()
###Output
_____no_output_____
###Markdown
Stop TensorboardWhen you're done, make sure to call the `stop()` method of the Tensorboard object, or it will stay running even after your job completes.
###Code
tb.stop()
###Output
_____no_output_____
###Markdown
Once more, with an AmlCompute clusterJust to prove we can, let's create an AmlCompute CPU cluster, and run our demo there, as well.
###Code
from azureml.core.compute import ComputeTarget, AmlCompute
# choose a name for your cluster
cluster_name = "cpucluster"
cts = ws.compute_targets
found = False
if cluster_name in cts and cts[cluster_name].type == 'AmlCompute':
found = True
print('Found existing compute target.')
compute_target = cts[cluster_name]
if not found:
print('Creating a new compute target...')
compute_config = AmlCompute.provisioning_configuration(vm_size='STANDARD_D2_V2',
max_nodes=4)
# create the cluster
compute_target = ComputeTarget.create(ws, cluster_name, compute_config)
compute_target.wait_for_completion(show_output=True, min_node_count=None)
# use get_status() to get a detailed status for the current cluster.
# print(compute_target.get_status().serialize())
###Output
_____no_output_____
###Markdown
Submit run using TensorFlow estimatorAgain, we can use the TensorFlow estimator and everything is set up automatically.
###Code
script_params = {"--log_dir": "./logs"}
# If you want the run to go longer, set --max-steps to a higher number.
# script_params["--max_steps"] = "5000"
tf_estimator = TensorFlow(source_directory=exp_dir,
compute_target=compute_target,
entry_script='mnist_with_summaries.py',
script_params=script_params)
run = exp.submit(tf_estimator)
runs.append(run)
###Output
_____no_output_____
###Markdown
Start Tensorboard with this runOnce more...
###Code
# The Tensorboard constructor takes an array of runs, so be sure and pass it in as a single-element array here
tb = Tensorboard([run])
# If successful, start() returns a string with the URI of the instance.
tb.start()
###Output
_____no_output_____
###Markdown
Stop TensorboardWhen you're done, make sure to call the `stop()` method of the Tensorboard object, or it will stay running even after your job completes.
###Code
tb.stop()
###Output
_____no_output_____
###Markdown
FinaleIf you've paid close attention, you'll have noticed that we've been saving the run objects in an array as we went along. We can start a Tensorboard instance that combines all of these run objects into a single process. This way, you can compare historical runs. You can even do this with live runs; if you made some of those previous runs longer via the `--max_steps` parameter, they might still be running, and you'll see them live in this instance as well.
###Code
# The Tensorboard constructor takes an array of runs...
# and it turns out that we have been building one of those all along.
tb = Tensorboard(runs)
# If successful, start() returns a string with the URI of the instance.
tb.start()
###Output
_____no_output_____
###Markdown
Stop TensorboardAs you might already know, make sure to call the `stop()` method of the Tensorboard object, or it will stay running (until you kill the kernel associated with this notebook, at least).
###Code
tb.stop()
###Output
_____no_output_____
###Markdown
Copyright (c) Microsoft Corporation. All rights reserved.Licensed under the MIT License.  Tensorboard Integration with Run History1. Run a TensorFlow job locally and view its TB output live.2. The same, for a DSVM.3. And once more, with an AmlCompute cluster.4. Finally, we'll collect all of these historical runs together into a single Tensorboard graph. Prerequisites* Understand the [architecture and terms](https://docs.microsoft.com/azure/machine-learning/service/concept-azure-machine-learning-architecture) introduced by Azure Machine Learning* If you are using an Azure Machine Learning Notebook VM, you are all set. Otherwise, go through the [configuration notebook](../../../configuration.ipynb) notebook to: * install the AML SDK * create a workspace and its configuration file (`config.json`)
###Code
# Check core SDK version number
import azureml.core
print("SDK version:", azureml.core.VERSION)
###Output
_____no_output_____
###Markdown
DiagnosticsOpt-in diagnostics for better experience, quality, and security of future releases.
###Code
from azureml.telemetry import set_diagnostics_collection
set_diagnostics_collection(send_diagnostics=True)
###Output
_____no_output_____
###Markdown
Initialize WorkspaceInitialize a workspace object from persisted configuration.
###Code
from azureml.core import Workspace
ws = Workspace.from_config()
print('Workspace name: ' + ws.name,
'Azure region: ' + ws.location,
'Subscription id: ' + ws.subscription_id,
'Resource group: ' + ws.resource_group, sep='\n')
###Output
_____no_output_____
###Markdown
Set experiment name and create projectChoose a name for your run history container in the workspace, and create a folder for the project.
###Code
from os import path, makedirs
experiment_name = 'tensorboard-demo'
# experiment folder
exp_dir = './sample_projects/' + experiment_name
if not path.exists(exp_dir):
makedirs(exp_dir)
# runs we started in this session, for the finale
runs = []
###Output
_____no_output_____
###Markdown
Download Tensorflow Tensorboard demo codeTensorflow's repository has an MNIST demo with extensive Tensorboard instrumentation. We'll use it here for our purposes.Note that we don't need to make any code changes at all - the code works without modification from the Tensorflow repository.
###Code
import requests
import os
tf_code = requests.get("https://raw.githubusercontent.com/tensorflow/tensorflow/r2.1/tensorflow/examples/tutorials/mnist/mnist_with_summaries.py")
input_code = requests.get("https://raw.githubusercontent.com/tensorflow/tensorflow/r2.1/tensorflow/examples/tutorials/mnist/input_data.py")
with open(os.path.join(exp_dir, "mnist_with_summaries.py"), "w") as file:
file.write(tf_code.text.replace("from tensorflow.examples.tutorials.mnist import input_data", "import input_data"))
with open(os.path.join(exp_dir, "input_data.py"), "w") as file:
file.write(input_code.text)
###Output
_____no_output_____
###Markdown
Configure and run locallyWe'll start by running this locally. While it might not initially seem that useful to use this for a local run - why not just run TB against the files generated locally? - even in this case there is some value to using this feature. Your local run will be registered in the run history, and your Tensorboard logs will be uploaded to the artifact store associated with this run. Later, you'll be able to restore the logs from any run, regardless of where it happened.Note that for this run, you will need to install Tensorflow on your local machine by yourself. Further, the Tensorboard module (that is, the one included with Tensorflow) must be accessible to this notebook's kernel, as the local machine is what runs Tensorboard.
###Code
from azureml.core.runconfig import RunConfiguration
# Create a run configuration.
run_config = RunConfiguration()
run_config.environment.python.user_managed_dependencies = True
# You can choose a specific Python environment by pointing to a Python path
#run_config.environment.python.interpreter_path = '/home/ninghai/miniconda3/envs/sdk2/bin/python'
from azureml.core import Experiment
from azureml.core.script_run_config import ScriptRunConfig
logs_dir = os.path.join(os.curdir, os.path.join("logs", "tb-logs"))
data_dir = os.path.abspath(os.path.join(os.curdir, "mnist_data"))
if not path.exists(data_dir):
makedirs(data_dir)
os.environ["TEST_TMPDIR"] = data_dir
# Writing logs to ./logs results in their being uploaded to Artifact Service,
# and thus, made accessible to our Tensorboard instance.
arguments_list = ["--log_dir", logs_dir]
# Create an experiment
exp = Experiment(ws, experiment_name)
# If you would like the run to go for longer, add --max_steps 5000 to the arguments list:
# arguments_list += ["--max_steps", "5000"]
script = ScriptRunConfig(exp_dir,
script="mnist_with_summaries.py",
run_config=run_config,
arguments=arguments_list)
run = exp.submit(script)
# You can also wait for the run to complete
# run.wait_for_completion(show_output=True)
runs.append(run)
###Output
_____no_output_____
###Markdown
Start TensorboardNow, while the run is in progress, we just need to start Tensorboard with the run as its target, and it will begin streaming logs.
###Code
from azureml.tensorboard import Tensorboard
# The Tensorboard constructor takes an array of runs, so be sure and pass it in as a single-element array here
tb = Tensorboard([run])
# If successful, start() returns a string with the URI of the instance.
tb.start()
###Output
_____no_output_____
###Markdown
Stop TensorboardWhen you're done, make sure to call the `stop()` method of the Tensorboard object, or it will stay running even after your job completes.
###Code
tb.stop()
###Output
_____no_output_____
###Markdown
Now, with a DSVMTensorboard uploading works with all compute targets. Here we demonstrate it from a DSVM.Note that the Tensorboard instance itself will be run by the notebook kernel. Again, this means this notebook's kernel must have access to the Tensorboard module.If you are unfamiliar with DSVM configuration, check [Train in a remote VM](../../training/train-on-remote-vm/train-on-remote-vm.ipynb) for a more detailed breakdown.**Note**: To streamline the compute that Azure Machine Learning creates, we are making updates to support creating only single to multi-node `AmlCompute`. The `DSVMCompute` class will be deprecated in a later release, but the DSVM can be created using the below single line command and then attached(like any VM) using the sample code below. Also note, that we only support Linux VMs for remote execution from AML and the commands below will spin a Linux VM only.```shell create a DSVM in your resource group note you need to be at least a contributor to the resource group in order to execute this command successfully.(myenv) $ az vm create --resource-group --name --image microsoft-dsvm:linux-data-science-vm-ubuntu:linuxdsvmubuntu:latest --admin-username --admin-password --generate-ssh-keys --authentication-type password```You can also use [this url](https://portal.azure.com/create/microsoft-dsvm.linux-data-science-vm-ubuntulinuxdsvmubuntu) to create the VM using the Azure Portal.
###Code
from azureml.core.compute import ComputeTarget, RemoteCompute
from azureml.core.compute_target import ComputeTargetException
username = os.getenv('AZUREML_DSVM_USERNAME', default='<my_username>')
address = os.getenv('AZUREML_DSVM_ADDRESS', default='<ip_address_or_fqdn>')
compute_target_name = 'cpudsvm'
# if you want to connect using SSH key instead of username/password you can provide parameters private_key_file and private_key_passphrase
try:
attached_dsvm_compute = RemoteCompute(workspace=ws, name=compute_target_name)
print('found existing:', attached_dsvm_compute.name)
except ComputeTargetException:
config = RemoteCompute.attach_configuration(username=username,
address=address,
ssh_port=22,
private_key_file='./.ssh/id_rsa')
attached_dsvm_compute = ComputeTarget.attach(ws, compute_target_name, config)
attached_dsvm_compute.wait_for_completion(show_output=True)
###Output
_____no_output_____
###Markdown
Submit run using TensorFlow estimatorInstead of manually configuring the DSVM environment, we can use the TensorFlow estimator and everything is set up automatically.
###Code
from azureml.train.dnn import TensorFlow
script_params = {"--log_dir": "./logs"}
# If you want the run to go longer, set --max_steps to a higher number.
# script_params["--max_steps"] = "5000"
tf_estimator = TensorFlow(source_directory=exp_dir,
compute_target=attached_dsvm_compute,
entry_script='mnist_with_summaries.py',
script_params=script_params,
framework_version="2.0")
run = exp.submit(tf_estimator)
runs.append(run)
###Output
_____no_output_____
###Markdown
Start Tensorboard with this runJust like before.
###Code
# The Tensorboard constructor takes an array of runs, so be sure and pass it in as a single-element array here
tb = Tensorboard([run])
# If successful, start() returns a string with the URI of the instance.
tb.start()
###Output
_____no_output_____
###Markdown
Stop TensorboardWhen you're done, make sure to call the `stop()` method of the Tensorboard object, or it will stay running even after your job completes.
###Code
tb.stop()
###Output
_____no_output_____
###Markdown
Once more, with an AmlCompute clusterJust to prove we can, let's create an AmlCompute CPU cluster, and run our demo there, as well.
###Code
from azureml.core.compute import AmlCompute
# choose a name for your cluster
cluster_name = "cpu-cluster"
cts = ws.compute_targets
found = False
if cluster_name in cts and cts[cluster_name].type == 'AmlCompute':
found = True
print('Found existing compute target.')
compute_target = cts[cluster_name]
if not found:
print('Creating a new compute target...')
compute_config = AmlCompute.provisioning_configuration(vm_size='STANDARD_D2_V2',
max_nodes=4)
# create the cluster
compute_target = ComputeTarget.create(ws, cluster_name, compute_config)
compute_target.wait_for_completion(show_output=True, min_node_count=None)
# use get_status() to get a detailed status for the current cluster.
# print(compute_target.get_status().serialize())
###Output
_____no_output_____
###Markdown
Submit run using TensorFlow estimatorAgain, we can use the TensorFlow estimator and everything is set up automatically.
###Code
from azureml.train.dnn import TensorFlow
script_params = {"--log_dir": "./logs"}
# If you want the run to go longer, set --max_steps to a higher number.
# script_params["--max_steps"] = "5000"
tf_estimator = TensorFlow(source_directory=exp_dir,
compute_target=compute_target,
entry_script='mnist_with_summaries.py',
script_params=script_params,
framework_version="2.0")
run = exp.submit(tf_estimator)
runs.append(run)
###Output
_____no_output_____
###Markdown
Start Tensorboard with this runOnce more...
###Code
# The Tensorboard constructor takes an array of runs, so be sure and pass it in as a single-element array here
tb = Tensorboard([run])
# If successful, start() returns a string with the URI of the instance.
tb.start()
###Output
_____no_output_____
###Markdown
Stop TensorboardWhen you're done, make sure to call the `stop()` method of the Tensorboard object, or it will stay running even after your job completes.
###Code
tb.stop()
###Output
_____no_output_____
###Markdown
FinaleIf you've paid close attention, you'll have noticed that we've been saving the run objects in an array as we went along. We can start a Tensorboard instance that combines all of these run objects into a single process. This way, you can compare historical runs. You can even do this with live runs; if you made some of those previous runs longer via the `--max_steps` parameter, they might still be running, and you'll see them live in this instance as well.
###Code
# The Tensorboard constructor takes an array of runs...
# and it turns out that we have been building one of those all along.
tb = Tensorboard(runs)
# If successful, start() returns a string with the URI of the instance.
tb.start()
###Output
_____no_output_____
###Markdown
Stop TensorboardAs you might already know, make sure to call the `stop()` method of the Tensorboard object, or it will stay running (until you kill the kernel associated with this notebook, at least).
###Code
tb.stop()
###Output
_____no_output_____
###Markdown
Copyright (c) Microsoft Corporation. All rights reserved.Licensed under the MIT License.  Tensorboard Integration with Run History1. Run a Tensorflow job locally and view its TB output live.2. The same, for a DSVM.3. And once more, with an AmlCompute cluster.4. Finally, we'll collect all of these historical runs together into a single Tensorboard graph. Prerequisites* Understand the [architecture and terms](https://docs.microsoft.com/azure/machine-learning/service/concept-azure-machine-learning-architecture) introduced by Azure Machine Learning* If you are using an Azure Machine Learning Notebook VM, you are all set. Otherwise, go through the [configuration notebook](../../../configuration.ipynb) notebook to: * install the AML SDK * create a workspace and its configuration file (`config.json`)
###Code
# Check core SDK version number
import azureml.core
print("SDK version:", azureml.core.VERSION)
###Output
_____no_output_____
###Markdown
DiagnosticsOpt-in diagnostics for better experience, quality, and security of future releases.
###Code
from azureml.telemetry import set_diagnostics_collection
set_diagnostics_collection(send_diagnostics=True)
###Output
_____no_output_____
###Markdown
Initialize WorkspaceInitialize a workspace object from persisted configuration.
###Code
from azureml.core import Workspace
ws = Workspace.from_config()
print('Workspace name: ' + ws.name,
'Azure region: ' + ws.location,
'Subscription id: ' + ws.subscription_id,
'Resource group: ' + ws.resource_group, sep='\n')
###Output
_____no_output_____
###Markdown
Set experiment name and create projectChoose a name for your run history container in the workspace, and create a folder for the project.
###Code
from os import path, makedirs
experiment_name = 'tensorboard-demo'
# experiment folder
exp_dir = './sample_projects/' + experiment_name
if not path.exists(exp_dir):
makedirs(exp_dir)
# runs we started in this session, for the finale
runs = []
###Output
_____no_output_____
###Markdown
Download Tensorflow Tensorboard demo codeTensorflow's repository has an MNIST demo with extensive Tensorboard instrumentation. We'll use it here for our purposes.Note that we don't need to make any code changes at all - the code works without modification from the Tensorflow repository.
###Code
import requests
import os
tf_code = requests.get("https://raw.githubusercontent.com/tensorflow/tensorflow/r1.8/tensorflow/examples/tutorials/mnist/mnist_with_summaries.py")
with open(os.path.join(exp_dir, "mnist_with_summaries.py"), "w") as file:
file.write(tf_code.text)
###Output
_____no_output_____
###Markdown
Configure and run locallyWe'll start by running this locally. While it might not initially seem that useful to use this for a local run - why not just run TB against the files generated locally? - even in this case there is some value to using this feature. Your local run will be registered in the run history, and your Tensorboard logs will be uploaded to the artifact store associated with this run. Later, you'll be able to restore the logs from any run, regardless of where it happened.Note that for this run, you will need to install Tensorflow on your local machine by yourself. Further, the Tensorboard module (that is, the one included with Tensorflow) must be accessible to this notebook's kernel, as the local machine is what runs Tensorboard.
###Code
from azureml.core.runconfig import RunConfiguration
# Create a run configuration.
run_config = RunConfiguration()
run_config.environment.python.user_managed_dependencies = True
# You can choose a specific Python environment by pointing to a Python path
#run_config.environment.python.interpreter_path = '/home/ninghai/miniconda3/envs/sdk2/bin/python'
from azureml.core import Experiment
from azureml.core.script_run_config import ScriptRunConfig
logs_dir = os.path.join(os.curdir, "logs")
data_dir = os.path.abspath(os.path.join(os.curdir, "mnist_data"))
if not path.exists(data_dir):
makedirs(data_dir)
os.environ["TEST_TMPDIR"] = data_dir
# Writing logs to ./logs results in their being uploaded to Artifact Service,
# and thus, made accessible to our Tensorboard instance.
arguments_list = ["--log_dir", logs_dir]
# Create an experiment
exp = Experiment(ws, experiment_name)
# If you would like the run to go for longer, add --max_steps 5000 to the arguments list:
# arguments_list += ["--max_steps", "5000"]
script = ScriptRunConfig(exp_dir,
script="mnist_with_summaries.py",
run_config=run_config,
arguments=arguments_list)
run = exp.submit(script)
# You can also wait for the run to complete
# run.wait_for_completion(show_output=True)
runs.append(run)
###Output
_____no_output_____
###Markdown
Start TensorboardNow, while the run is in progress, we just need to start Tensorboard with the run as its target, and it will begin streaming logs.
###Code
from azureml.tensorboard import Tensorboard
# The Tensorboard constructor takes an array of runs, so be sure and pass it in as a single-element array here
tb = Tensorboard([run])
# If successful, start() returns a string with the URI of the instance.
tb.start()
###Output
_____no_output_____
###Markdown
Stop TensorboardWhen you're done, make sure to call the `stop()` method of the Tensorboard object, or it will stay running even after your job completes.
###Code
tb.stop()
###Output
_____no_output_____
###Markdown
Now, with a DSVMTensorboard uploading works with all compute targets. Here we demonstrate it from a DSVM.Note that the Tensorboard instance itself will be run by the notebook kernel. Again, this means this notebook's kernel must have access to the Tensorboard module.If you are unfamiliar with DSVM configuration, check [Train in a remote VM](../../training/train-on-remote-vm/train-on-remote-vm.ipynb) for a more detailed breakdown.**Note**: To streamline the compute that Azure Machine Learning creates, we are making updates to support creating only single to multi-node `AmlCompute`. The `DSVMCompute` class will be deprecated in a later release, but the DSVM can be created using the below single line command and then attached(like any VM) using the sample code below. Also note, that we only support Linux VMs for remote execution from AML and the commands below will spin a Linux VM only.```shell create a DSVM in your resource group note you need to be at least a contributor to the resource group in order to execute this command successfully.(myenv) $ az vm create --resource-group --name --image microsoft-dsvm:linux-data-science-vm-ubuntu:linuxdsvmubuntu:latest --admin-username --admin-password --generate-ssh-keys --authentication-type password```You can also use [this url](https://portal.azure.com/create/microsoft-dsvm.linux-data-science-vm-ubuntulinuxdsvmubuntu) to create the VM using the Azure Portal.
###Code
from azureml.core.compute import ComputeTarget, RemoteCompute
from azureml.core.compute_target import ComputeTargetException
username = os.getenv('AZUREML_DSVM_USERNAME', default='<my_username>')
address = os.getenv('AZUREML_DSVM_ADDRESS', default='<ip_address_or_fqdn>')
compute_target_name = 'cpudsvm'
# if you want to connect using SSH key instead of username/password you can provide parameters private_key_file and private_key_passphrase
try:
attached_dsvm_compute = RemoteCompute(workspace=ws, name=compute_target_name)
print('found existing:', attached_dsvm_compute.name)
except ComputeTargetException:
config = RemoteCompute.attach_configuration(username=username,
address=address,
ssh_port=22,
private_key_file='./.ssh/id_rsa')
attached_dsvm_compute = ComputeTarget.attach(ws, compute_target_name, config)
attached_dsvm_compute.wait_for_completion(show_output=True)
###Output
_____no_output_____
###Markdown
Submit run using TensorFlow estimatorInstead of manually configuring the DSVM environment, we can use the TensorFlow estimator and everything is set up automatically.
###Code
from azureml.train.dnn import TensorFlow
script_params = {"--log_dir": "./logs"}
# If you want the run to go longer, set --max_steps to a higher number.
# script_params["--max_steps"] = "5000"
tf_estimator = TensorFlow(source_directory=exp_dir,
compute_target=attached_dsvm_compute,
entry_script='mnist_with_summaries.py',
script_params=script_params)
run = exp.submit(tf_estimator)
runs.append(run)
###Output
_____no_output_____
###Markdown
Start Tensorboard with this runJust like before.
###Code
# The Tensorboard constructor takes an array of runs, so be sure and pass it in as a single-element array here
tb = Tensorboard([run])
# If successful, start() returns a string with the URI of the instance.
tb.start()
###Output
_____no_output_____
###Markdown
Stop TensorboardWhen you're done, make sure to call the `stop()` method of the Tensorboard object, or it will stay running even after your job completes.
###Code
tb.stop()
###Output
_____no_output_____
###Markdown
Once more, with an AmlCompute clusterJust to prove we can, let's create an AmlCompute CPU cluster, and run our demo there, as well.
###Code
from azureml.core.compute import ComputeTarget, AmlCompute
# choose a name for your cluster
cluster_name = "cpucluster"
cts = ws.compute_targets
found = False
if cluster_name in cts and cts[cluster_name].type == 'AmlCompute':
found = True
print('Found existing compute target.')
compute_target = cts[cluster_name]
if not found:
print('Creating a new compute target...')
compute_config = AmlCompute.provisioning_configuration(vm_size='STANDARD_D2_V2',
max_nodes=4)
# create the cluster
compute_target = ComputeTarget.create(ws, cluster_name, compute_config)
compute_target.wait_for_completion(show_output=True, min_node_count=None)
# use get_status() to get a detailed status for the current cluster.
# print(compute_target.get_status().serialize())
###Output
_____no_output_____
###Markdown
Submit run using TensorFlow estimatorAgain, we can use the TensorFlow estimator and everything is set up automatically.
###Code
script_params = {"--log_dir": "./logs"}
# If you want the run to go longer, set --max_steps to a higher number.
# script_params["--max_steps"] = "5000"
tf_estimator = TensorFlow(source_directory=exp_dir,
compute_target=compute_target,
entry_script='mnist_with_summaries.py',
script_params=script_params)
run = exp.submit(tf_estimator)
runs.append(run)
###Output
_____no_output_____
###Markdown
Start Tensorboard with this runOnce more...
###Code
# The Tensorboard constructor takes an array of runs, so be sure and pass it in as a single-element array here
tb = Tensorboard([run])
# If successful, start() returns a string with the URI of the instance.
tb.start()
###Output
_____no_output_____
###Markdown
Stop TensorboardWhen you're done, make sure to call the `stop()` method of the Tensorboard object, or it will stay running even after your job completes.
###Code
tb.stop()
###Output
_____no_output_____
###Markdown
FinaleIf you've paid close attention, you'll have noticed that we've been saving the run objects in an array as we went along. We can start a Tensorboard instance that combines all of these run objects into a single process. This way, you can compare historical runs. You can even do this with live runs; if you made some of those previous runs longer via the `--max_steps` parameter, they might still be running, and you'll see them live in this instance as well.
###Code
# The Tensorboard constructor takes an array of runs...
# and it turns out that we have been building one of those all along.
tb = Tensorboard(runs)
# If successful, start() returns a string with the URI of the instance.
tb.start()
###Output
_____no_output_____
###Markdown
Stop TensorboardAs you might already know, make sure to call the `stop()` method of the Tensorboard object, or it will stay running (until you kill the kernel associated with this notebook, at least).
###Code
tb.stop()
###Output
_____no_output_____
###Markdown
Copyright (c) Microsoft Corporation. All rights reserved.Licensed under the MIT License.  Tensorboard Integration with Run History1. Run a TensorFlow job locally and view its TB output live.2. The same, for a DSVM.3. And once more, with an AmlCompute cluster.4. Finally, we'll collect all of these historical runs together into a single Tensorboard graph. Prerequisites* Understand the [architecture and terms](https://docs.microsoft.com/azure/machine-learning/service/concept-azure-machine-learning-architecture) introduced by Azure Machine Learning* If you are using an Azure Machine Learning Notebook VM, you are all set. Otherwise, go through the [configuration notebook](../../../configuration.ipynb) notebook to: * install the AML SDK * create a workspace and its configuration file (`config.json`)
###Code
# Check core SDK version number
import azureml.core
print("SDK version:", azureml.core.VERSION)
###Output
_____no_output_____
###Markdown
DiagnosticsOpt-in diagnostics for better experience, quality, and security of future releases.
###Code
from azureml.telemetry import set_diagnostics_collection
set_diagnostics_collection(send_diagnostics=True)
###Output
_____no_output_____
###Markdown
Initialize WorkspaceInitialize a workspace object from persisted configuration.
###Code
from azureml.core import Workspace
ws = Workspace.from_config()
print('Workspace name: ' + ws.name,
'Azure region: ' + ws.location,
'Subscription id: ' + ws.subscription_id,
'Resource group: ' + ws.resource_group, sep='\n')
###Output
_____no_output_____
###Markdown
Set experiment name and create projectChoose a name for your run history container in the workspace, and create a folder for the project.
###Code
from os import path, makedirs
experiment_name = 'tensorboard-demo'
# experiment folder
exp_dir = './sample_projects/' + experiment_name
if not path.exists(exp_dir):
makedirs(exp_dir)
# runs we started in this session, for the finale
runs = []
###Output
_____no_output_____
###Markdown
Download Tensorflow Tensorboard demo codeTensorflow's repository has an MNIST demo with extensive Tensorboard instrumentation. We'll use it here for our purposes.Note that, apart from rewriting one import so the script picks up the locally downloaded `input_data.py`, the code works without modification from the Tensorflow repository.
###Code
import requests
import os
tf_code = requests.get("https://raw.githubusercontent.com/tensorflow/tensorflow/r2.1/tensorflow/examples/tutorials/mnist/mnist_with_summaries.py")
input_code = requests.get("https://raw.githubusercontent.com/tensorflow/tensorflow/r2.1/tensorflow/examples/tutorials/mnist/input_data.py")
with open(os.path.join(exp_dir, "mnist_with_summaries.py"), "w") as file:
file.write(tf_code.text.replace("from tensorflow.examples.tutorials.mnist import input_data", "import input_data"))
with open(os.path.join(exp_dir, "input_data.py"), "w") as file:
file.write(input_code.text)
###Output
_____no_output_____
###Markdown
Configure and run locallyWe'll start by running this locally. While it might not initially seem that useful to use this for a local run - why not just run TB against the files generated locally? - even in this case there is some value to using this feature. Your local run will be registered in the run history, and your Tensorboard logs will be uploaded to the artifact store associated with this run. Later, you'll be able to restore the logs from any run, regardless of where it happened.Note that for this run, you will need to install Tensorflow on your local machine by yourself. Further, the Tensorboard module (that is, the one included with Tensorflow) must be accessible to this notebook's kernel, as the local machine is what runs Tensorboard.
###Code
from azureml.core.runconfig import RunConfiguration
# Create a run configuration.
run_config = RunConfiguration()
run_config.environment.python.user_managed_dependencies = True
# You can choose a specific Python environment by pointing to a Python path
#run_config.environment.python.interpreter_path = '/home/ninghai/miniconda3/envs/sdk2/bin/python'
from azureml.core import Experiment
from azureml.core.script_run_config import ScriptRunConfig
logs_dir = os.path.join(os.curdir, os.path.join("logs", "tb-logs"))
data_dir = os.path.abspath(os.path.join(os.curdir, "mnist_data"))
if not path.exists(data_dir):
makedirs(data_dir)
os.environ["TEST_TMPDIR"] = data_dir
# Writing logs to ./logs results in their being uploaded to Artifact Service,
# and thus, made accessible to our Tensorboard instance.
arguments_list = ["--log_dir", logs_dir]
# Create an experiment
exp = Experiment(ws, experiment_name)
# If you would like the run to go for longer, add --max_steps 5000 to the arguments list:
# arguments_list += ["--max_steps", "5000"]
script = ScriptRunConfig(exp_dir,
script="mnist_with_summaries.py",
run_config=run_config,
arguments=arguments_list)
run = exp.submit(script)
# You can also wait for the run to complete
# run.wait_for_completion(show_output=True)
runs.append(run)
###Output
_____no_output_____
###Markdown
Start TensorboardNow, while the run is in progress, we just need to start Tensorboard with the run as its target, and it will begin streaming logs.
###Code
from azureml.tensorboard import Tensorboard
# The Tensorboard constructor takes an array of runs, so be sure and pass it in as a single-element array here
tb = Tensorboard([run])
# If successful, start() returns a string with the URI of the instance.
tb.start()
###Output
_____no_output_____
###Markdown
Stop TensorboardWhen you're done, make sure to call the `stop()` method of the Tensorboard object, or it will stay running even after your job completes.
###Code
tb.stop()
###Output
_____no_output_____
###Markdown
Now, with a DSVMTensorboard uploading works with all compute targets. Here we demonstrate it from a DSVM.Note that the Tensorboard instance itself will be run by the notebook kernel. Again, this means this notebook's kernel must have access to the Tensorboard module.If you are unfamiliar with DSVM configuration, check [Train in a remote VM](../../training/train-on-remote-vm/train-on-remote-vm.ipynb) for a more detailed breakdown.**Note**: To streamline the compute that Azure Machine Learning creates, we are making updates to support creating only single to multi-node `AmlCompute`. The `DSVMCompute` class will be deprecated in a later release, but the DSVM can be created using the below single line command and then attached(like any VM) using the sample code below. Also note, that we only support Linux VMs for remote execution from AML and the commands below will spin a Linux VM only.```shell create a DSVM in your resource group note you need to be at least a contributor to the resource group in order to execute this command successfully.(myenv) $ az vm create --resource-group --name --image microsoft-dsvm:linux-data-science-vm-ubuntu:linuxdsvmubuntu:latest --admin-username --admin-password --generate-ssh-keys --authentication-type password```You can also use [this url](https://portal.azure.com/create/microsoft-dsvm.linux-data-science-vm-ubuntulinuxdsvmubuntu) to create the VM using the Azure Portal.
###Code
from azureml.core.compute import ComputeTarget, RemoteCompute
from azureml.core.compute_target import ComputeTargetException
username = os.getenv('AZUREML_DSVM_USERNAME', default='<my_username>')
address = os.getenv('AZUREML_DSVM_ADDRESS', default='<ip_address_or_fqdn>')
compute_target_name = 'cpudsvm'
# if you want to connect using SSH key instead of username/password you can provide parameters private_key_file and private_key_passphrase
try:
attached_dsvm_compute = RemoteCompute(workspace=ws, name=compute_target_name)
print('found existing:', attached_dsvm_compute.name)
except ComputeTargetException:
config = RemoteCompute.attach_configuration(username=username,
address=address,
ssh_port=22,
private_key_file='./.ssh/id_rsa')
attached_dsvm_compute = ComputeTarget.attach(ws, compute_target_name, config)
attached_dsvm_compute.wait_for_completion(show_output=True)
###Output
_____no_output_____
###Markdown
Submit run using TensorFlow estimatorInstead of manually configuring the DSVM environment, we can use the TensorFlow estimator and everything is set up automatically.
###Code
from azureml.train.dnn import TensorFlow
script_params = {"--log_dir": "./logs"}
# If you want the run to go longer, set --max_steps to a higher number.
# script_params["--max_steps"] = "5000"
tf_estimator = TensorFlow(source_directory=exp_dir,
compute_target=attached_dsvm_compute,
entry_script='mnist_with_summaries.py',
script_params=script_params,
framework_version="2.0")
run = exp.submit(tf_estimator)
runs.append(run)
###Output
_____no_output_____
###Markdown
Start Tensorboard with this runJust like before.
###Code
# The Tensorboard constructor takes an array of runs, so be sure and pass it in as a single-element array here
tb = Tensorboard([run])
# If successful, start() returns a string with the URI of the instance.
tb.start()
###Output
_____no_output_____
###Markdown
Stop TensorboardWhen you're done, make sure to call the `stop()` method of the Tensorboard object, or it will stay running even after your job completes.
###Code
tb.stop()
###Output
_____no_output_____
###Markdown
Once more, with an AmlCompute clusterJust to prove we can, let's create an AmlCompute CPU cluster, and run our demo there, as well.
###Code
from azureml.core.compute import AmlCompute
# choose a name for your cluster
cluster_name = "cpu-cluster"
cts = ws.compute_targets
found = False
if cluster_name in cts and cts[cluster_name].type == 'AmlCompute':
found = True
print('Found existing compute target.')
compute_target = cts[cluster_name]
if not found:
print('Creating a new compute target...')
compute_config = AmlCompute.provisioning_configuration(vm_size='STANDARD_D2_V2',
max_nodes=4)
# create the cluster
compute_target = ComputeTarget.create(ws, cluster_name, compute_config)
compute_target.wait_for_completion(show_output=True, min_node_count=None)
# use get_status() to get a detailed status for the current cluster.
# print(compute_target.get_status().serialize())
###Output
_____no_output_____
###Markdown
Submit run using TensorFlow estimatorAgain, we can use the TensorFlow estimator and everything is set up automatically.
###Code
script_params = {"--log_dir": "./logs"}
# If you want the run to go longer, set --max_steps to a higher number.
# script_params["--max_steps"] = "5000"
tf_estimator = TensorFlow(source_directory=exp_dir,
compute_target=compute_target,
entry_script='mnist_with_summaries.py',
script_params=script_params,
framework_version="2.0")
run = exp.submit(tf_estimator)
runs.append(run)
###Output
_____no_output_____
###Markdown
Start Tensorboard with this runOnce more...
###Code
# The Tensorboard constructor takes an array of runs, so be sure and pass it in as a single-element array here
tb = Tensorboard([run])
# If successful, start() returns a string with the URI of the instance.
tb.start()
###Output
_____no_output_____
###Markdown
Stop TensorboardWhen you're done, make sure to call the `stop()` method of the Tensorboard object, or it will stay running even after your job completes.
###Code
tb.stop()
###Output
_____no_output_____
###Markdown
FinaleIf you've paid close attention, you'll have noticed that we've been saving the run objects in an array as we went along. We can start a Tensorboard instance that combines all of these run objects into a single process. This way, you can compare historical runs. You can even do this with live runs; if you made some of those previous runs longer via the `--max_steps` parameter, they might still be running, and you'll see them live in this instance as well.
###Code
# The Tensorboard constructor takes an array of runs...
# and it turns out that we have been building one of those all along.
tb = Tensorboard(runs)
# If successful, start() returns a string with the URI of the instance.
tb.start()
###Output
_____no_output_____
###Markdown
Stop TensorboardAs you might already know, make sure to call the `stop()` method of the Tensorboard object, or it will stay running (until you kill the kernel associated with this notebook, at least).
###Code
tb.stop()
###Output
_____no_output_____ |
examples/causal_language_modeling_flax.ipynb | ###Markdown
Pre-Training a 🤗 Transformers model on TPU with **Flax/JAX**In this notebook, we will see how to pretrain one of the [🤗 Transformers](https://github.com/huggingface/transformers) models on TPU using [**Flax**](https://flax.readthedocs.io/en/latest/index.html). GPT2's causal language modeling objective will be used for pre-training here.As can be seen on [this benchmark](https://github.com/huggingface/transformers/tree/master/examples/flax/language-modelingruntime-evaluation) using Flax/JAX on GPU/TPU is often much faster and can also be considerably cheaper than using PyTorch on GPU/TPU.[**Flax**](https://flax.readthedocs.io/en/latest/index.html) is a high-performance neural network library designed for flexibility built on top of JAX (see below). It aims to provide users with full control of their training code and is carefully designed to work well with JAX transformations such as `grad` and `pmap` (see the [Flax philosophy](https://flax.readthedocs.io/en/latest/philosophy.html)). For an introduction to Flax see the [Flax Basic Colab](https://flax.readthedocs.io/en/latest/notebooks/flax_basics.html) or the list of curated [Flax examples](https://flax.readthedocs.io/en/latest/examples.html).[**JAX**](https://jax.readthedocs.io/en/latest/index.html) is Autograd and XLA, brought together for high-performance numerical computing and machine learning research. It provides composable transformations of Python+NumPy programs: differentiate, vectorize, parallelize, Just-In-Time compile to GPU/TPU, and more. A great place for getting started with JAX is the [JAX 101 Tutorial](https://jax.readthedocs.io/en/latest/jax-101/index.html). If you're opening this Notebook on colab, you will probably need to install 🤗 Transformers, 🤗 Datasets, 🤗 Tokenizers as well as [Flax](https://github.com/google/flax.git) and [Optax](https://github.com/deepmind/optax). Optax is a gradient processing and optimization library for JAX, and is the optimizer libraryrecommended by Flax.
###Code
%%capture
!pip install datasets
!pip install git+https://github.com/huggingface/transformers.git
!pip install tokenizers
!pip install flax
!pip install git+https://github.com/deepmind/optax.git
###Output
_____no_output_____
###Markdown
You will also need to set up the TPU for JAX in this notebook. This can be done by executing the following lines.
###Code
import jax.tools.colab_tpu
jax.tools.colab_tpu.setup_tpu()
###Output
_____no_output_____
###Markdown
If everything is set up correctly, the following command should return a list of 8 TPU devices.
###Code
jax.local_devices()
###Output
_____no_output_____
###Markdown
In this notebook, we will pre-train an [autoregressive model](https://huggingface.co/transformers/model_summary.htmlautoregressive-models) on one of the languages of the OSCAR corpus. [OSCAR](https://oscar-corpus.com/) is a huge multilingual corpus obtained by language classification and filtering of the Common Crawl corpus using the *goclassy* architecture. Let's first select the language that our model should learn.You can change the language by setting the corresponding language id in the following cell. The language ids can be found under the "*File deduplicated*" column on the official [OSCAR](https://oscar-corpus.com/) website.Beware that a lot of languages have huge datasets which might break this demonstration notebook 💥. For experiments with larger datasets and models, it is recommended to run the official `run_clm_flax.py` script offline that can be found [here](https://github.com/huggingface/transformers/tree/master/examples/flax/language-modelingmasked-language-modeling).Here we select `is` for Icelandic 🇮🇸.
###Code
language = "is"
###Output
_____no_output_____
###Markdown
Next, we select the model architecture to be trained from scratch.Here we choose [**`distilgpt2`**](https://huggingface.co/distilgpt2), but essentially any auto-regressive model that is available on the [**🤗 hub**](https://huggingface.co/models?filter=masked-lm,jax) in JAX/Flax can be used.
###Code
model_config = "distilgpt2"
###Output
_____no_output_____
###Markdown
1. Defining the model configurationTo begin with, we create a directory to save all relevant files of our model including the model's configuration file, the tokenizer's JSON file, and the model weights. We call the directory `"distilgpt2-pretrained-is"`:
###Code
model_dir = model_config + f"-pretrained-{language}"
###Output
_____no_output_____
###Markdown
and create it:
###Code
from pathlib import Path
Path(model_dir).mkdir(parents=True, exist_ok=True)
###Output
_____no_output_____
###Markdown
Next, we'll download the model configuration:
###Code
from transformers import AutoConfig
config = AutoConfig.from_pretrained(model_config)
###Output
_____no_output_____
###Markdown
and save it to the directory:
###Code
config.save_pretrained(f"{model_dir}")
###Output
_____no_output_____
###Markdown
2. Training a tokenizer from scratchOne has to pre-process the raw text data to a format that is understandable by the model. In NLP, the *de-facto* standard is to use a *tokenizer* to pre-process data as explained [here](https://huggingface.co/transformers/preprocessing.html). We can leverage the blazing-fast 🤗 Tokenizer library to train a [**ByteLevelBPETokenizer**](https://medium.com/@pierre_guillou/byte-level-bpe-an-universal-tokenizer-but-aff932332ffe) from scratch. Let's import the necessary building blocks from `tokenizers` and the `load_dataset` function.
###Code
from datasets import load_dataset
from tokenizers import trainers, Tokenizer, normalizers, ByteLevelBPETokenizer
from pathlib import Path
###Output
_____no_output_____
###Markdown
We will store our tokenizer files and model files in a directory, called `model_dir`. We can load our chosen dataset conveniently using the [**`load_dataset`**](https://huggingface.co/docs/datasets/package_reference/loading_methods.html?highlight=load_datasetdatasets.load_dataset) function.
###Code
raw_dataset = load_dataset("oscar", f"unshuffled_deduplicated_{language}")
###Output
WARNING:datasets.builder:Reusing dataset oscar (/root/.cache/huggingface/datasets/oscar/unshuffled_deduplicated_is/1.0.0/e4f06cecc7ae02f7adf85640b4019bf476d44453f251a1d84aebae28b0f8d51d)
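###Markdown
Before training a tokenizer on this corpus, it can be helpful to take a quick look at the raw data. The following optional cell is a small sketch that simply prints the beginning of the first training sample; the `"text"` column is the one provided by the OSCAR dataset loaded above.
###Code
# Optional sanity check: print the start of the first raw training sample.
print(raw_dataset["train"][0]["text"][:300])
###Output
_____no_output_____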
###Markdown
Having imported the `ByteLevelBPETokenizer`, we instantiate it,
###Code
tokenizer = ByteLevelBPETokenizer()
###Output
_____no_output_____
###Markdown
define a training iterator,
###Code
def batch_iterator(batch_size=1000):
for i in range(0, len(raw_dataset), batch_size):
yield raw_dataset["train"][i: i + batch_size]["text"]
###Output
_____no_output_____
###Markdown
and train the tokenizer by defining `vocab_size` according to our model's configuration along with the `min_frequency` as well as some `special_tokens`:
###Code
tokenizer.train_from_iterator(batch_iterator(), vocab_size=config.vocab_size, min_frequency=2, special_tokens=[
"<s>",
"<pad>",
"</s>",
"<unk>",
"<mask>",
])
###Output
_____no_output_____
###Markdown
Finally, we save the trained tokenizer in the model folder.
###Code
tokenizer.save(f"{model_dir}/tokenizer.json")
###Output
_____no_output_____
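###Markdown
As a quick, optional check that the trained tokenizer behaves as expected, we can load the saved JSON file back with `tokenizers.Tokenizer.from_file` and tokenize a short string. The example sentence below is an arbitrary choice for illustration only.
###Code
from tokenizers import Tokenizer

# Reload the tokenizer we just saved and inspect the tokens it produces for a sample sentence.
test_tokenizer = Tokenizer.from_file(f"{model_dir}/tokenizer.json")
print(test_tokenizer.encode("Halló heimur!").tokens)
###Output
_____no_output_____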
###Markdown
For more information on training tokenizers, see [this](https://huggingface.co/docs/tokenizers/python/latest/tutorials/python/training_from_memory.html) document. 3. Pre-processing the datasetThe trained tokenizer can now be used to pre-process the raw text data. GPT2 was trained on sequences of up to `1024` tokens; see the paper [here](https://cdn.openai.com/better-language-models/language_models_are_unsupervised_multitask_learners.pdf). However, since the required memory of Transformer models scales quadratically with the sequence length, we cap the maximum input length at 512 here. The raw text data is pre-processed accordingly.
###Code
max_seq_length = 512
###Output
_____no_output_____
###Markdown
To cross-validate the model's performance during pre-training, we hold out 5% of the data as the validation set.Since the loaded dataset is cached, the convenient `split="train[:X%]"` syntax can be used to split the dataset with no computational overhead.The last 95% of the samples will be used as the training data:
###Code
raw_dataset["train"] = load_dataset("oscar", f"unshuffled_deduplicated_{language}", split="train[5%:]")
###Output
WARNING:datasets.builder:Reusing dataset oscar (/root/.cache/huggingface/datasets/oscar/unshuffled_deduplicated_is/1.0.0/e4f06cecc7ae02f7adf85640b4019bf476d44453f251a1d84aebae28b0f8d51d)
###Markdown
and the first 5% as the validation data.
###Code
raw_dataset["validation"] = load_dataset("oscar", f"unshuffled_deduplicated_{language}", split="train[:5%]")
###Output
WARNING:datasets.builder:Reusing dataset oscar (/root/.cache/huggingface/datasets/oscar/unshuffled_deduplicated_is/1.0.0/e4f06cecc7ae02f7adf85640b4019bf476d44453f251a1d84aebae28b0f8d51d)
###Markdown
For demonstration purposes, we will use only the first 20000 samples of the training data and the first 2000 samples of the validation data so that we don't have to wait too long for each cell to execute. If you want to run the colab on the **full** dataset, please comment out the following cell. In this case the notebook will run for *ca.* 7 hours until convergence and give a final loss and perplexity of *ca.* 3.67 and 39.12 respectively. Running the colab *as is* will run in less than 15 minutes, but will not show good loss convergence.
###Code
# these cells should be commented out to run on full dataset
raw_dataset["train"] = raw_dataset["train"].select(range(20000))
raw_dataset["validation"] = raw_dataset["validation"].select(range(2000))
###Output
_____no_output_____
###Markdown
Next, we load the previously trained `ByteLevelBPETokenizer` tokenizer to pre-process the raw text data:
###Code
from transformers import AutoTokenizer
tokenizer = AutoTokenizer.from_pretrained(f"{model_dir}")
###Output
Special tokens have been added in the vocabulary, make sure the associated word embeddings are fine-tuned or trained.
###Markdown
We can then write the function that will preprocess the raw text data. We just feed the text samples - stored in the `"text"` column - to the tokenizer:
###Code
def tokenize_function(examples):
return tokenizer(examples["text"])
###Output
_____no_output_____
###Markdown
and apply the tokenization function to every text sample via the convenient `map(...)` function of Datasets. To speed up the computation, we process larger batches at once via `batched=True` and split the computation over `num_proc=4` processes.**Note**: Running this command on the whole dataset might take up to 10 minutes ☕.
###Code
tokenized_datasets = raw_dataset.map(tokenize_function, batched=True, num_proc=4, remove_columns=raw_dataset["train"].column_names)
###Output
###Markdown
The model can process the training data most efficiently if all data samples are of the same length. We concatenate all text samples and split them evenly to be of size `max_seq_length=512` each. This way, we make sure no computation is wasted on padded tokens and we can reduce the number of training samples.Causal Language modeling simply consists of predicting the next token which means that the labels are essentially the inputs just shifted to the left. Thus, we copy the `input_ids` tensor and set it to `labels`.Let's define such a function to group the dataset into equally sized data samples:
###Code
def group_texts(examples):
concatenated_examples = {k: sum(examples[k], []) for k in examples.keys()}
total_length = len(concatenated_examples[list(examples.keys())[0]])
total_length = (total_length // max_seq_length) * max_seq_length
result = {
k: [t[i : i + max_seq_length] for i in range(0, total_length, max_seq_length)]
for k, t in concatenated_examples.items()
}
result["labels"] = result["input_ids"].copy()
return result
###Output
_____no_output_____
###Markdown
We pass `group_texts` to the `map(...)` function and set `batched=True` to make sure that the function is applied to a large batch of data samples. **Note**: Running this function on the whole dataset might take up to 50 minutes 🕒.
###Code
tokenized_datasets = tokenized_datasets.map(group_texts, batched=True, num_proc=4)
###Output
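###Markdown
To convince ourselves that the grouping worked, we can look at one of the resulting chunks. This optional sketch checks that an example now contains exactly `max_seq_length` token ids and decodes the first few of them back into text with the tokenizer loaded earlier.
###Code
# Optional sanity check: every grouped example should be exactly max_seq_length tokens long.
print(len(tokenized_datasets["train"][0]["input_ids"]))
print(tokenizer.decode(tokenized_datasets["train"][0]["input_ids"][:64]))
###Output
_____no_output_____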
###Markdown
Awesome, the data is now fully pre-processed and ready to be used for training 😎. 4. Pre-Training the modelNow we will see how the power of Google's tensor processing unit (TPU) can be leveraged with Flax/JAX for the compute-intensive pre-training of language models.We need to import `jax`, `flax`, `optax`, `numpy` to define our training loop. Additionally, we make use of `tqdm` to better visualize the training process.
###Code
import jax
import optax
import flax
import jax.numpy as jnp
import math
from flax.training import train_state
from flax.training.common_utils import get_metrics, onehot, shard
import numpy as np
from tqdm.notebook import tqdm
###Output
_____no_output_____
###Markdown
At first, we define all relevant hyper-parameters for pretraining in this notebook:- Each TPU will process a batch size of `16`- The model is trained for `10` epochs- The learning rate starts at `3e-4` and is successively linearly decayed with each training step- To reproduce the training run, a random seed is set to `0`.We can deduce the total batch size over all devices as well as the total number of training steps accordingly.
###Code
per_device_batch_size = 16
num_epochs = 10
training_seed = 0
learning_rate = 3e-4
total_batch_size = per_device_batch_size * jax.device_count()
num_train_steps = len(tokenized_datasets["train"]) // total_batch_size * num_epochs
###Output
_____no_output_____
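###Markdown
As an optional check, we can print the derived values to confirm that they match the `8 * 16 = 128` samples per step mentioned below for an 8-device TPU.
###Code
print("Total batch size:", total_batch_size)
print("Number of training steps:", num_train_steps)
###Output
_____no_output_____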
###Markdown
In the [official GPT2 paper](https://cdn.openai.com/better-language-models/language_models_are_unsupervised_multitask_learners.pdf) a batch size of 512 is used.Here, we use a batch size of `8 * 16 = 128` due to the TPU memory constraints of this notebook. When running this script locally on a TPUv3-8, one can easily use batch sizes of up to `8 * 64 = 512`. Now we randomly initialize a `distilgpt2` model according to its configuration. To save memory and improve speed, we initialize the weights directly in `bfloat16` by setting `dtype=jnp.dtype("bfloat16")`.
###Code
from transformers import FlaxAutoModelForCausalLM
model = FlaxAutoModelForCausalLM.from_config(config, seed=training_seed, dtype=jnp.dtype("bfloat16"))
###Output
_____no_output_____
###Markdown
Next, we define the learning rate schedule. A simple and effective learning rate schedule is the linear decay with warmup (click [here](https://huggingface.co/transformers/main_classes/optimizer_schedules.htmltransformers.get_linear_schedule_with_warmup) for more information). For simplicity, we set the number of warmup steps to 0 here. The schedule is then fully defined by the number of training steps and the learning rate.It is recommended to use the [**optax**](https://github.com/deepmind/optax) library for training utilities, *e.g.* learning rate schedules and optimizers.To see how to define a learning rate schedule with warmup, please take a look at the official Flax CLM pre-training script; a small sketch of such a schedule is also shown after the next cell.
###Code
linear_decay_lr_schedule_fn = optax.linear_schedule(init_value=learning_rate, end_value=0, transition_steps=num_train_steps)
###Output
_____no_output_____
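###Markdown
As referenced above, a warmup phase can be added by joining a linear warmup schedule with the linear decay. The following cell is only a hedged sketch of that pattern: the variable `num_warmup_steps` and its value of `1000` are illustrative assumptions, and the schedule defined here is not used in the rest of this notebook.
###Code
# Sketch only: linear warmup followed by linear decay, joined at the warmup boundary.
num_warmup_steps = 1000  # illustrative assumption, not used elsewhere in this notebook
warmup_fn = optax.linear_schedule(init_value=0.0, end_value=learning_rate, transition_steps=num_warmup_steps)
decay_fn = optax.linear_schedule(init_value=learning_rate, end_value=0.0, transition_steps=num_train_steps - num_warmup_steps)
warmup_decay_lr_schedule_fn = optax.join_schedules(schedules=[warmup_fn, decay_fn], boundaries=[num_warmup_steps])
###Output
_____no_output_____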
###Markdown
We will be using the standard Adam optimizer with weight decay, called AdamW (Adam + weight decay). AdamW can easily be imported from [optax](https://github.com/deepmind/optax) and is created from the just defined learning rate schedule as well as a couple of other hyper-parameters (*beta1*, *beta2*, *epsilon*) that are hard-coded in this notebook.For more information on AdamW (Adam + weight decay), one can take a look at [this](https://www.fast.ai/2018/07/02/adam-weight-decay/) blog post.
###Code
adamw = optax.adamw(learning_rate=linear_decay_lr_schedule_fn, b1=0.9, b2=0.98, eps=1e-8, weight_decay=0.01)
###Output
_____no_output_____
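###Markdown
Optax optimizers are gradient transformations and can be composed with `optax.chain`. As an optional, hedged sketch: if training ever turns out to be unstable, one could for example put global-norm gradient clipping in front of AdamW. The clipping threshold of `1.0` is an illustrative assumption, and this composed optimizer is not used in the rest of the notebook.
###Code
# Sketch only: gradient clipping composed with AdamW via optax.chain (not used below).
adamw_with_clipping = optax.chain(
    optax.clip_by_global_norm(1.0),  # illustrative clipping threshold
    optax.adamw(learning_rate=linear_decay_lr_schedule_fn, b1=0.9, b2=0.98, eps=1e-8, weight_decay=0.01),
)
###Output
_____no_output_____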
###Markdown
Next, we will create the *training state* that includes the optimizer, the loss function, and is responsible for updating the model's parameters during training.Most JAX transformations (notably [jax.jit](https://jax.readthedocs.io/en/latest/jax-101/02-jitting.html)) require functions that are transformed to have no side effects. This is because any such side-effects will only be executed once when the Python version of the function is run during compilation (see [Stateful Computations in JAX](https://jax.readthedocs.io/en/latest/jax-101/07-state.html)). As a consequence, Flax models (which can be transformed by JAX transformations) are **immutable**, and the state of the model (i.e., its weight parameters) is stored *outside* of the model instance.Models are initialized and updated in a purely functional way: you pass the state to the model when calling it, and the model returns the new (possibly modified) state, leaving the model instance itself unchanged.Flax provides a convenience class [`flax.training.train_state.TrainState`](https://github.com/google/flax/blob/9da95cdd12591f42d2cd4c17089861bff7e43cc5/flax/training/train_state.pyL22), which stores things such as the model parameters, the loss function, the optimizer, and exposes an `apply_gradients` function to update the model's weight parameters.Alright, let's begin by defining our *training state* class. We create a `TrainState` class that stores the model's forward pass as the `apply_fn`, the `params`, and the AdamW optimizer.
###Code
state = train_state.TrainState.create(apply_fn=model.__call__, params=model.params, tx=adamw)
###Output
_____no_output_____
###Markdown
Next, let's implement a data loader for both training and evaluation.The data loader can be defined as a [Python generator](https://wiki.python.org/moin/Generators) that returns a batch of model inputs every time it is called.First, a (for training, randomly permuted) ordering of the whole dataset is defined. Then, every time the data loader is iterated, the next batch of the dataset is extracted, converted to a JAX array and sharded over all local TPU devices.
###Code
def data_loader(rng, dataset, batch_size, shuffle=False):
steps_per_epoch = len(dataset) // batch_size
if shuffle:
batch_idx = jax.random.permutation(rng, len(dataset))
else:
batch_idx = jnp.arange(len(dataset))
batch_idx = batch_idx[: steps_per_epoch * batch_size] # Skip incomplete batch.
batch_idx = batch_idx.reshape((steps_per_epoch, batch_size))
for idx in batch_idx:
batch = dataset[idx]
batch = {k: jnp.array(v) for k, v in batch.items()}
batch = shard(batch)
yield batch
###Output
_____no_output_____
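###Markdown
To see the data layout that the parallelized training step will expect, we can pull a single batch from this loader and inspect its shape. After `shard(...)`, every array carries a leading device axis, so each entry should have shape `(jax.local_device_count(), per_device_batch_size, max_seq_length)`. This is an optional sketch that uses a throw-away PRNG key.
###Code
# Optional sanity check: inspect the sharded shape of one training batch.
sample_batch = next(data_loader(jax.random.PRNGKey(0), tokenized_datasets["train"], total_batch_size, shuffle=True))
print(jax.tree_map(lambda x: x.shape, sample_batch))
###Output
_____no_output_____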
###Markdown
At each training epoch, the dataset should be shuffled and superfluous samples that make the dataset not evenly divisible by the batch size are thrown away. The data loader above therefore prepares the batch indices for each epoch; for the training dataset they are additionally randomly shuffled before every epoch. During training, we want to update the model parameters and evaluate the performance after each epoch. Let's write the functions `train_step` and `eval_step` accordingly. During training the weight parameters should be updated as follows:1. Define a loss function `loss_fn` that first runs a forward pass of the model given data input. Remember that Flax models are immutable, and we explicitly pass them the state (in this case the model parameters and the RNG). `loss_fn` returns a scalar loss between the model output and the input targets (here, the softmax cross-entropy on the shifted logits and labels).2. Differentiate this loss function using [`jax.value_and_grad`](https://jax.readthedocs.io/en/latest/notebooks/autodiff_cookbook.htmlevaluate-a-function-and-its-gradient-using-value-and-grad). This is a JAX transformation called [automatic differentiation](https://en.wikipedia.org/wiki/Automatic_differentiation), which computes the gradient of `loss_fn` given the input to the function (i.e., the parameters of the model), and returns the value and the gradient in a pair `(loss, gradients)`.3. Compute the mean gradient over all devices using the collective operation [lax.pmean](https://jax.readthedocs.io/en/latest/_autosummary/jax.lax.pmean.html). As we will see below, each device runs `train_step` on a different batch of data, but by taking the mean here we ensure the model parameters are the same on all devices.4. Use `state.apply_gradients`, which applies the gradients to the weights.Below, you can see how each of the described steps above is put into practice.Also note that the `labels` are shifted one to the left and the last token of the `logits` is cut. This way, the model learns to predict the **next** token as defined in causal language modeling.
###Code
def train_step(state, batch, dropout_rng):
dropout_rng, new_dropout_rng = jax.random.split(dropout_rng)
def loss_fn(params):
labels = batch.pop("labels")
logits = state.apply_fn(**batch, params=params, dropout_rng=dropout_rng, train=True)[0]
loss = optax.softmax_cross_entropy(logits[..., :-1, :], onehot(labels[..., 1:], logits.shape[-1])).mean()
return loss
grad_fn = jax.value_and_grad(loss_fn)
loss, grad = grad_fn(state.params)
grad = jax.lax.pmean(grad, "batch")
new_state = state.apply_gradients(grads=grad)
metrics = jax.lax.pmean(
{"loss": loss, "learning_rate": linear_decay_lr_schedule_fn(state.step)}, axis_name="batch"
)
return new_state, metrics, new_dropout_rng
###Output
_____no_output_____
###Markdown
Now, we want to do parallelized training over all TPU devices. To do so, we use [`jax.pmap`](https://jax.readthedocs.io/en/latest/jax.html?highlight=pmapparallelization-pmap). This will compile the function once and run the same program on each device (it is an [SPMD program](https://en.wikipedia.org/wiki/SPMD)). When calling this pmapped function, all inputs (`"state"`, `"batch"`, `"dropout_rng"`) should be replicated for all devices, which means that the first axis of each argument is used to map over all TPU devices.
###Code
parallel_train_step = jax.pmap(train_step, "batch")
###Output
_____no_output_____
###Markdown
Similarly, we can now define the evaluation step. Here, the function is much simpler as we don't need to compute any gradients. To better monitor the performance improvement during training, the next-token loss and the corresponding perplexity are computed and stored in a `metrics` dictionary during evaluation.
###Code
def eval_step(params, batch):
labels = batch.pop("labels")
logits = model(**batch, params=params, train=False)[0]
loss = optax.softmax_cross_entropy(logits[..., :-1, :], onehot(labels[..., 1:], logits.shape[-1])).mean()
# summarize metrics
metrics = {"loss": loss, "perplexity": jnp.exp(loss)}
metrics = jax.lax.pmean(metrics, axis_name="batch")
return metrics
###Output
_____no_output_____
###Markdown
Similarly, we also apply `jax.pmap` to the evaluation step.
###Code
parallel_eval_step = jax.pmap(eval_step, "batch")
###Output
_____no_output_____
###Markdown
Next, we replicate/copy the training state (the model parameters and the optimizer state) on each device, so that we can pass it to our parallelized mapped functions.
###Code
state = flax.jax_utils.replicate(state)
###Output
/usr/local/lib/python3.7/dist-packages/jax/lib/xla_bridge.py:317: UserWarning: jax.host_count has been renamed to jax.process_count. This alias will eventually be removed; please update your code.
"jax.host_count has been renamed to jax.process_count. This alias "
/usr/local/lib/python3.7/dist-packages/jax/lib/xla_bridge.py:304: UserWarning: jax.host_id has been renamed to jax.process_index. This alias will eventually be removed; please update your code.
"jax.host_id has been renamed to jax.process_index. This alias "
###Markdown
We can almost start training! In a final preparation step, we generate a seeded [**PRNGKey**](https://jax.readthedocs.io/en/latest/_autosummary/jax.random.PRNGKey.htmljax-random-prngkey) used as the random seed for dropout layers and dataset shuffling.Similar to how we had to copy/replicate the state on all 8 TPU devices, we also need to generate one `PRNGKey` per device, which is why we split the initial `rng` key into 8 random seeds.
###Code
rng = jax.random.PRNGKey(training_seed)
dropout_rngs = jax.random.split(rng, jax.local_device_count())
###Output
_____no_output_____
###Markdown
Now, we are all set to finally start training! Let's put all the pieces together and write the training loop. We start each epoch by splitting off a new random seed that will be used for dataset shuffling and the dropout layers. Next, we generate the training dataset indices.In the first nested loop - the training loop - we shard the input batch across all 8 TPU devices and run the training step. Analogously, in the second nested loop - the evaluation loop - the evaluation batches are sharded and the evaluation step is run.**Note**: It might seem that the following cell "hangs" when executed for the first time. This is because JAX first traces & compiles the code the very first time it is run. After the first training step, you should notice that execution is much faster.
###Code
for epoch in tqdm(range(1, num_epochs + 1), desc=f"Epoch ...", position=0, leave=True):
rng, input_rng = jax.random.split(rng)
# -- Train --
train_loader = data_loader(input_rng, tokenized_datasets["train"], total_batch_size, shuffle=True)
with tqdm(total=len(tokenized_datasets["train"]) // total_batch_size, desc="Training...", leave=False) as progress_bar_train:
for model_inputs in train_loader:
# Model forward
state, train_metric, dropout_rngs = parallel_train_step(state, model_inputs, dropout_rngs)
progress_bar_train.update(1)
progress_bar_train.write(
f"Train... ({epoch}/{num_epochs} | Loss: {round(train_metric['loss'].mean(), 3)}, Learning Rate: {round(train_metric['learning_rate'].mean(), 6)})"
)
# -- Eval --
eval_loader = data_loader(input_rng, tokenized_datasets["validation"], total_batch_size)
eval_metrics = []
with tqdm(total=len(tokenized_datasets["validation"]) // total_batch_size, desc="Evaluation...", leave=False) as progress_bar_eval:
for model_inputs in eval_loader:
# Model forward
eval_metric = parallel_eval_step(state.params, model_inputs)
eval_metrics.append(eval_metric)
progress_bar_eval.update(1)
eval_metrics = get_metrics(eval_metrics)
eval_metrics = jax.tree_map(jnp.mean, eval_metrics)
progress_bar_eval.write(
f"Eval... ({epoch}/{num_epochs} | Loss: {eval_metrics['loss']} | Perplexity: {eval_metrics['perplexity']})"
)
###Output
_____no_output_____ |
train/session_2.ipynb | ###Markdown
Gap Framework - Computer Vision / CNNIn this tutorial, we will show you how to prepare a dataset for a convolutional neural network. We will do the following:1. Preprocess a collection of images of fruits from the Kaggle Fruits-360 dataset into Machine Learning ready data.2. Store the Machine Learning ready data into a repository.3. Create a batch feeder.4. Create a CNN.5. Retrieve the Machine Learning ready data.6. Train the CNN with our Machine Learning ready data.
###Code
# Let's go to the directory of the Gap Framework
import os
os.chdir("../")
%ls
###Output
_____no_output_____
###Markdown
SetupLet's start by importing the Gap vision module.
###Code
# import the Gap Vision module
from gapcv.vision import Image, Images
###Output
_____no_output_____
###Markdown
Location of DatasetThe Fruits 360 dataset can be downloaded from: http://www.labs.earth/datasets/fruits360.zip Let's go to a repository of images for classifying types of fruits. We will use this repository for image preprocessing for computer vision.The training and test datasets are under the corresponding subfolders Training and Test. Each subfolder under Training (and Test) is named according to the type of fruit (e.g., Apple) and optionally followed by a variety (e.g., Red Delicious).Let's take a look at the subfolders and see how many different classes of fruits are in our training set (i.e., 76).
###Code
from urllib import request
import zipfile
# NOTE: This dataset can be downloaded from www.labs.earth/datasets/fruits360.zip
os.makedirs("../FruitMaps/fruits/fruits-360/Training", exist_ok=True)
os.chdir("../FruitMaps/fruits/fruits-360/Training")
if not os.path.isdir('Apple'):
url = 'http://www.labs.earth/datasets/fruits360.zip'
request.urlretrieve(url, 'fruits360.zip')
#unzip file
with zipfile.ZipFile('fruits360.zip', 'r') as zip_ref:
zip_ref.extractall('./')
# delete file
os.remove('fruits360.zip')
print('done!')
# Let's get a list of all the subfolders of collections of fruits
labels = os.listdir()
print("Number of Labels:", len(labels))
print(labels)
###Output
_____no_output_____
###Markdown
Let's now look a little closer at the images in the training set. We will dive into the first subfolder (Apple Braeburn).
###Code
# Let's get a listing of all the images in the first subfolder.
data = os.listdir(labels[0])
print("Number of Images:", len(data))
###Output
_____no_output_____
###Markdown
We will use OpenCV to get some basic information on the images. From the shape of the pixel data we see that it's a 100x100 pixel image with three channels (i.e., RGB).
###Code
# Import the openCV module
import cv2
# We will look at the first image in this first collection.
print(data[0])
# Use openCV to read the image into memory as an uncompressed bitmap
pixels = cv2.imread(labels[0] + '/' + data[0])
# Let's look at the shape of the image.
print(pixels.shape)
###Output
_____no_output_____
###Markdown
Let's look at a few more random images in this subfolder and see if they are all the same size and type.
###Code
# Our random selection of images
for index in [ 7, 26, 143 ]:
print(data[index])
# Use openCV to read the image into memory as an uncompressed bitmap
pixels = cv2.imread(labels[0] + '/' + data[index])
# Let's look at the shape of the image.
print(pixels.shape)
###Output
_____no_output_____
###Markdown
Okay, they are the same size.Let's look at a different collection of fruits and see if they too are the same size. Let's use the 3rd (index 2) subfolder.
###Code
# Our random selection of images
for index in [ 7, 26, 143 ]:
print(data[index])
# Use openCV to read the image into memory as an uncompressed bitmap
pixels = cv2.imread(labels[2] + '/' + data[index])
# Let's look at the shape of the image.
print(pixels.shape)
###Output
_____no_output_____
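###Markdown
As an optional sanity check (not part of the original tutorial), one could verify every image in the first few collections at once instead of spot-checking individual files. The sketch below assumes the current working directory is still the Training folder and uses only `os` and OpenCV.
###Code
# Optional sanity check: confirm every image in the first three collections is 100x100x3.
import os
import cv2

expected_shape = (100, 100, 3)
for label in labels[:3]:
    for fname in os.listdir(label):
        pixels = cv2.imread(os.path.join(label, fname))
        assert pixels is not None, f"could not read {label}/{fname}"
        assert pixels.shape == expected_shape, f"{label}/{fname} has shape {pixels.shape}"
print("All checked images have shape", expected_shape)
###Output
_____no_output_____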
###Markdown
PracticeLet's do a practice run and preprocess one collection of fruit images.Note how we specified the subfolder instead of a list for the parameter images. The initializer (constructor) looks at the parameter and, if it is not a list but a string, it presumes the parameter is a path to a folder with images.
###Code
images = Images(labels[0], 0)
print("TIME", images.time)
###Output
_____no_output_____
###Markdown
Perhaps our images won't need to be as big to train the CNN. Let's take a shot in the dark and say they only need to be 50x50. This will reduce the size of our data by 75%.
###Code
images = Images(labels[0], 0, config=['resize=(50,50)'])
print("TIME", images.time)
os.remove('collection.0_100.h5')
###Output
_____no_output_____
###Markdown
Prepare the DataThe labels of the fruits are names, but we need integer values to train the CNN. Since all the subfolder names (fruit name+variety) are in the list labels, we will use the index of the list as the labels.To save time, we will only create machine learning ready data for three of the fruit collections (which is why the line for processing the entire set of fruits is commented out).
###Code
# Process all the Collections (subfolders) of Fruits
#images = Images(labels, [l for l in range(len(labels))], config=['resize=(50,50)'], name='fruits')
# For brevity, let's just do three of them
images = Images([labels[0], labels[1], labels[2]], [l for l in range(3)], config=['resize=(50,50)'], name='fruits')
print("TIME:", images.time)
###Output
_____no_output_____
###Markdown
Batch GenerationIn the full Kaggle Fruits360 dataset, the training and test data are in separate collections. Since for this code along we are just using a subset for demonstration, we will use only images from the training set and split part of it into our test set, as well as randomize the order of the training set. We will set the split to 20% test and use 42 as our random seed.
###Code
# Keep all the images for training, randomize their order
images.split = 0.2, 42
# Let's look at the (internal) _train property and verify that the indices of the images has been randomized.
images._train
###Output
_____no_output_____
###Markdown
Let's now split the data. Note how the method looks similar to scikit-learn's train_test_split() function, but is much simpler to use.
###Code
# When used as a getter, the split property will return the training / test data and labels the same as the scikit-learn
# procedure train_test_split()
X_train, X_test, Y_train, Y_test = images.split
print("Number of Images", len(X_train))
print("Image Example", X_train[0])
print("Label", Y_train[0])
###Output
_____no_output_____
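###Markdown
For comparison only, the rough scikit-learn equivalent of the two steps above might look like the following sketch; `images_list` and `labels_list` are hypothetical in-memory lists standing in for the raw data, not variables defined in this notebook.
###Code
# Hypothetical scikit-learn equivalent of `images.split = 0.2, 42` plus the getter above.
# `images_list` and `labels_list` are assumed in-memory lists of samples and labels.
from sklearn.model_selection import train_test_split

X_train, X_test, Y_train, Y_test = train_test_split(
    images_list, labels_list, test_size=0.2, random_state=42
)
###Output
_____no_output_____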
###Markdown
Let's create our mini-batch generator.
###Code
images.minibatch = 32
###Output
_____no_output_____
###Markdown
Construct the CNN
###Code
# Importing Tensorflow
import tensorflow as tf
from tensorflow.python.framework import ops
###Output
_____no_output_____
###Markdown
Input Vector and Output Vector and Hyperparameter PlaceholdersFor our first TensorFlow step, we will set up the TensorFlow placeholders.We have four placeholders we need to declare: one for the input vector (the pixel image data), one for the output vector (the fruit classifier), one for the dropout rate and one for the learning rate.For our input placeholder (which we call X), each image is 50x50 pixels with 3 channels (7500 values per image). For the output vector (which we call Y), we have 3 classifiers (3 different fruits). In both cases, we set the first dimension of the placeholder to None. The None is a placeholder for the number of samples we will feed into the neural network at run-time. We also know that our data is floating point values between 0 and 1, so we will set the data type to float32.We will declare two more placeholders for setting some hyper-parameters, the percent to keep in the dropout layer (D) and the learning rate in the optimizer (L). Since both are scalar values, we will define their shape as a single value.
###Code
# Let's first reset our graph, so our neural network components are all declared within the same graph
ops.reset_default_graph()
X = tf.placeholder(tf.float32, [None, 50, 50, 3]) # shape = [batch, width, height, channels ]
Y = tf.placeholder(tf.float32, [None, 3]) # shape = [batch, number of labels ]
D = tf.placeholder(tf.float32, [])
L = tf.placeholder(tf.float32, [])
###Output
_____no_output_____
###Markdown
INPUT (CONVOLUTION) LAYERLet's now design our input convolution layer. For our convolutional layer, we will need a set of filters, weights for the filters and biases for the output. We will use 32 filters. Each filter will be 5 x 5 (pixels) in size with three channels, corresponding to the RGB image.Each input filter will need a weight (which our model will learn during training). The weight is multiplied against the value of the input (filter), which we symbolically represent as Wx. Each output from the layer will need a bias (which our model will learn during training). The bias is added to the result of the weight multiplied by the filter (Wx + b).Let's create two Tensorflow variables for our weights and biases. The weights (which we call W) will need to be a 4D matrix. The first two dimensions are the filter size (5 x 5), then the number of channels, and then the number of outputs, which will be 32.The bias will be a vector of size 32 (one for each output).We need to initialize our weights and biases to some initial value. We will initialize the weights using a random value initializer (normalized distribution) and initialize the biases to 0.1.
###Code
tf.set_random_seed(1) # Set the same seed to get the same initialization as in this demo.
# The weights for the input (convolutional) layer
# 5x5 pixel filter, 3 channels, 32 outputs (filters)
W1 = tf.Variable(tf.truncated_normal([5, 5 , 3, 32], stddev=0.1))
# The bias for the output from the input (convolutional) layer
b1 = tf.Variable(tf.constant(0.1, shape=[32]))
###Output
_____no_output_____
###Markdown
Let's put it together into an input (convolutional) layer. We will use the Tensorflow method tf.nn.conv2d() to apply the filters and the weights (our variable W1) against the inputs (our placeholder X), add in the bias (b1), and pass the output through a linear rectifier (RELU) activation function.- Our placeholder X is already shaped as a 50x50 2D matrix (bitmap) with three color channels, so it can be fed to the convolution directly without reshaping.- We will set our stride for the sliding filters to move one pixel at a time in each direction.- We will set the padding when the filter moves past the edge of the bitmap to 'SAME'.- Add the bias to the 32 outputs from our convolution.- Pass the outputs from the input (convolutional) layer through a RELU activation function.
###Code
# The first layer (2D Convolution)
Z1 = tf.nn.conv2d( input=X, # tf.reshape(X, [-1, 50, 50, 3]),
filter=W1,
strides=[1,1,1,1],
padding='SAME') + b1
A1 = tf.nn.relu(Z1)
# Let's look at the what the shape of the output tensor will be from the activation unit.
# As you can see, it will be 50x50 pixels with 32 channels.
print(A1)
###Output
_____no_output_____
###Markdown
MAX POOLING LAYERThe max pooling layer will have as input the output from the first layer, which is a 4D matrix (batch, height, width, channels), where the number of channels is 32. We will use a 2x2 pooling window over each channel, with a stride of 2.
###Code
# the second layer (max pooling)
Z2 = tf.nn.max_pool(A1, ksize=[1,2,2,1], strides=[1,2,2,1], padding='SAME')
# Let's look at the shape of the output tensor will be from the max pooling layer.
# As you can see, it has been downsampled to 25x25 pixels with 32 channels.
print(Z2)
###Output
_____no_output_____
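###Markdown
As a small aside (not part of the original tutorial), here is the output-size arithmetic behind the two shape printouts above: with 'SAME' padding the output size is ceil(input / stride), so the stride-1 convolution keeps 50x50 and the stride-2 pooling halves it to 25x25.
###Code
# Output-size arithmetic for the layers above: SAME padding => out = ceil(in / stride)
import math
conv_out = math.ceil(50 / 1)         # stride-1 convolution keeps the spatial size: 50
pool_out = math.ceil(conv_out / 2)   # 2x2 max pooling with stride 2: 25
print(conv_out, pool_out)
###Output
_____no_output_____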
###Markdown
FIRST HIDDEN LAYERThe first hidden layer takes as input the flattened outputs from the max pooling layer and produces 256 outputs. Let's start by flattening the output from the max pooling layer.
###Code
F2 = tf.reshape(Z2, [-1, 25*25*32]) # Flatten each 25x25 pixel with 32 channels to single 1D vector
print(F2)
print(F2.get_shape()[1])
###Output
_____no_output_____
###Markdown
Each input will need a weight and each output a bias (which we will train). Each output will be passed through a dropout layer and then the linear rectifier unit (RELU) activation function.We will initialize the weights using a random value initializer (truncated normal) and initialize the biases to 0.1.
###Code
# The return value from F2.get_shape() needs to be cast into an int.
W3 = tf.Variable(tf.truncated_normal([int(F2.get_shape()[1]), 256], stddev=0.1))
b3 = tf.Variable(tf.constant(0.1, shape=[256]))
###Output
_____no_output_____
###Markdown
Let's construct the first hidden layer- Create a node that will multiply the weights (W3) against the outputs of the max pooling layer (F2)- Create a node that adds the bias (b3) to the above node (F2 * W3).- Pass the output of the hidden layer through a dropout layer- Pass the outputs from the dropout layer through a RELU activation function
###Code
# The third layer (first hidden layer)
Z3 = tf.add(tf.matmul(F2, W3), b3)
# Let's add the dropout layer to the output signal from the second layer
D3 = tf.nn.dropout(Z3, keep_prob=D)
# Let's add the activation function to the output signal from the dropout layer
A3 = tf.nn.relu(D3)
###Output
_____no_output_____
###Markdown
SECOND HIDDEN LAYERThe second hidden layer will have 256 inputs (outputs from first hidden layer) and 20 outputs. Each input will need a weight and each output a bias (which we will train). Each output will be passed through the linear rectifier unit (RELU) activation function.We will initialize the weights using a random value initializer (Xavier) and initialize the biases to zero.
###Code
W4 = tf.get_variable("W4", [256, 20], initializer=tf.contrib.layers.xavier_initializer(seed=1))
b4 = tf.get_variable("b4", [1, 20], initializer=tf.zeros_initializer())
###Output
_____no_output_____
###Markdown
Let's construct the second hidden layer- Create a node that will multiply the weights (W4) against the outputs of the first hidden layer (A3).- Create a node that adds the bias (b4) to the above node (W4 * A3)- Pass the outputs from the second hidden layer through a RELU activation function
###Code
# The fourth layer (second hidden layer)
Z4 = tf.add(tf.matmul(A3, W4), b4)
# Let's add the activation function to the output signal from the third layer
A4 = tf.nn.relu(Z4)
###Output
_____no_output_____
###Markdown
OUTPUT LAYERThe output layer will have 20 inputs (outputs from the second hidden layer) and 3 outputs (one for each type of fruit). Each input will need a weight and each output a bias (which we will train). The 3 outputs will be passed through a softmax activation function. We will initialize the weights using a random value initializer (Xavier) and initialize the biases to zero.
###Code
W5 = tf.get_variable("W5", [20, 3], initializer=tf.contrib.layers.xavier_initializer(seed=1))
b5 = tf.get_variable("b5", [1, 3], initializer=tf.zeros_initializer())
###Output
_____no_output_____
###Markdown
Let's construct the output layer- Create a node that will multiply the weights (W5) against the outputs of the second hidden layer (A4).- Create a node that adds the bias (b5) to the above node (A4 * W5).- Pass the outputs from the output layer through a SOFTMAX squashing function (this is handled inside the cost function below).
###Code
# The fifth layer (output layer)
Z5 = tf.add(tf.matmul(A4, W5), b5)
###Output
_____no_output_____
###Markdown
OPTIMIZERNow it's time to design our optimizer. Let's start by designing our cost function. We will use the mean value of the softmax cross entropy between the predicted labels and actual labels. This is what we want to reduce on each batch.
###Code
# to fit the tensorflow requirement for tf.nn.softmax_cross_entropy_with_logits(...,...)
cost = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits_v2(logits=Z5, labels=Y))
###Output
_____no_output_____
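###Markdown
As a quick numeric aside (not part of the tutorial), the softmax cross entropy for a single sample is simply minus the log of the probability the softmax assigns to the true class; a minimal NumPy illustration for 3 classes follows.
###Code
# Minimal illustration of softmax cross entropy for one sample with 3 classes.
import numpy as np

logits = np.array([2.0, 0.5, -1.0])
probs = np.exp(logits) / np.exp(logits).sum()   # softmax
true_class = 0
loss = -np.log(probs[true_class])               # cross entropy with a one-hot label
print(probs, loss)
###Output
_____no_output_____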
###Markdown
Let's design our optimizer. This is the method that adjusts the values of the weights and biases, based on minimizing the cost value during training.We also need to set a learning rate. This is multiplied against the gradient calculation. It's used to prevent huge swings in setting the weights, which can result in either converging at a local (instead of global) optimum, or not converging at all (diverging gradients). We will set the learning rate when we run the graph using the placeholder L.
###Code
# The learning rate for Gradient Descent algorithm
# learning_rate = 0.5
optimizer = tf.train.GradientDescentOptimizer(L).minimize(cost)
###Output
_____no_output_____
###Markdown
Run the GraphWe've built our Tensorflow graph for training our data. So, let's start training it.First, we need to call Tensorflow's global_variables_initializer() method to initialize the variables we've defined. We will create this as another node, which will be the first node we run (evaluate) in our graph.
###Code
init = tf.global_variables_initializer()
###Output
_____no_output_____
###Markdown
It's also a good idea to know how long your training takes, so let's import the time library.
###Code
import time
###Output
_____no_output_____
###Markdown
Let's set our hyperparameters.We need to set the number of epochs (that's how many times we run the training data through the neural network) and the batch size. The batch size is a small subset of the entire training set. Within each epoch, we run the training data through the network one batch at a time. After each batch, the cost is computed and backpropagated through the neural network.
###Code
import time
epochs = 25 # run 25 epochs
batch_size = 32 # for each epoch, train in batches of 32 images
number_of_images = len(X_train) # number of images in training data
# Feed Dictionary Parameters
keep_prob = 0.9 # percent of outputs to keep in dropout layer
learning_rate = 0.02 # the learning rate for gradient descent
def train():
start = time.time()
with tf.Session() as sess:
# Initialize the variables
sess.run(init)
# number of batches in an epoch
batches = number_of_images // batch_size
# run our training data through the neural network for each epoch
for epoch in range(epochs):
epoch_cost = 0
# Run the training data through the neural network
for batch in range(batches):
# Calculate the start and end indices for the next batch
begin = (batch * batch_size)
end = (batch * batch_size) + batch_size
# Get the next sequential batch from the training data
batch_xs, batch_ys = X_train[begin:end], Y_train[begin:end]
# Feed this batch through the neural network.
_, batch_cost = sess.run([optimizer, cost], feed_dict={X: batch_xs, Y: batch_ys, D: keep_prob, L: learning_rate})
epoch_cost += batch_cost
print("Epoch: ", epoch, epoch_cost / batches)
end = time.time()
print("Training Time:", end - start)
# Test the Model
# Let's select the highest percent from the softmax output per image as the prediction.
        prediction = tf.equal(tf.argmax(Z5, 1), tf.argmax(Y, 1))
# Let's create another node for calculating the accuracy
accuracy = tf.reduce_mean(tf.cast(prediction, tf.float32))
        # Now let's run our training images through the model to calculate our accuracy during training
# Note how we set the keep percent for the dropout rate to 1.0 (no dropout) when we are evaluating the accuracy.
print ("Train Accuracy:", accuracy.eval({X: X_train, Y: Y_train, D: 1.0}))
# Now let's run our test images through the model to calculate our accuracy on the test data
print ("Test Accuracy:", accuracy.eval({X: X_test, Y: Y_test, D: 1.0}))
train()
os.remove('fruits.h5')
###Output
_____no_output_____ |
codes/labs_lecture03/lab04_train_vanilla_nn/train_vanilla_nn_exercise.ipynb | ###Markdown
Lab 04 : Train vanilla neural network -- exercise Training a one-layer net on FASHION-MNIST
###Code
# For Google Colaboratory
import sys, os
if 'google.colab' in sys.modules:
# mount google drive
from google.colab import drive
drive.mount('/content/gdrive')
# find automatically the path of the folder containing "file_name" :
file_name = 'train_vanilla_nn_exercise.ipynb'
import subprocess
path_to_file = subprocess.check_output('find . -type f -name ' + str(file_name), shell=True).decode("utf-8")
path_to_file = path_to_file.replace(file_name,"").replace('\n',"")
# if previous search failed or too long, comment the previous line and simply write down manually the path below :
#path_to_file = '/content/gdrive/My Drive/CS5242_2021_codes/codes/labs_lecture03/lab04_train_vanilla_nn'
print(path_to_file)
# change current path to the folder containing "file_name"
os.chdir(path_to_file)
!pwd
import torch
import torch.nn as nn
import torch.optim as optim
from random import randint
import utils
###Output
_____no_output_____
###Markdown
Download the TRAINING SET (data+labels)
###Code
from utils import check_fashion_mnist_dataset_exists
data_path=check_fashion_mnist_dataset_exists()
train_data=torch.load(data_path+'fashion-mnist/train_data.pt')
train_label=torch.load(data_path+'fashion-mnist/train_label.pt')
print(train_data.size())
print(train_label.size())
###Output
FASHION-MNIST dataset missing - downloading...
Downloading http://fashion-mnist.s3-website.eu-central-1.amazonaws.com/train-images-idx3-ubyte.gz
Downloading http://fashion-mnist.s3-website.eu-central-1.amazonaws.com/train-images-idx3-ubyte.gz to ../../data/fashion-mnist/temp/FashionMNIST/raw/train-images-idx3-ubyte.gz
###Markdown
Download the TEST SET (data only)
###Code
test_data=torch.load(data_path+'fashion-mnist/test_data.pt')
print(test_data.size())
###Output
torch.Size([10000, 28, 28])
###Markdown
Make a one layer net class
###Code
class one_layer_net(nn.Module):
def __init__(self, input_size, output_size):
super(one_layer_net , self).__init__()
# complete here
self.linear_layer = nn.Linear(input_size, output_size, bias = True)
def forward(self, x):
x = self.linear_layer(x)# complete here
p = torch.softmax(x, dim=1)# complete here
return p
###Output
_____no_output_____
###Markdown
Build the net
###Code
net=one_layer_net(784,10)
print(net)
###Output
one_layer_net(
(linear_layer): Linear(in_features=784, out_features=10, bias=True)
)
###Markdown
Take the 4th image of the test set:
###Code
im= test_data[3]# complete here
utils.show(im)
###Output
_____no_output_____
###Markdown
And feed it to the UNTRAINED network:
###Code
print(im.size())
p = net(im.view(1, 784))# complete here
print(p)
###Output
torch.Size([28, 28])
tensor([[0.1141, 0.0700, 0.1264, 0.0827, 0.1080, 0.1134, 0.1019, 0.0713, 0.1054,
0.1068]], grad_fn=<SoftmaxBackward>)
###Markdown
Display visually the confidence scores
###Code
utils.show_prob_fashion_mnist(p)
print(train_label[0])
print(train_label[0].view(1))
###Output
tensor(9)
tensor([9])
###Markdown
Train the network (only 5000 iterations) on the train set
###Code
criterion = nn.NLLLoss()
optimizer=torch.optim.SGD(net.parameters() , lr=0.01 )
# train for 5000 iterations
for iter in range(1,5000):
# choose a random integer between 0 and 59,999
# extract the corresponding picture and label
# and reshape them to fit the network
# complete here
# complete here
# complete here
    # picking a random sample prevents the network from memorizing the order; this improves accuracy by about 5%
idx = randint(0, 60000-1)
input = train_data[idx].view(1, 784)
    # the label needs a (batch) dimension
label = train_label[idx].view(1)
# feed the input to the net
    input.requires_grad_() # for backpropagation -- we will discuss it later
# complete here
prob = net(input)
# update the weights (all the magic happens here -- we will discuss it later)
log_prob=torch.log(prob)
loss = criterion(log_prob, label)
optimizer.zero_grad()
loss.backward()
optimizer.step()
###Output
_____no_output_____
###Markdown
Take the 4th image of the test set again:
###Code
im= test_data[3]# complete here
utils.show(im)
###Output
_____no_output_____
###Markdown
Feed it to the TRAINED net:
###Code
p = net(im.view(1, 784))# complete here
print(p)
###Output
tensor([[1.8632e-02, 1.7783e-03, 2.3617e-01, 1.2829e-02, 2.8700e-02, 1.7743e-03,
6.9409e-01, 1.3615e-04, 3.0081e-03, 2.8846e-03]],
grad_fn=<SoftmaxBackward>)
###Markdown
Display visually the confidence scores
###Code
utils.show_prob_fashion_mnist(p)
###Output
_____no_output_____
###Markdown
Choose image at random from the test set and see how good/bad are the predictions
###Code
# choose a picture at random
idx=randint(0, 10000-1)
im=test_data[idx]
# display the picture
utils.show(im)
# feed it to the net and display the confidence scores
prob = net( im.view(1,784))
utils.show_prob_fashion_mnist(prob)
###Output
_____no_output_____
###Markdown
Lab 04 : Train vanilla neural network -- exercise Training a one-layer net on FASHION-MNIST
###Code
# For Google Colaboratory
import sys, os
if 'google.colab' in sys.modules:
# mount google drive
from google.colab import drive
drive.mount('/content/gdrive')
# find automatically the path of the folder containing "file_name" :
file_name = 'train_vanilla_nn_exercise.ipynb'
import subprocess
path_to_file = subprocess.check_output('find . -type f -name ' + str(file_name), shell=True).decode("utf-8")
path_to_file = path_to_file.replace(file_name,"").replace('\n',"")
# if previous search failed or too long, comment the previous line and simply write down manually the path below :
#path_to_file = '/content/gdrive/My Drive/AI6103_2020_codes/codes/labs_lecture03/lab04_train_vanilla_nn'
print(path_to_file)
# change current path to the folder containing "file_name"
os.chdir(path_to_file)
!pwd
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
from random import randint
import utils
###Output
_____no_output_____
###Markdown
Download the TRAINING SET (data+labels)
###Code
from utils import check_fashion_mnist_dataset_exists
data_path=check_fashion_mnist_dataset_exists()
train_data=torch.load(data_path+'fashion-mnist/train_data.pt')
train_label=torch.load(data_path+'fashion-mnist/train_label.pt')
print(train_data.size())
print(train_label.size())
###Output
_____no_output_____
###Markdown
Download the TEST SET (data only)
###Code
test_data=torch.load(data_path+'fashion-mnist/test_data.pt')
print(test_data.size())
###Output
_____no_output_____
###Markdown
Make a one layer net class
###Code
class one_layer_net(nn.Module):
def __init__(self, input_size, output_size):
super(one_layer_net , self).__init__()
# complete here
def forward(self, x):
x = # complete here
p = # complete here
return p
###Output
_____no_output_____
###Markdown
Build the net
###Code
net=one_layer_net(784,10)
print(net)
###Output
_____no_output_____
###Markdown
Take the 4th image of the test set:
###Code
im= # complete here
utils.show(im)
###Output
_____no_output_____
###Markdown
And feed it to the UNTRAINED network:
###Code
p = # complete here
print(p)
###Output
_____no_output_____
###Markdown
Display visually the confidence scores
###Code
utils.show_prob_fashion_mnist(p)
###Output
_____no_output_____
###Markdown
Train the network (only 5000 iterations) on the train set
###Code
criterion = nn.NLLLoss()
optimizer=torch.optim.SGD(net.parameters() , lr=0.01 )
for iter in range(1,5000):
# choose a random integer between 0 and 59,999
# extract the corresponding picture and label
# and reshape them to fit the network
# complete here
# complete here
# complete here
# feed the input to the net
    input.requires_grad_() # for backpropagation -- we will discuss it later
# complete here
# update the weights (all the magic happens here -- we will discuss it later)
log_prob=torch.log(prob)
loss = criterion(log_prob, label)
optimizer.zero_grad()
loss.backward()
optimizer.step()
###Output
_____no_output_____
###Markdown
Take the 34th image of the test set:
###Code
im= # complete here
utils.show(im)
###Output
_____no_output_____
###Markdown
Feed it to the TRAINED net:
###Code
p = # complete here
print(p)
###Output
_____no_output_____
###Markdown
Display visually the confidence scores
###Code
utils.show_prob_fashion_mnist(p)
###Output
_____no_output_____
###Markdown
Choose an image at random from the test set and see how good or bad the predictions are
###Code
# choose a picture at random
idx=randint(0, 10000-1)
im=test_data[idx]
# display the picture
utils.show(im)
# feed it to the net and display the confidence scores
prob = net( im.view(1,784))
utils.show_prob_fashion_mnist(prob)
###Output
_____no_output_____
###Markdown
Lab 04 : Train vanilla neural network -- exercise Training a one-layer net on FASHION-MNIST
###Code
# For Google Colaboratory
import sys, os
if 'google.colab' in sys.modules:
from google.colab import drive
drive.mount('/content/gdrive')
file_name = 'train_vanilla_nn_exercise.ipynb'
import subprocess
path_to_file = subprocess.check_output('find . -type f -name ' + str(file_name), shell=True).decode("utf-8")
print(path_to_file)
path_to_file = path_to_file.replace(file_name,"").replace('\n',"")
os.chdir(path_to_file)
!pwd
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
from random import randint
import utils
###Output
_____no_output_____
###Markdown
Download the TRAINING SET (data+labels)
###Code
from utils import check_fashion_mnist_dataset_exists
data_path=check_fashion_mnist_dataset_exists()
train_data=torch.load(data_path+'fashion-mnist/train_data.pt')
train_label=torch.load(data_path+'fashion-mnist/train_label.pt')
print(train_data.size())
print(train_label.size())
###Output
_____no_output_____
###Markdown
Download the TEST SET (data only)
###Code
test_data=torch.load(data_path+'fashion-mnist/test_data.pt')
print(test_data.size())
###Output
_____no_output_____
###Markdown
Make a one layer net class
###Code
class one_layer_net(nn.Module):
def __init__(self, input_size, output_size):
super(one_layer_net , self).__init__()
# complete here
def forward(self, x):
x = # complete here
p = # complete here
return p
###Output
_____no_output_____
###Markdown
Build the net
###Code
net=one_layer_net(784,10)
print(net)
###Output
_____no_output_____
###Markdown
Take the 4th image of the test set:
###Code
im= # complete here
utils.show(im)
###Output
_____no_output_____
###Markdown
And feed it to the UNTRAINED network:
###Code
p = # complete here
print(p)
###Output
_____no_output_____
###Markdown
Display visually the confidence scores
###Code
utils.show_prob_fashion_mnist(p)
###Output
_____no_output_____
###Markdown
Train the network (only 5000 iterations) on the train set
###Code
criterion = nn.NLLLoss()
optimizer=torch.optim.SGD(net.parameters() , lr=0.01 )
for iter in range(1,5000):
# choose a random integer between 0 and 59,999
# extract the corresponding picture and label
# and reshape them to fit the network
# complete here
# complete here
# complete here
# feed the input to the net
    input.requires_grad_() # for backpropagation -- we will discuss it later
# complete here
# update the weights (all the magic happens here -- we will discuss it later)
log_prob=torch.log(prob)
loss = criterion(log_prob, label)
optimizer.zero_grad()
loss.backward()
optimizer.step()
###Output
_____no_output_____
###Markdown
Take the 34th image of the test set:
###Code
im= # complete here
utils.show(im)
###Output
_____no_output_____
###Markdown
Feed it to the TRAINED net:
###Code
p = # complete here
print(p)
###Output
_____no_output_____
###Markdown
Display visually the confidence scores
###Code
utils.show_prob_fashion_mnist(p)
###Output
_____no_output_____
###Markdown
Choose an image at random from the test set and see how good or bad the predictions are
###Code
# choose a picture at random
idx=randint(0, 10000-1)
im=test_data[idx]
# display the picture
utils.show(im)
# feed it to the net and display the confidence scores
prob = net( im.view(1,784))
utils.show_prob_fashion_mnist(prob)
###Output
_____no_output_____
###Markdown
Lab 04 : Train vanilla neural network -- exercise Training a one-layer net on FASHION-MNIST
###Code
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
from random import randint
import utils
###Output
_____no_output_____
###Markdown
Download the TRAINING SET (data+labels)
###Code
from utils import check_fashion_mnist_dataset_exists
data_path=check_fashion_mnist_dataset_exists()
train_data=torch.load(data_path+'fashion-mnist/train_data.pt')
train_label=torch.load(data_path+'fashion-mnist/train_label.pt')
print(train_data.size())
print(train_label.size())
###Output
_____no_output_____
###Markdown
Download the TEST SET (data only)
###Code
test_data=torch.load(data_path+'fashion-mnist/test_data.pt')
print(test_data.size())
###Output
_____no_output_____
###Markdown
Make a one layer net class
###Code
class one_layer_net(nn.Module):
def __init__(self, input_size, output_size):
super(one_layer_net , self).__init__()
# complete here
def forward(self, x):
x = # complete here
p = # complete here
return p
###Output
_____no_output_____
###Markdown
Build the net
###Code
net=one_layer_net(784,10)
print(net)
###Output
_____no_output_____
###Markdown
Take the 4th image of the test set:
###Code
im= # complete here
utils.show(im)
###Output
_____no_output_____
###Markdown
And feed it to the UNTRAINED network:
###Code
p = # complete here
print(p)
###Output
_____no_output_____
###Markdown
Display visually the confidence scores
###Code
utils.show_prob_fashion_mnist(p)
###Output
_____no_output_____
###Markdown
Train the network (only 5000 iterations) on the train set
###Code
criterion = nn.NLLLoss()
optimizer=torch.optim.SGD(net.parameters() , lr=0.01 )
for iter in range(1,5000):
# choose a random integer between 0 and 59,999
# extract the corresponding picture and label
# and reshape them to fit the network
# complete here
# complete here
# complete here
# feed the input to the net
    input.requires_grad_() # for backpropagation -- we will discuss it later
# complete here
# update the weights (all the magic happens here -- we will discuss it later)
log_prob=torch.log(prob)
loss = criterion(log_prob, label)
optimizer.zero_grad()
loss.backward()
optimizer.step()
###Output
_____no_output_____
###Markdown
Take the 34th image of the test set:
###Code
im= # complete here
utils.show(im)
###Output
_____no_output_____
###Markdown
Feed it to the TRAINED net:
###Code
p = # complete here
print(p)
###Output
_____no_output_____
###Markdown
Display visually the confidence scores
###Code
utils.show_prob_fashion_mnist(p)
###Output
_____no_output_____
###Markdown
Choose an image at random from the test set and see how good or bad the predictions are
###Code
# choose a picture at random
idx=randint(0, 10000-1)
im=test_data[idx]
# display the picture
utils.show(im)
# feed it to the net and display the confidence scores
prob = net( im.view(1,784))
utils.show_prob_fashion_mnist(prob)
###Output
_____no_output_____
###Markdown
Lab 04 : Train vanilla neural network -- exercise Training a one-layer net on FASHION-MNIST
###Code
# For Google Colaboratory
import sys, os
if 'google.colab' in sys.modules:
from google.colab import drive
drive.mount('/content/gdrive')
file_name = 'train_vanilla_nn_exercise.ipynb'
import subprocess
path_to_file = subprocess.check_output('find . -type f -name ' + str(file_name), shell=True).decode("utf-8")
print(path_to_file)
path_to_file = path_to_file.replace(file_name,"").replace('\n',"")
os.chdir(path_to_file)
!pwd
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
from random import randint
import utils
###Output
_____no_output_____
###Markdown
Download the TRAINING SET (data+labels)
###Code
from utils import check_fashion_mnist_dataset_exists
data_path=check_fashion_mnist_dataset_exists()
train_data=torch.load(data_path+'fashion-mnist/train_data.pt')
train_label=torch.load(data_path+'fashion-mnist/train_label.pt')
print(train_data.size())
print(train_label.size())
###Output
torch.Size([60000, 28, 28])
torch.Size([60000])
###Markdown
Download the TEST SET (data only)
###Code
test_data=torch.load(data_path+'fashion-mnist/test_data.pt')
print(test_data.size())
###Output
torch.Size([10000, 28, 28])
###Markdown
Make a one layer net class
###Code
class one_layer_net(nn.Module):
def __init__(self, input_size, output_size):
super(one_layer_net , self).__init__()
# complete here
self.mylayer = nn.Linear(input_size, output_size, bias = False)
def forward(self, x):
x = self.mylayer(x)
p = F.softmax(x, dim=1)
return p
###Output
_____no_output_____
###Markdown
Build the net
###Code
net=one_layer_net(784,10)
print(net)
###Output
one_layer_net(
(mylayer): Linear(in_features=784, out_features=10, bias=False)
)
###Markdown
Take the 4th image of the test set:
###Code
im = test_data[4]
utils.show(im)
###Output
_____no_output_____
###Markdown
And feed it to the UNTRAINED network:
###Code
p = net(im.view(1,784))
print(p)
###Output
tensor([[0.0805, 0.0940, 0.0954, 0.0797, 0.0891, 0.0641, 0.1320, 0.1073, 0.0898,
0.1679]], grad_fn=<SoftmaxBackward>)
###Markdown
Display visually the confidence scores
###Code
utils.show_prob_fashion_mnist(p)
###Output
_____no_output_____
###Markdown
Train the network (only 5000 iterations) on the train set
###Code
criterion = nn.NLLLoss()
optimizer=torch.optim.SGD(net.parameters() , lr=0.01 )
for iter in range(1,5000):
# choose a random integer between 0 and 59,999
# extract the corresponding picture and label
# and reshape them to fit the network
idx = randint(0, 59999)
input = train_data[idx].view(1, 784)
label = train_label[idx].view(1)
# feed the input to the net
    input.requires_grad_() # for backpropagation -- we will discuss it later
prob = net(input)
# update the weights (all the magic happens here -- we will discuss it later)
log_prob=torch.log(prob)
loss = criterion(log_prob, label)
optimizer.zero_grad()
loss.backward()
optimizer.step()
###Output
_____no_output_____
###Markdown
Take the 34th image of the test set:
###Code
im = test_data[34]
utils.show(im)
###Output
_____no_output_____
###Markdown
Feed it to the TRAINED net:
###Code
p = net(im.view(1,784))
print(p)
###Output
tensor([[6.4732e-04, 2.1396e-05, 5.5238e-03, 5.0491e-03, 1.0980e-02, 6.1283e-02,
3.7357e-03, 4.6317e-04, 9.1226e-01, 4.1702e-05]],
grad_fn=<SoftmaxBackward>)
###Markdown
Display visually the confidence scores
###Code
utils.show_prob_fashion_mnist(p)
###Output
_____no_output_____
###Markdown
Choose an image at random from the test set and see how good or bad the predictions are
###Code
# choose a picture at random
idx=randint(0, 10000-1)
im=test_data[idx]
# display the picture
utils.show(im)
# feed it to the net and display the confidence scores
prob = net( im.view(1,784))
utils.show_prob_fashion_mnist(prob)
###Output
_____no_output_____
###Markdown
Lab 04 : Train vanilla neural network -- exercise Training a one-layer net on FASHION-MNIST
###Code
# For Google Colaboratory
import sys, os
if 'google.colab' in sys.modules:
# mount google drive
from google.colab import drive
drive.mount('/content/gdrive')
path_to_file = '/content/gdrive/My Drive/CS4243_codes/codes/labs_lecture03/lab04_train_vanilla_nn'
print(path_to_file)
# move to Google Drive directory
os.chdir(path_to_file)
!pwd
import torch
import torch.nn as nn
import torch.optim as optim
from random import randint
import utils
###Output
_____no_output_____
###Markdown
Download the TRAINING SET (data+labels)
###Code
from utils import check_fashion_mnist_dataset_exists
data_path=check_fashion_mnist_dataset_exists()
train_data=torch.load(data_path+'fashion-mnist/train_data.pt')
train_label=torch.load(data_path+'fashion-mnist/train_label.pt')
print(train_data.size())
print(train_label.size())
###Output
_____no_output_____
###Markdown
Download the TEST SET (data only)
###Code
test_data=torch.load(data_path+'fashion-mnist/test_data.pt')
print(test_data.size())
###Output
_____no_output_____
###Markdown
Make a one layer net class
###Code
class one_layer_net(nn.Module):
def __init__(self, input_size, output_size):
super(one_layer_net , self).__init__()
# complete here
def forward(self, x):
x = # complete here
p = # complete here
return p
###Output
_____no_output_____
###Markdown
Build the net
###Code
net=one_layer_net(784,10)
print(net)
###Output
_____no_output_____
###Markdown
Take the 4th image of the test set:
###Code
im= # complete here
utils.show(im)
###Output
_____no_output_____
###Markdown
And feed it to the UNTRAINED network:
###Code
p = # complete here
print(p)
###Output
_____no_output_____
###Markdown
Display visually the confidence scores
###Code
utils.show_prob_fashion_mnist(p)
###Output
_____no_output_____
###Markdown
Train the network (only 5000 iterations) on the train set
###Code
criterion = nn.NLLLoss()
optimizer=torch.optim.SGD(net.parameters() , lr=0.01 )
for iter in range(1,5000):
# choose a random integer between 0 and 59,999
# extract the corresponding picture and label
# and reshape them to fit the network
# complete here
# complete here
# complete here
# feed the input to the net
    input.requires_grad_() # for backpropagation -- we will discuss it later
# complete here
# update the weights (all the magic happens here -- we will discuss it later)
log_prob=torch.log(prob)
loss = criterion(log_prob, label)
optimizer.zero_grad()
loss.backward()
optimizer.step()
###Output
_____no_output_____
###Markdown
Take the 34th image of the test set:
###Code
im= # complete here
utils.show(im)
###Output
_____no_output_____
###Markdown
Feed it to the TRAINED net:
###Code
p = # complete here
print(p)
###Output
_____no_output_____
###Markdown
Display visually the confidence scores
###Code
utils.show_prob_fashion_mnist(p)
###Output
_____no_output_____
###Markdown
Choose an image at random from the test set and see how good or bad the predictions are
###Code
# choose a picture at random
idx=randint(0, 10000-1)
im=test_data[idx]
# display the picture
utils.show(im)
# feed it to the net and display the confidence scores
prob = net( im.view(1,784))
utils.show_prob_fashion_mnist(prob)
###Output
_____no_output_____
###Markdown
Lab 04 : Train vanilla neural network -- exercise Training a one-layer net on FASHION-MNIST
###Code
# For Google Colaboratory
import sys, os
if 'google.colab' in sys.modules:
# mount google drive
from google.colab import drive
drive.mount('/content/gdrive')
# find automatically the path of the folder containing "file_name" :
file_name = 'train_vanilla_nn_exercise.ipynb'
import subprocess
path_to_file = subprocess.check_output('find . -type f -name ' + str(file_name), shell=True).decode("utf-8")
path_to_file = path_to_file.replace(file_name,"").replace('\n',"")
# if previous search failed or too long, comment the previous line and simply write down manually the path below :
#path_to_file = '/content/gdrive/My Drive/CE7454_2020_codes/codes/labs_lecture03/lab04_train_vanilla_nn'
print(path_to_file)
# change current path to the folder containing "file_name"
os.chdir(path_to_file)
!pwd
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.optim as optim
from random import randint
import utils
###Output
_____no_output_____
###Markdown
Download the TRAINING SET (data+labels)
###Code
from utils import check_fashion_mnist_dataset_exists
data_path=check_fashion_mnist_dataset_exists()
train_data=torch.load(data_path+'fashion-mnist/train_data.pt')
train_label=torch.load(data_path+'fashion-mnist/train_label.pt')
print(train_data.size())
print(train_label.size())
###Output
_____no_output_____
###Markdown
Download the TEST SET (data only)
###Code
test_data=torch.load(data_path+'fashion-mnist/test_data.pt')
print(test_data.size())
###Output
_____no_output_____
###Markdown
Make a one layer net class
###Code
class one_layer_net(nn.Module):
def __init__(self, input_size, output_size):
super(one_layer_net , self).__init__()
# complete here
def forward(self, x):
x = # complete here
p = # complete here
return p
###Output
_____no_output_____
###Markdown
Build the net
###Code
net=one_layer_net(784,10)
print(net)
###Output
_____no_output_____
###Markdown
Take the 4th image of the test set:
###Code
im= # complete here
utils.show(im)
###Output
_____no_output_____
###Markdown
And feed it to the UNTRAINED network:
###Code
p = # complete here
print(p)
###Output
_____no_output_____
###Markdown
Display visually the confidence scores
###Code
utils.show_prob_fashion_mnist(p)
###Output
_____no_output_____
###Markdown
Train the network (only 5000 iterations) on the train set
###Code
criterion = nn.NLLLoss()
optimizer=torch.optim.SGD(net.parameters() , lr=0.01 )
for iter in range(1,5000):
# choose a random integer between 0 and 59,999
# extract the corresponding picture and label
# and reshape them to fit the network
# complete here
# complete here
# complete here
# feed the input to the net
    input.requires_grad_() # for backpropagation -- we will discuss it later
# complete here
# update the weights (all the magic happens here -- we will discuss it later)
log_prob=torch.log(prob)
loss = criterion(log_prob, label)
optimizer.zero_grad()
loss.backward()
optimizer.step()
###Output
_____no_output_____
###Markdown
Take the 34th image of the test set:
###Code
im= # complete here
utils.show(im)
###Output
_____no_output_____
###Markdown
Feed it to the TRAINED net:
###Code
p = # complete here
print(p)
###Output
_____no_output_____
###Markdown
Display visually the confidence scores
###Code
utils.show_prob_fashion_mnist(p)
###Output
_____no_output_____
###Markdown
Choose an image at random from the test set and see how good or bad the predictions are
###Code
# choose a picture at random
idx=randint(0, 10000-1)
im=test_data[idx]
# display the picture
utils.show(im)
# feed it to the net and display the confidence scores
prob = net( im.view(1,784))
utils.show_prob_fashion_mnist(prob)
###Output
_____no_output_____
###Markdown
Lab 04 : Train vanilla neural network -- exercise Training a one-layer net on FASHION-MNIST
###Code
# For Google Colaboratory
import sys, os
if 'google.colab' in sys.modules:
# mount google drive
from google.colab import drive
drive.mount('/content/gdrive')
# find automatically the path of the folder containing "file_name" :
file_name = 'train_vanilla_nn_exercise.ipynb'
import subprocess
path_to_file = subprocess.check_output('find . -type f -name ' + str(file_name), shell=True).decode("utf-8")
path_to_file = path_to_file.replace(file_name,"").replace('\n',"")
# if previous search failed or too long, comment the previous line and simply write down manually the path below :
#path_to_file = '/content/gdrive/My Drive/CS5242_2021_codes/codes/labs_lecture03/lab04_train_vanilla_nn'
print(path_to_file)
# change current path to the folder containing "file_name"
os.chdir(path_to_file)
!pwd
import torch
import torch.nn as nn
import torch.optim as optim
from random import randint
import utils
###Output
_____no_output_____
###Markdown
Download the TRAINING SET (data+labels)
###Code
from utils import check_fashion_mnist_dataset_exists
data_path=check_fashion_mnist_dataset_exists()
train_data=torch.load(data_path+'fashion-mnist/train_data.pt')
train_label=torch.load(data_path+'fashion-mnist/train_label.pt')
print(train_data.size())
print(train_label.size())
###Output
_____no_output_____
###Markdown
Download the TEST SET (data only)
###Code
test_data=torch.load(data_path+'fashion-mnist/test_data.pt')
print(test_data.size())
###Output
_____no_output_____
###Markdown
Make a one layer net class
###Code
class one_layer_net(nn.Module):
def __init__(self, input_size, output_size):
super(one_layer_net , self).__init__()
# complete here
def forward(self, x):
x = # complete here
p = # complete here
return p
###Output
_____no_output_____
###Markdown
Build the net
###Code
net=one_layer_net(784,10)
print(net)
###Output
_____no_output_____
###Markdown
Take the 4th image of the test set:
###Code
im= # complete here
utils.show(im)
###Output
_____no_output_____
###Markdown
And feed it to the UNTRAINED network:
###Code
p = # complete here
print(p)
###Output
_____no_output_____
###Markdown
Display visually the confidence scores
###Code
utils.show_prob_fashion_mnist(p)
###Output
_____no_output_____
###Markdown
Train the network (only 5000 iterations) on the train set
###Code
criterion = nn.NLLLoss()
optimizer=torch.optim.SGD(net.parameters() , lr=0.01 )
for iter in range(1,5000):
# choose a random integer between 0 and 59,999
# extract the corresponding picture and label
# and reshape them to fit the network
# complete here
# complete here
# complete here
# feed the input to the net
    input.requires_grad_() # for backpropagation -- we will discuss it later
# complete here
# update the weights (all the magic happens here -- we will discuss it later)
log_prob=torch.log(prob)
loss = criterion(log_prob, label)
optimizer.zero_grad()
loss.backward()
optimizer.step()
###Output
_____no_output_____
###Markdown
Take the 34th image of the test set:
###Code
im= # complete here
utils.show(im)
###Output
_____no_output_____
###Markdown
Feed it to the TRAINED net:
###Code
p = # complete here
print(p)
###Output
_____no_output_____
###Markdown
Display visually the confidence scores
###Code
utils.show_prob_fashion_mnist(p)
###Output
_____no_output_____
###Markdown
Choose an image at random from the test set and see how good or bad the predictions are
###Code
# choose a picture at random
idx=randint(0, 10000-1)
im=test_data[idx]
# display the picture
utils.show(im)
# feed it to the net and display the confidence scores
prob = net( im.view(1,784))
utils.show_prob_fashion_mnist(prob)
###Output
_____no_output_____
###Markdown
Lab 04 : Train vanilla neural network -- exercise Training a one-layer net on FASHION-MNIST
###Code
# For Google Colaboratory
import sys, os
if 'google.colab' in sys.modules:
# mount google drive
from google.colab import drive
drive.mount('/content/gdrive')
# find automatically the path of the folder containing "file_name" :
file_name = 'train_vanilla_nn_exercise.ipynb'
import subprocess
path_to_file = subprocess.check_output('find . -type f -name ' + str(file_name), shell=True).decode("utf-8")
path_to_file = path_to_file.replace(file_name,"").replace('\n',"")
# if previous search failed or too long, comment the previous line and simply write down manually the path below :
#path_to_file = '/content/gdrive/My Drive/CS5242_2021_codes/codes/labs_lecture03/lab04_train_vanilla_nn'
print(path_to_file)
# change current path to the folder containing "file_name"
os.chdir(path_to_file)
!pwd
import torch
import torch.nn as nn
import torch.optim as optim
from random import randint
import utils
###Output
_____no_output_____
###Markdown
Download the TRAINING SET (data+labels)
###Code
from utils import check_fashion_mnist_dataset_exists
data_path=check_fashion_mnist_dataset_exists()
train_data=torch.load(data_path+'fashion-mnist/train_data.pt')
train_label=torch.load(data_path+'fashion-mnist/train_label.pt')
print(train_data.size())
print(train_label.size())
###Output
FASHION-MNIST dataset missing - downloading...
Downloading http://fashion-mnist.s3-website.eu-central-1.amazonaws.com/train-images-idx3-ubyte.gz
Downloading http://fashion-mnist.s3-website.eu-central-1.amazonaws.com/train-images-idx3-ubyte.gz to ../../data/fashion-mnist/temp/FashionMNIST/raw/train-images-idx3-ubyte.gz
###Markdown
Download the TEST SET (data only)
###Code
test_data=torch.load(data_path+'fashion-mnist/test_data.pt')
print(test_data.size())
###Output
torch.Size([10000, 28, 28])
###Markdown
Make a one layer net class
###Code
class one_layer_net(nn.Module):
def __init__(self, input_size, output_size):
super(one_layer_net , self).__init__()
self.mylayer = nn.Linear(input_size, output_size) # complete here
def forward(self, x):
x = self.mylayer(x)# complete here
p = torch.softmax(x, dim=1)# complete here
return p
###Output
_____no_output_____
###Markdown
Build the net
###Code
net=one_layer_net(784,10)
print(net)
###Output
one_layer_net(
(mylayer): Linear(in_features=784, out_features=10, bias=True)
)
###Markdown
Take the 4th image of the test set:
###Code
im= test_data[4]# complete here
utils.show(im)
###Output
_____no_output_____
###Markdown
And feed it to the UNTRAINED network:
###Code
p = net(im.view(1,-1))# complete here
print(p)
###Output
tensor([[0.1235, 0.1056, 0.1040, 0.0929, 0.1301, 0.0836, 0.0945, 0.1017, 0.0827,
0.0814]], grad_fn=<SoftmaxBackward>)
###Markdown
Display visually the confidence scores
###Code
utils.show_prob_fashion_mnist(p)
###Output
_____no_output_____
###Markdown
Train the network (only 5000 iterations) on the train set
###Code
criterion = nn.NLLLoss()
optimizer=torch.optim.SGD(net.parameters() , lr=0.01 )
for iter in range(1,5000):
# choose a random integer between 0 and 59,999
# extract the corresponding picture and label
# and reshape them to fit the network
idx = randint(0, 60000-1)
input = train_data[idx].view(1, -1)
label = train_label[idx].view(1)
# complete here
# complete here
# feed the input to the net
    input.requires_grad_() # for backpropagation -- we will discuss it later
prob = net(input)
# update the weights (all the magic happens here -- we will discuss it later)
log_prob=torch.log(prob)
loss = criterion(log_prob, label)
optimizer.zero_grad()
loss.backward()
optimizer.step()
###Output
_____no_output_____
###Markdown
Take the 34th image of the test set:
###Code
im= test_data[34]
utils.show(im)
###Output
_____no_output_____
###Markdown
Feed it to the TRAINED net:
###Code
p = net(im.view(1,-1)) # complete here
print(p)
###Output
tensor([[1.2255e-04, 9.3779e-06, 6.5142e-02, 7.0174e-03, 9.8461e-04, 5.2184e-03,
8.2921e-04, 1.0338e-04, 9.2038e-01, 1.9643e-04]],
grad_fn=<SoftmaxBackward>)
###Markdown
Display visually the confidence scores
###Code
utils.show_prob_fashion_mnist(p)
###Output
_____no_output_____
###Markdown
Choose an image at random from the test set and see how good or bad the predictions are
###Code
# choose a picture at random
idx=randint(0, 10000-1)
im=test_data[idx]
# display the picture
utils.show(im)
# feed it to the net and display the confidence scores
prob = net( im.view(1,784))
utils.show_prob_fashion_mnist(prob)
###Output
_____no_output_____
###Markdown
Lab 04 : Train vanilla neural network -- exercise Training a one-layer net on FASHION-MNIST
###Code
# For Google Colaboratory
import sys, os
if 'google.colab' in sys.modules:
# mount google drive
from google.colab import drive
drive.mount('/content/gdrive')
path_to_file = '/content/gdrive/My Drive/CS4243_codes/codes/labs_lecture03/lab04_train_vanilla_nn'
print(path_to_file)
# move to Google Drive directory
os.chdir(path_to_file)
!pwd
import torch
import torch.nn as nn
import torch.optim as optim
from random import randint
import utils
###Output
_____no_output_____
###Markdown
Download the TRAINING SET (data+labels)
###Code
from utils import check_fashion_mnist_dataset_exists
data_path=check_fashion_mnist_dataset_exists()
train_data=torch.load(data_path+'fashion-mnist/train_data.pt')
train_label=torch.load(data_path+'fashion-mnist/train_label.pt')
print(train_data.size())
print(train_label.size())
###Output
torch.Size([60000, 28, 28])
torch.Size([60000])
###Markdown
Download the TEST SET (data only)
###Code
test_data=torch.load(data_path+'fashion-mnist/test_data.pt')
print(test_data.size())
###Output
torch.Size([10000, 28, 28])
###Markdown
Make a one layer net class
###Code
class one_layer_net(nn.Module):
def __init__(self, input_size, output_size):
super(one_layer_net , self).__init__()
# complete here
        # a single fully-connected (MLP) layer
self.linear_layer = nn.Linear( input_size, output_size , bias=False)
def forward(self, x):
x = self.linear_layer(x) # complete here
        # use softmax as the activation for the output
        # note: apply softmax over all elements of each row (dim=1)
        # applying it over the columns would be incorrect
p = torch.softmax(x, dim=1) # complete here
return p
###Output
_____no_output_____
###Markdown
Build the net
###Code
net=one_layer_net(784,10)
print(net)
###Output
one_layer_net(
(linear_layer): Linear(in_features=784, out_features=10, bias=False)
)
###Markdown
Take the 4th image of the test set:
###Code
im = test_data[4] # complete here
utils.show(im)
###Output
_____no_output_____
###Markdown
And feed it to the UNTRAINED network:
###Code
p = net( im.view(1,784) ) # complete here
print(p)
###Output
tensor([[0.0978, 0.1333, 0.0657, 0.1311, 0.0667, 0.1410, 0.0889, 0.0842, 0.0823,
0.1090]], grad_fn=<SoftmaxBackward>)
###Markdown
Display visually the confidence scores
###Code
utils.show_prob_fashion_mnist(p)
###Output
_____no_output_____
###Markdown
Train the network (only 5000 iterations) on the train set
###Code
criterion = nn.NLLLoss()
optimizer=torch.optim.SGD(net.parameters() , lr=0.01 )
for iter in range(1,5000):
# choose a random integer between 0 and 59,999
# extract the corresponding picture and label
# and reshape them to fit the network
# complete here
idx=randint(0, 60000-1)
# complete here
input=train_data[idx].view(1,784)
# complete here
label=train_label[idx].view(1)
# feed the input to the net
    input.requires_grad_() # for backpropagation -- we will discuss it later
# complete here
prob=net(input)
# update the weights (all the magic happens here -- we will discuss it later)
log_prob=torch.log(prob)
loss = criterion(log_prob, label)
optimizer.zero_grad()
loss.backward()
optimizer.step()
###Output
_____no_output_____
###Markdown
Take the 34th image of the test set:
###Code
im= test_data[34] # complete here
utils.show(im)
###Output
_____no_output_____
###Markdown
Feed it to the TRAINED net:
###Code
p = net( im.view(1,784)) # complete here
print(p)
###Output
tensor([[1.6817e-04, 1.3209e-05, 1.4200e-02, 1.8699e-03, 4.3869e-03, 3.4380e-02,
7.6644e-03, 2.3668e-04, 9.3696e-01, 1.1896e-04]],
grad_fn=<SoftmaxBackward>)
###Markdown
Display visually the confidence scores
###Code
utils.show_prob_fashion_mnist(p)
###Output
_____no_output_____
###Markdown
Choose an image at random from the test set and see how good or bad the predictions are
###Code
# choose a picture at random
idx=randint(0, 10000-1)
im=test_data[idx]
# display the picture
utils.show(im)
# feed it to the net and display the confidence scores
prob = net( im.view(1,784))
utils.show_prob_fashion_mnist(prob)
###Output
_____no_output_____ |
DiscTest.ipynb | ###Markdown
CS 477 HW 7: Gradient Descent on Neural Networks (Chris Tralie)

This is a simple test to make sure our neural network engine is able to separate the inside of a circle from the outside, which would not work with logistic regression over the two dimensions.
###Code
%load_ext autoreload
%autoreload 2
import numpy as np
import matplotlib.pyplot as plt
import matplotlib.animation as animation
import IPython.display as ipd
from neuralnet import *
from layers import *
from losses import *
###Output
_____no_output_____
###Markdown
First, let's generate the data
###Code
def get_disc_points(N):
X = np.random.randn(N, 2)
d = np.sqrt(np.sum(X**2, axis=1))
ys = np.array(d < 1, dtype=float)
X[ys == 0, :] *= 1.1 # Put a small gap between inner and outer points
return X, ys
np.random.seed(0)
X, ys = get_disc_points(1000)
plt.scatter(X[ys==0, 0], X[ys==0, 1])
plt.scatter(X[ys==1, 0], X[ys==1, 1])
plt.axis("equal")
plt.title("Intial Disc Data")
###Output
_____no_output_____
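###Markdown
As a quick sanity check of the claim above, a plain logistic regression on the raw 2D coordinates cannot do much better than predicting the majority class, because the two classes are radially symmetric. The cell below is an optional aside; it assumes scikit-learn is available and is not used elsewhere in this notebook.
###Code
# Optional: a linear model struggles on the raw coordinates,
# while adding the radius as an extra feature makes the classes separable.
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

clf = LogisticRegression()
print("Raw coordinates:", cross_val_score(clf, X, ys, cv=5).mean())

r = np.sqrt(np.sum(X**2, axis=1, keepdims=True))
X_aug = np.concatenate([X, r], axis=1)
print("With a radius feature:", cross_val_score(clf, X_aug, ys, cv=5).mean())
###Output
_____no_output_____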
###Markdown
Here's some code we'll use to plot the second to last layer with two neurons, which will indicate how well the last linear separator can do
###Code
def plot_2d_separator_predictions(X1, X2, a, b, c, draw_lines=False):
"""
Plot the performance of a 2D linear separator on a set of binary labeled data.
This is applicable to any layer which takes two neurons to one
Parameters
----------
X1: ndarray(N1, 2)
Input coordinates for class 1
X2: ndarray(N, 2)
Input coordinates for class 2
a: float
Weight for first coordinate
b: float
Weight for second coordinate
c: float
Bias
"""
plot = [plt.scatter(X1[:, 0], X1[:, 1], 1, c='C0')]
plot.append(plt.scatter(X2[:, 0], X2[:, 1], 1, c='C1'))
X = np.concatenate((X1, X2), axis=0)
xmin = np.min(X, axis=0)
xmax = np.max(X, axis=0)
iv = max(xmax[1]-xmin[1], xmax[0]-xmin[0])
p0 = -c*np.array([a, b])/(a**2 + b**2)
v = np.array([-b, a])
mag = np.sqrt(np.sum(v**2))
wrong = 0
if mag > 0:
v = v/mag
p = p0 - 2*iv*v
q = p0 + 2*iv*v
plot += plt.plot([p[0], q[0]], [p[1], q[1]], c='k', linestyle='--')
rg = xmax[0] - xmin[0]
plt.xlim([xmin[0]-0.2*rg, xmax[0]+0.2*rg])
rg = xmax[1] - xmin[1]
plt.ylim([xmin[1]-0.2*rg, xmax[1]+0.2*rg])
wrong = 0
for x in X1:
proj = p0 + np.sum(v*(x-p0))*v
y = a*x[0] + b*x[1] + c
if draw_lines:
plot += plt.plot([x[0], proj[0]], [x[1], proj[1]], c='C0', linewidth=1, linestyle='--')
if y > 0:
plot.append(plt.scatter([x[0]], [x[1]], 1, c='C0', marker='x'))
wrong += 1
for x in X2:
proj = p0 + np.sum(v*(x-p0))*v
if draw_lines:
plot += plt.plot([x[0], proj[0]], [x[1], proj[1]], c='C1', linewidth=1, linestyle='--')
y = a*x[0] + b*x[1] + c
if y < 0:
plot.append(plt.scatter([x[0]], [x[1]], 1, c='C1', marker='x'))
wrong += 1
total = X1.shape[0] + X2.shape[0]
plt.xticks([])
plt.yticks([])
plot.append(plt.text(0.5, 1.01,
"{} / {} Correct ({:.1f}%)".format(total-wrong, total, 100*(total-wrong)/total),
horizontalalignment='center', verticalalignment='bottom',
transform=plt.gca().transAxes, size='xx-large'))
return plot
###Output
_____no_output_____
###Markdown
Finally, let's set up a neural network and train it! We'll put 100 neurons in the first hidden layer, followed by 2 neurons, followed by a single neuron with the logistic activation. Since the last hidden layer has 2 neurons, we can plot the coordinates of the data mapped through it to see how well it's being separated.
###Code
plot_animation=True
np.random.seed(3)
nn = NeuralNet(2, logistic_est_loss_deriv) # Input is in 2 dimensions, and we want to use logistic loss
nn.add_layer(100, leaky_relu, leaky_relu_deriv) # First layer is 100 dimensions with a leaky ReLU
nn.add_layer(2, leaky_relu, leaky_relu_deriv) # Second layer is 2 dimensions with a leaky ReLU
nn.add_layer(1, logistic, None) # Last layer is the logistic function. Its derivative is handled separately
n_iters = 30
alpha = 0.001
losses = []
fig = plt.figure(figsize=(12, 6))
frames = []
for it in range(n_iters):
loss = 0
X1 = []
X2 = []
for k in range(X.shape[0]):
y_est = nn.forward(X[k, :])
loss += logistic_est_loss(y_est, ys[k])
if ys[k] == 0:
X1.append(nn.h[2])
else:
X2.append(nn.h[2])
print("Iteration {} Loss {}".format(it, loss))
losses.append(loss)
X1 = np.array(X1)
X2 = np.array(X2)
# Plot Result
# Get the 2D linear separator from the weights/bias in the last layer
a, b = nn.Ws[-1].flatten()
c = nn.bs[-1][0]
plt.subplot(121)
plot = plot_2d_separator_predictions(X1, X2, a, b, c)
plt.gca().set_facecolor("white")
plt.subplot(122)
plot += plt.plot(losses, c='C0')
plt.xlabel("Iteration")
plt.ylabel("Loss")
plt.xlim([0, n_iters])
plt.ylim([0, np.max(losses)])
plot.append(plt.text(0.5, 1.01, "Iteration {} Loss {:.3f}".format(it, loss[0]),
horizontalalignment='center', verticalalignment='bottom',
transform=plt.gca().transAxes, size='xx-large'))
plt.gca().set_facecolor("white")
frames.append(plot)
# Stochastic gradient descent
for k in np.random.permutation(X.shape[0]):
nn.backprop_descent(X[k, :], ys[k], alpha)
ani = animation.ArtistAnimation(fig, frames, interval=200, blit=True, repeat_delay=1000)
ani.save("result.gif", dpi=200)
###Output
_____no_output_____ |
notebooks/classification_loki_deepLearning_resNet-selfTrained-101.ipynb | ###Markdown
ResNet - self-trained
###Code
# Set the path to the root folder containing the training data.
# If you want to have access to the data please contact ...
basePath = ''
imgDir = basePath + 'images/Trainingsdatensatz_cropped_scaled/'
trainTsv = basePath + 'tsvDatein/final_dataset_splitting/train.tsv'
validTsv = basePath + 'tsvDatein/final_dataset_splitting/val.tsv'
testTsv = basePath + 'tsvDatein/final_dataset_splitting/test.tsv'
whitelist = basePath + 'whitelist/whitelist1.txt'
saveDir = basePath + 'experiments/ResNet_selfTrained_101/'
imgShape = (1000,1000)
num_classes = 11
batch_size = 1
max_epochs = 40
preTrained = False
#Imports
import csv
from keras_applications.resnet import ResNet101, preprocess_input
import keras
import keras_applications
keras_applications.set_keras_submodules(
backend=keras.backend,
layers=keras.layers,
models=keras.models,
utils=keras.utils
)
from keras.utils import to_categorical, Sequence
from keras.layers import Dense, GlobalAveragePooling2D, Dropout
from keras.models import Model
import numpy as np
from skimage import io
from keras.models import model_from_json
from sklearn.metrics import confusion_matrix
from sklearn.metrics import classification_report
import matplotlib.pyplot as plt
from pandas import DataFrame
from contextlib import redirect_stdout
import keras
import pydot
import pydotplus
from keras.utils.vis_utils import plot_model
from IPython.display import SVG
from keras.utils.vis_utils import model_to_dot
from keras.models import load_model
from keras.callbacks import EarlyStopping
from keras.callbacks import ModelCheckpoint
from keras.callbacks import CSVLogger
import time
import random
# Copied from the mglearn library.
def heatmap(values, xlabel, ylabel, xticklabels, yticklabels, cmap=None,
vmin=None, vmax=None, ax=None, fmt="%0.2f"):
if ax is None:
ax = plt.gca()
# plot the mean cross-validation scores
img = ax.pcolor(values, cmap=cmap, vmin=vmin, vmax=vmax)
img.update_scalarmappable()
ax.set_xlabel(xlabel)
ax.set_ylabel(ylabel)
ax.set_xticks(np.arange(len(xticklabels)) + 0.5)
ax.set_yticks(np.arange(len(yticklabels)) + 0.5)
ax.set_xticklabels(xticklabels, rotation='vertical')
ax.set_yticklabels(yticklabels)
ax.set_aspect(1)
for p, color, value in zip(img.get_paths(), img.get_facecolors(),
img.get_array()):
x, y = p.vertices[:-2, :].mean(0)
if np.mean(color[:3]) > 0.5:
c = 'k'
else:
c = 'w'
ax.text(x, y, fmt % value, color=c, ha="center", va="center", fontsize=12)
return img
# https://stackoverflow.com/a/43186440
class TimeHistory(keras.callbacks.Callback):
def on_train_begin(self, logs={}):
self.times = []
def on_epoch_begin(self, batch, logs={}):
self.epoch_time_start = time.time()
def on_epoch_end(self, batch, logs={}):
self.times.append(time.time() - self.epoch_time_start)
path = saveDir + "history/timeHistory.csv"
with open(path,'a') as fd:
fd.write(str(time.time()-self.epoch_time_start) + "\n")
def readTargetList(tsv, target_map):
# Read target list.
with open(tsv) as f:
reader = csv.reader(f,delimiter='\t')
target = []
imgId = []
next(reader)
i = 0
for class_name in reader:
if class_name[14] == "":
                print(class_name)
continue
            target.append([value for key, value in target_map.items() if key in class_name[14]]) # only a substring match matters
imgId.append(class_name[0])
i = i+1
return target, imgId
def getImgPaths(imgId):
filenames = (str(idx) + '.jpg' for idx in imgId)
return [imgDir + filename for filename in filenames]
def bounds(old_size, new_size):
if new_size >= old_size:
return (0, old_size)
else:
diff = old_size - new_size
low = diff // 2 + diff % 2
high = low + new_size
return (low, high)
def crop_image(img, shape):
left, right = bounds(img.shape[0], shape[0])
top, bottom = bounds(img.shape[1], shape[1])
img = img[left:right, top:bottom]
img = img[:, :,np.newaxis]
return img
# Sequence class for training using lazy batches of images.
# See example at https://keras.io/utils/#sequence
#
# `X_set` is list of path to the images, and `y_set` are the associated classes.
#
class LokiImageSequence(Sequence):
def __init__(self, X_set, y_set, batch_size, image_shape):
self._X = list(X_set)
self._y = list(y_set)
self._batch_size = batch_size
self._image_shape = image_shape
def __len__(self):
return int(np.ceil(len(self._X) / float(self._batch_size)))
def __getitem__(self, idx):
batch_X = self._X[idx * self._batch_size:(idx + 1) * self._batch_size]
batch_y = self._y[idx * self._batch_size:(idx + 1) * self._batch_size]
x = []
for file_name in batch_X:
z = io.imread(file_name)
t = crop_image(z,self._image_shape)
d = t[:,:,0]
b = np.repeat(d[..., np.newaxis], 3, -1)
x.append(b)
x = preprocess_input(np.array(x))
return(np.array(x), np.array(batch_y, dtype=np.int8))
# Data preparation
# read the tsv files and build the data generators
with open(whitelist) as f:
inverse_target_map = dict(enumerate(f))
target_map = {v[:-1]: k for (k, v) in inverse_target_map.items()}
num_classes=(1 + max(inverse_target_map))
trainTarget, trainImgId = readTargetList(trainTsv, target_map)
validTarget, validImgId = readTargetList(validTsv, target_map)
testTarget, testImgId = readTargetList(testTsv, target_map)
# shuffle
combined = list(zip(trainTarget, trainImgId))
random.shuffle(combined)
trainTarget[:], trainImgId[:] = zip(*combined)
# shuffle
combined = list(zip(validTarget, validImgId))
random.shuffle(combined)
validTarget[:], validImgId[:] = zip(*combined)
# shuffle
combined = list(zip(testTarget, testImgId))
random.shuffle(combined)
testTarget[:], testImgId[:] = zip(*combined)
# For testing: limit the number of images
#trainTarget = trainTarget[:10]
#trainImgId = trainImgId[:10]
#validTarget = validTarget[:10]
#validImgId = validImgId[:10]
#testTarget = testTarget[:10]
#testImgId = testImgId[:10]
# image file paths
X_trainImgPath = getImgPaths(trainImgId)
X_validImgPath = getImgPaths(validImgId)
X_testImgPath = getImgPaths(testImgId)
# Convert class vectors to binary class matrices (format required by Keras).
y_train = to_categorical(trainTarget, num_classes)
y_valid = to_categorical(validTarget, num_classes)
y_test = to_categorical(testTarget, num_classes)
# Constructing sequences
train_seq = LokiImageSequence(X_trainImgPath, y_train, batch_size, imgShape)
valid_seq = LokiImageSequence(X_validImgPath, y_valid, batch_size, imgShape)
test_seq = LokiImageSequence(X_testImgPath, y_test, batch_size, imgShape)
print("Length trainingsset: " + str(len(y_train)))
print("Length validationset: " + str(len(y_valid)))
print("Length testset: " + str(len(y_test)))
print("Number of classes: " + str(num_classes))
# Model customization
if preTrained:
base_model = ResNet101(weights='imagenet', include_top=False, input_shape=(1000,1000,3))
else:
base_model = ResNet101(weights=None, include_top=False, input_shape=(1000,1000,3))
x = base_model.output
x = GlobalAveragePooling2D(name='avg_pool')(x)
x = Dropout(0.4)(x)
predictions = Dense(num_classes, activation='softmax')(x)
model = Model(inputs=base_model.input, outputs=predictions)
# Freeze all layers
if preTrained:
for layer in base_model.layers:
layer.trainable = False
# adjust if necessary
model.compile(optimizer='rmsprop',
loss='categorical_crossentropy',
metrics=['accuracy'])
csv_logger_callback = CSVLogger(saveDir + "history/model_history_log.csv", append=True)
checkpointEveryEpoch_callback = ModelCheckpoint(saveDir + "modelFiles/saved-model-{epoch:02d}-{val_acc:.2f}.hdf5", monitor='val_acc', verbose=1, save_best_only=False, mode='max')
time_callback = TimeHistory()
earlyStopping_callback = EarlyStopping(monitor='val_loss', mode='min', verbose=1, patience=20, min_delta = 0.01)
modelCheckpoint_callback = ModelCheckpoint(saveDir + 'modelFiles/best_model.h5', monitor='val_loss', verbose=1)
callback_list = [time_callback, earlyStopping_callback, modelCheckpoint_callback, checkpointEveryEpoch_callback,csv_logger_callback]
# Transfer learning
history = model.fit_generator(train_seq,
epochs = max_epochs,
validation_data = valid_seq,
callbacks = callback_list)
# Save the model and weights
model_json = model.to_json()
with open(saveDir + "modelFiles/model.json", "w") as json_file:
json_file.write(model_json)
model.save_weights(saveDir + "modelFiles/weights.h5")
# load a saved model
model = load_model(saveDir + 'modelFiles/best_model.h5')
###Output
_____no_output_____
###Markdown
history
###Code
# convert the history.history dict to a pandas DataFrame:
hist_df = DataFrame(history.history)
# save to json:
hist_json_file = saveDir + 'history/history.json'
with open(hist_json_file, mode='w') as f:
hist_df.to_json(f)
# summarize history for accuracy
plt.figure(figsize=(10,5))
plt.plot(history.history['acc'])
plt.plot(history.history['val_acc'])
plt.title('model accuracy')
plt.ylabel('accuracy')
plt.xlabel('epoch')
plt.legend(['train', 'validation'], loc='upper left')
plt.savefig(saveDir + 'history/accuracy.svg', transparent = True, bbox_inches='tight')
plt.show()
# summarize history for loss
plt.figure(figsize=(10,5))
plt.plot(history.history['loss'])
plt.plot(history.history['val_loss'])
plt.title('model loss')
plt.ylabel('loss')
plt.xlabel('epoch')
plt.legend(['train', 'validation'], loc='upper left')
plt.savefig(saveDir + 'history/loss.svg', transparent = True, bbox_inches='tight')
plt.show()
keras.utils.vis_utils.pydot = pydot
plot_model(model, to_file=saveDir+'model_architecture_charts/model_small.png')
def plot_keras_model_verbose(model, show_shapes=True, show_layer_names=True):
return SVG(model_to_dot(model, show_shapes=show_shapes,
show_layer_names=show_layer_names).create(prog='dot',format='svg'))
svg = plot_keras_model_verbose(model, show_shapes=True, show_layer_names=False)
with open(saveDir + "model_architecture_charts/model_verbose.svg", "w") as txt:
txt.write(svg.data)
svg
# Save mode summary
with open(saveDir + 'model_architecture_charts/model_summary.txt', 'w') as f:
with redirect_stdout(f):
model.summary()
###Output
_____no_output_____
###Markdown
Training duration
###Code
times = time_callback.times
df = DataFrame(times)
df.to_csv (saveDir + r'trainingsDuration/durationPerEpoch.csv')
sum = df.sum()
sum.to_csv(saveDir + r'trainingsDuration/durationSum.csv')
print(sum)
avg = df.mean()
avg.to_csv(saveDir + r'trainingsDuration/durationAvgPerEpoch.csv')
print(avg)
predValid = model.predict_generator(valid_seq)
predTest = model.predict_generator(test_seq)
loss, acc = model.evaluate_generator(test_seq)
###Output
_____no_output_____
###Markdown
Validationset
###Code
trueClassNum=[]
for x in y_valid:
ind = np.array(x).argmax()
y = ind
trueClassNum.append(y)
trueClassName = []
for f in trueClassNum:
trueClassName.append(inverse_target_map[f][:-1])
predMultilabelAll=[]
predProbabilityAll = []
counter = 0
for x in predValid:
maxProb = x.max()
predProbabilityAll.append(maxProb)
ind = x.argmax()
y = [0]*len(x)
y[ind]=1
predMultilabelAll.append(y)
counter +=1
# Convert to int
predClassNum=[]
for x in predMultilabelAll:
ind = np.array(x).argmax()
y = ind
predClassNum.append(y)
# Convert to name
predClassName = []
for f in predClassNum:
predClassName.append(inverse_target_map[f][:-1])
cl = classification_report(trueClassName, predClassName, output_dict=True)
df = DataFrame(cl).transpose()
df.to_csv (saveDir + r'classification_reports/valid.csv', index = True, header=True)
df
plt.figure(figsize=(15,15))
cm = confusion_matrix(trueClassName, predClassName)
df = DataFrame(cm)
df.to_csv (saveDir + r'confusion_matrix/valid_total.csv', index = True, header=True)
hm = heatmap(
cm, xlabel='Predicted label',
ylabel='True label', xticklabels=np.unique(trueClassName),
yticklabels=np.unique(trueClassName), cmap=plt.cm.gray_r, fmt="%d")
plt.title("Total values \n")
plt.colorbar(hm)
plt.gca().invert_yaxis()
plt.savefig(saveDir + 'confusion_matrix/valid_total.svg', transparent = True, bbox_inches='tight')
plt.figure(figsize=(15,15))
cm = confusion_matrix(trueClassName, predClassName)
cm = cm.astype('float') / cm.sum(axis=1)[:, np.newaxis]
df = DataFrame(cm)
df.to_csv (saveDir + r'confusion_matrix/valid_normalised.csv', index = True, header=True)
plt.figure(figsize=(20,20))
cm = heatmap(
cm, xlabel='Predicted label',
ylabel='True label', xticklabels=np.unique(trueClassName),
yticklabels=np.unique(trueClassName), cmap=plt.cm.gray_r)
plt.title("Normalised values\n")
plt.colorbar(cm)
plt.gca().invert_yaxis()
plt.savefig(saveDir + 'confusion_matrix/valid_normalised.svg', transparent = True, bbox_inches='tight')
# Save pred and prob to tsv
df = DataFrame(list(zip(validImgId,trueClassName, predClassName,predProbabilityAll )),
columns =['ImgId', 'True', 'Predicted', 'Probability'])
df = df.set_index('ImgId')
df.to_csv (saveDir + r'predictions/valid.csv', index = True, header=True)
###Output
_____no_output_____
###Markdown
Testset
###Code
trueClassNum=[]
for x in y_test:
ind = np.array(x).argmax()
y = ind
trueClassNum.append(y)
trueClassName = []
for f in trueClassNum:
trueClassName.append(inverse_target_map[f][:-1])
predMultilabelAll=[]
predProbabilityAll = []
counter = 0
for x in predTest:
maxProb = x.max()
predProbabilityAll.append(maxProb)
ind = x.argmax()
y = [0]*len(x)
y[ind]=1
predMultilabelAll.append(y)
counter +=1
# Convert to int
predClassNum=[]
for x in predMultilabelAll:
ind = np.array(x).argmax()
y = ind
predClassNum.append(y)
# Convert to name
predClassName = []
for f in predClassNum:
predClassName.append(inverse_target_map[f][:-1])
cl = classification_report(trueClassName, predClassName,output_dict=True)
df = DataFrame(cl).transpose()
df.to_csv (saveDir + r'classification_reports/test.csv', index = True, header=True)
df
plt.figure(figsize=(15,15))
cm = confusion_matrix(trueClassName, predClassName)
df = DataFrame(cm)
df.to_csv (saveDir + r'confusion_matrix/test_total.csv', index = True, header=True)
hm = heatmap(
cm, xlabel='Predicted label',
ylabel='True label', xticklabels=np.unique(trueClassName),
yticklabels=np.unique(trueClassName), cmap=plt.cm.gray_r, fmt="%d")
plt.title("Total values \n")
plt.colorbar(hm)
plt.gca().invert_yaxis()
plt.savefig(saveDir + 'confusion_matrix/test_total.svg', transparent = True, bbox_inches='tight')
plt.figure(figsize=(15,15))
cm = confusion_matrix(trueClassName, predClassName)
cm = cm.astype('float') / cm.sum(axis=1)[:, np.newaxis]
df = DataFrame(cm)
df.to_csv (saveDir + r'confusion_matrix/test_normalised.csv', index = True, header=True)
plt.figure(figsize=(20,20))
cm = heatmap(
cm, xlabel='Predicted label',
ylabel='True label', xticklabels=np.unique(trueClassName),
yticklabels=np.unique(trueClassName), cmap=plt.cm.gray_r)
plt.title("Normalised values\n")
plt.colorbar(cm)
plt.gca().invert_yaxis()
plt.savefig(saveDir + 'confusion_matrix/test_normalised.svg', transparent = True, bbox_inches='tight')
# Save pred and prob to tsv
df = DataFrame(list(zip(validImgId,trueClassName, predClassName,predProbabilityAll )),
columns =['ImgId', 'True', 'Predicted', 'Probability'])
df = df.set_index('ImgId')
df.to_csv (saveDir + r'predictions/test.csv', index = True, header=True)
modelName = "Modelname: " + base_model.name
trained = "Pre-Trained: " + str(preTrained)
overallRuntime = "Overall runtime: " + str(sum.iloc[0]) + "s"
runtimePerEpoch = "Avg. runtime per Epoch: " + str(avg.iloc[0]) + "s"
dsImg = "Dataset image: " + imgDir
dsTrain = "Dataset train: " + trainTsv
dsValid = "Dataset validation: " + validTsv
dsTest = "Dataset test: " + testTsv
testAcc = "Accuracy testset: " + str(acc)
testLoss = "Loss testset: " + str(loss)
numEpochs = "Num. Epochs: " + str(len(history.epoch))
earlyStop = "Early stop (0 if it didn't stop early): " + str(earlyStopping_callback.stopped_epoch)
with open(saveDir + 'README.txt','w') as out:
out.write('{}\n{}\n\n{}\n{}\n\n{}\n{}\n{}\n\n{}\n{}\n{}\n{}\n{}\n'.format(modelName,
trained,
testAcc,
testLoss,
numEpochs,
overallRuntime,
runtimePerEpoch,
dsImg,
dsTrain,
dsValid,
dsTest,
earlyStop
))
earlyStopping_callback.stopped_epoch
###Output
_____no_output_____ |
docs/source/examples/Output Widget.ipynb | ###Markdown
Output widgets: leveraging Jupyter's display system
###Code
import ipywidgets as widgets
###Output
_____no_output_____
###Markdown
The `Output` widget can capture and display stdout, stderr and [rich output generated by IPython](http://ipython.readthedocs.io/en/stable/api/generated/IPython.display.html#module-IPython.display). You can also append output directly to an output widget, or clear it programmatically.
###Code
out = widgets.Output(layout={'border': '1px solid black'})
out
###Output
_____no_output_____
###Markdown
After the widget is created, direct output to it using a context manager. You can print text to the output area:
###Code
with out:
for i in range(10):
print(i, 'Hello world!')
###Output
_____no_output_____
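###Markdown
Text written to the standard error stream is captured by the same context manager, for example:
###Code
import sys

# stderr text is captured as well, and is typically rendered on a highlighted background
with out:
    print('This goes to stderr', file=sys.stderr)
###Output
_____no_output_____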
###Markdown
Rich output can also be directed to the output area. Anything which displays nicely in a Jupyter notebook will also display well in the `Output` widget.
###Code
from IPython.display import YouTubeVideo
with out:
display(YouTubeVideo('eWzY2nGfkXk'))
###Output
_____no_output_____
###Markdown
We can even display complex mimetypes, such as nested widgets, in an output widget.
###Code
with out:
display(widgets.IntSlider())
###Output
_____no_output_____
###Markdown
We can also append outputs to the output widget directly with the convenience methods `append_stdout`, `append_stderr`, or `append_display_data`.
###Code
out = widgets.Output(layout={'border': '1px solid black'})
out.append_stdout('Output appended with append_stdout')
out.append_display_data(YouTubeVideo('eWzY2nGfkXk'))
out
###Output
_____no_output_____
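###Markdown
Error text can be appended in the same way with `append_stderr`:
###Code
# append_stderr adds the text to the widget as a stderr stream
out.append_stderr('Appended with append_stderr\n')
out
###Output
_____no_output_____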
###Markdown
Note that `append_display_data` cannot currently be used to display widgets. The status of this bug is tracked in [this issue](https://github.com/jupyter-widgets/ipywidgets/issues/1811). We can clear the output by either using `IPython.display.clear_output` within the context manager, or we can call the widget's `clear_output` method directly.
###Code
out.clear_output()
###Output
_____no_output_____
###Markdown
`clear_output` supports the keyword argument `wait`. With this set to `True`, the widget contents are not cleared immediately. Instead, they are cleared the next time the widget receives something to display. This can be useful when replacing content in the output widget: it allows for smoother transitions by avoiding a jarring resize of the widget following the call to `clear_output`.

Finally, we can use an output widget to capture all the output produced by a function using the `capture` decorator.
###Code
@out.capture()
def function_with_captured_output():
print('This goes into the output widget')
raise Exception('As does this')
function_with_captured_output()
###Output
_____no_output_____
###Markdown
`out.capture` supports the keyword argument `clear_output`. Setting this to `True` will clear the output widget every time the function is invoked, so that you only see the output of the last invocation. With `clear_output` set to `True`, you can also pass a `wait=True` argument to only clear the output once new output is available. Of course, you can also manually clear the output any time as well.
###Code
out.clear_output()
###Output
_____no_output_____
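###Markdown
Combining these keyword arguments gives a callback whose output replaces the previous output only once the new output is ready, so the widget does not flash empty between calls. A minimal sketch (the widget and function names below are illustrative):
###Code
progress_out = widgets.Output(layout={'border': '1px solid black'})

@progress_out.capture(clear_output=True, wait=True)
def report(step):
    # each call replaces the widget contents; wait=True delays the clear
    # until this new print arrives
    print('processing step {}'.format(step))

for step in range(5):
    report(step)

progress_out
###Output
_____no_output_____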
###Markdown
Output widgets as the foundation for interact

The output widget forms the basis of how interact and related methods are implemented. It can also be used by itself to create rich layouts with widgets and code output. One simple way to customize how an interact UI looks is to use the `interactive_output` function to hook controls up to a function whose output is captured in the returned output widget. In the next example, we stack the controls vertically and then put the output of the function to the right.
###Code
a = widgets.IntSlider(description='a')
b = widgets.IntSlider(description='b')
c = widgets.IntSlider(description='c')
def f(a, b, c):
print('{}*{}*{}={}'.format(a, b, c, a*b*c))
out = widgets.interactive_output(f, {'a': a, 'b': b, 'c': c})
widgets.HBox([widgets.VBox([a, b, c]), out])
###Output
_____no_output_____
###Markdown
Debugging errors in callbacks with the output widget

On some platforms, like JupyterLab, output generated by widget callbacks (for instance, functions attached to the `.observe` method on widget traits, or to the `.on_click` method on button widgets) is not displayed anywhere. Even on other platforms, it is unclear what cell this output should appear in. This can make debugging errors in callback functions more challenging. An effective tool for accessing the output of widget callbacks is to decorate the callback with an output widget's capture method. You can then display the widget in a new cell to see the callback output.
###Code
debug_view = widgets.Output(layout={'border': '1px solid black'})
@debug_view.capture(clear_output=True)
def bad_callback(event):
print('This is about to explode')
return 1.0 / 0.0
button = widgets.Button(
description='click me to raise an exception',
layout={'width': '300px'}
)
button.on_click(bad_callback)
button
debug_view
###Output
_____no_output_____
###Markdown
Integrating output widgets with the logging module

While using the `.capture` decorator works well for understanding and debugging single callbacks, it does not scale to larger applications. Typically, in larger applications, one might use the [logging](https://docs.python.org/3/library/logging.html) module to print information on the status of the program. However, in the case of widget applications, it is unclear where the logging output should go.

A useful pattern is to create a custom [handler](https://docs.python.org/3/library/logging.html#handler-objects) that redirects logs to an output widget. The output widget can then be displayed in a new cell to monitor the application while it runs.
###Code
import ipywidgets as widgets
import logging
class OutputWidgetHandler(logging.Handler):
""" Custom logging handler sending logs to an output widget """
def __init__(self, *args, **kwargs):
super(OutputWidgetHandler, self).__init__(*args, **kwargs)
layout = {
'width': '100%',
'height': '160px',
'border': '1px solid black'
}
self.out = widgets.Output(layout=layout)
def emit(self, record):
""" Overload of logging.Handler method """
formatted_record = self.format(record)
new_output = {
'name': 'stdout',
'output_type': 'stream',
'text': formatted_record+'\n'
}
self.out.outputs = (new_output, ) + self.out.outputs
def show_logs(self):
""" Show the logs """
display(self.out)
def clear_logs(self):
""" Clear the current logs """
self.out.clear_output()
logger = logging.getLogger(__name__)
handler = OutputWidgetHandler()
handler.setFormatter(logging.Formatter('%(asctime)s - [%(levelname)s] %(message)s'))
logger.addHandler(handler)
logger.setLevel(logging.INFO)
handler.show_logs()
handler.clear_logs()
logger.info('Starting program')
try:
logger.info('About to try something dangerous...')
1.0/0.0
except Exception as e:
logger.exception('An error occurred!')
###Output
_____no_output_____
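###Markdown
Because this is an ordinary `logging` handler, it can also be attached to loggers owned by other modules, so their records end up in the same output widget. The logger name below is only an example:
###Code
lib_logger = logging.getLogger('some.library')
lib_logger.addHandler(handler)
lib_logger.setLevel(logging.INFO)
lib_logger.info('Records from other loggers land in the widget too')
###Output
_____no_output_____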
###Markdown
Interacting with output widgets from background threads: Jupyter's `display` mechanism can be counter-intuitive when displaying output produced by background threads. A background thread's output is printed to whatever cell the main thread is currently writing to. To see this directly, create a thread that repeatedly prints to standard out:

```python
import itertools
import threading
import time

def run():
    for i in itertools.count(0):
        time.sleep(1)
        print('output from background {}'.format(i))

t = threading.Thread(target=run)
t.start()
```

This always prints in the currently active cell, not the cell that started the background thread. This can lead to surprising behaviour in output widgets. During the time in which output is captured by the output widget, *any* output generated in the notebook, regardless of thread, will go into the output widget. The best way to avoid surprises is to *never* use an output widget's context manager in a context where multiple threads generate output. Instead, we can pass an output widget to the function executing in a thread, and use `append_display_data()`, `append_stdout()`, or `append_stderr()` methods to append displayable output to the output widget.
###Code
import threading
from IPython.display import display, HTML
import ipywidgets as widgets
import time
def thread_func(something, out):
for i in range(1, 5):
time.sleep(0.3)
out.append_stdout('{} {} {}\n'.format(i, '**'*i, something))
out.append_display_data(HTML("<em>All done!</em>"))
display('Display in main thread')
out = widgets.Output()
# Now the key: the container is displayed (while empty) in the main thread
display(out)
thread = threading.Thread(
target=thread_func,
args=("some text", out))
thread.start()
thread.join()
###Output
_____no_output_____
###Markdown
Show a single output with the SingleOutput widget: Sometimes you want to display only one element in the output area, or to erase the previous element whenever it is replaced. To make this easy, the `SingleOutput` class displays a single element; when a new or modified element is shown, the widget erases the previously displayed value.
###Code
import ipywidgets as widgets
w = widgets.SingleOutput(value=7)
w
###Output
_____no_output_____
###Markdown
Now, if you change the widget's `value`, the display above is updated. For example:
###Code
w.value = 'test'
###Output
_____no_output_____
###Markdown
You can also change the type of `value` whenever you modify it, and thanks to the `Output` widget machinery many kinds of content can be displayed. To prove it, try the cell below!
###Code
from IPython.display import YouTubeVideo
w.value = YouTubeVideo('eWzY2nGfkXk')
###Output
_____no_output_____
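###Markdown
`SingleOutput` may not be available in every ipywidgets release. As a rough, hypothetical stand-in built only on the plain `Output` widget (the `SimpleSingleOutput` name is made up for this sketch), the same show-one-thing behaviour can be approximated with `clear_output(wait=True)`:
###Code
from IPython.display import display
class SimpleSingleOutput(widgets.Output):
    """Hypothetical stand-in: an Output that only ever shows its latest value."""
    def __init__(self, value=None, **kwargs):
        super().__init__(**kwargs)
        if value is not None:
            self.show(value)
    def show(self, value):
        with self:
            # Replace the previous value instead of stacking outputs
            self.clear_output(wait=True)
            display(value)
single = SimpleSingleOutput(value=7)
display(single)
single.show('test')  # replaces the 7 with 'test'
###Output
_____no_output_____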
###Markdown
Using the output widget with matplotlib: You can also use the output widget itself to build your own interactions with graphical output such as matplotlib plots. When using matplotlib, it is important that your update function clears the output widget's state with the `wait=True` flag, to prevent the display from flickering while you interact. A simple example:
###Code
import numpy as np
import ipywidgets as widgets
import matplotlib.pyplot as plt
out = widgets.Output(layout=widgets.Layout(height='300px'))
x = np.linspace(0, 1, 100)
def update_plot(change=None):
    with out:
        # Clear with wait=True so the previous plot is only removed once
        # the new one is ready; this is what prevents flickering.
        out.clear_output(wait=True)
        plt.plot(x, x**p_widget.value)
        plt.show()
p_widget = widgets.FloatSlider(min=0, max=2, step=0.1, value=1)
update_plot()
# Redraw only when the slider's value actually changes
p_widget.observe(update_plot, names='value')
display(p_widget, out)
###Output
_____no_output_____
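###Markdown
If figures accumulate in memory or the display still flickers in your environment, a variation on the cell above builds an explicit figure and closes it after rendering. This is only a sketch under the same assumptions (it reuses `out`, `x` and `p_widget` from the previous cell), and you would register it with `p_widget.observe` in place of `update_plot`:
###Code
def update_plot_fig(change=None):
    with out:
        out.clear_output(wait=True)
        fig, ax = plt.subplots(figsize=(5, 3))
        ax.plot(x, x**p_widget.value)
        plt.show()
        plt.close(fig)  # free the figure so pyplot does not keep it alive
###Output
_____no_output_____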
###Markdown
[Index](Index.ipynb) - [Back](Widget List.ipynb) - [Next](Widget Events.ipynb) Output widgets: leveraging Jupyter's display system
###Code
import ipywidgets as widgets
###Output
_____no_output_____
###Markdown
The `Output` widget can capture and display stdout, stderr and [rich output generated by IPython](http://ipython.readthedocs.io/en/stable/api/generated/IPython.display.html#module-IPython.display). You can also append output directly to an output widget, or clear it programmatically.
###Code
out = widgets.Output(layout={'border': '1px solid black'})
out
###Output
_____no_output_____
###Markdown
After the widget is created, direct output to it using a context manager. You can print text to the output area:
###Code
with out:
for i in range(10):
print(i, 'Hello world!')
###Output
_____no_output_____
###Markdown
Rich output can also be directed to the output area. Anything which displays nicely in a Jupyter notebook will also display well in the `Output` widget.
###Code
from IPython.display import YouTubeVideo
with out:
display(YouTubeVideo('eWzY2nGfkXk'))
###Output
_____no_output_____
###Markdown
We can even display complex mimetypes, such as nested widgets, in an output widget.
###Code
with out:
display(widgets.IntSlider())
###Output
_____no_output_____
###Markdown
We can also append outputs to the output widget directly with the convenience methods `append_stdout`, `append_stderr`, or `append_display_data`.
###Code
out = widgets.Output(layout={'border': '1px solid black'})
out.append_stdout('Output appended with append_stdout')
out.append_display_data(YouTubeVideo('eWzY2nGfkXk'))
out
###Output
_____no_output_____
###Markdown
We can clear the output by either using `IPython.display.clear_output` within the context manager, or we can call the widget's `clear_output` method directly.
###Code
out.clear_output()
###Output
_____no_output_____
###Markdown
`clear_output` supports the keyword argument `wait`. With this set to `True`, the widget contents are not cleared immediately. Instead, they are cleared the next time the widget receives something to display. This can be useful when replacing content in the output widget: it allows for smoother transitions by avoiding a jarring resize of the widget following the call to `clear_output`.Finally, we can use an output widget to capture all the output produced by a function using the `capture` decorator.
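Before turning to `capture`, here is a minimal sketch of the `wait=True` behaviour just described; it is not part of the original notebook and simply reuses the `out` widget from above:

```python
import time
from IPython.display import clear_output

with out:
    for i in range(5):
        clear_output(wait=True)   # old content is replaced only when the next print arrives
        print('refresh', i)
        time.sleep(0.5)
```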
###Code
@out.capture()
def function_with_captured_output():
print('This goes into the output widget')
raise Exception('As does this')
function_with_captured_output()
###Output
_____no_output_____
###Markdown
`out.capture` supports the keyword argument `clear_output`. Setting this to `True` will clear the output widget every time the function is invoked, so that you only see the output of the last invocation. With `clear_output` set to `True`, you can also pass a `wait=True` argument to only clear the output once new output is available. Of course, you can also manually clear the output any time as well.
###Code
out.clear_output()
###Output
_____no_output_____
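As a small sketch that is not in the original notebook, the two keyword arguments described above can be combined on the decorator:

```python
@out.capture(clear_output=True, wait=True)
def show_latest(msg):
    # only the output of the most recent call is kept, and the widget is
    # cleared just as the new output arrives
    print(msg)

show_latest('first call')
show_latest('second call replaces the first')
```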
###Markdown
Output widgets as the foundation for interactThe output widget forms the basis of how interact and related methods are implemented. It can also be used by itself to create rich layouts with widgets and code output. One simple way to customize how an interact UI looks is to use the `interactive_output` function to hook controls up to a function whose output is captured in the returned output widget. In the next example, we stack the controls vertically and then put the output of the function to the right.
###Code
a = widgets.IntSlider(description='a')
b = widgets.IntSlider(description='b')
c = widgets.IntSlider(description='c')
def f(a, b, c):
print('{}*{}*{}={}'.format(a, b, c, a*b*c))
out = widgets.interactive_output(f, {'a': a, 'b': b, 'c': c})
widgets.HBox([widgets.VBox([a, b, c]), out])
###Output
_____no_output_____
###Markdown
Debugging errors in callbacks with the output widgetOn some platforms, like JupyterLab, output generated by widget callbacks (for instance, functions attached to the `.observe` method on widget traits, or to the `.on_click` method on button widgets) are not displayed anywhere. Even on other platforms, it is unclear what cell this output should appear in. This can make debugging errors in callback functions more challenging. An effective tool for accessing the output of widget callbacks is to decorate the callback with an output widget's capture method. You can then display the widget in a new cell to see the callback output.
###Code
debug_view = widgets.Output(layout={'border': '1px solid black'})
@debug_view.capture(clear_output=True)
def bad_callback(event):
print('This is about to explode')
return 1.0 / 0.0
button = widgets.Button(
description='click me to raise an exception',
layout={'width': '300px'}
)
button.on_click(bad_callback)
button
debug_view
###Output
_____no_output_____
###Markdown
Integrating output widgets with the logging moduleWhile using the `.capture` decorator works well for understanding and debugging single callbacks, it does not scale to larger applications. Typically, in larger applications, one might use the [logging](https://docs.python.org/3/library/logging.html) module to print information on the status of the program. However, in the case of widget applications, it is unclear where the logging output should go.A useful pattern is to create a custom [handler](https://docs.python.org/3/library/logging.html#handler-objects) that redirects logs to an output widget. The output widget can then be displayed in a new cell to monitor the application while it runs.
###Code
import ipywidgets as widgets
import logging
class OutputWidgetHandler(logging.Handler):
""" Custom logging handler sending logs to an output widget """
def __init__(self, *args, **kwargs):
super(OutputWidgetHandler, self).__init__(*args, **kwargs)
layout = {
'width': '100%',
'height': '160px',
'border': '1px solid black'
}
self.out = widgets.Output(layout=layout)
def emit(self, record):
""" Overload of logging.Handler method """
formatted_record = self.format(record)
new_output = {
'name': 'stdout',
'output_type': 'stream',
'text': formatted_record+'\n'
}
self.out.outputs = (new_output, ) + self.out.outputs
def show_logs(self):
""" Show the logs """
display(self.out)
def clear_logs(self):
""" Clear the current logs """
self.out.clear_output()
logger = logging.getLogger(__name__)
handler = OutputWidgetHandler()
handler.setFormatter(logging.Formatter('%(asctime)s - [%(levelname)s] %(message)s'))
logger.addHandler(handler)
logger.setLevel(logging.INFO)
handler.show_logs()
handler.clear_logs()
logger.info('Starting program')
try:
logger.info('About to try something dangerous...')
1.0/0.0
except Exception as e:
logger.exception('An error occurred!')
###Output
_____no_output_____
###Markdown
Interacting with output widgets from background threadsJupyter's `display` mechanism can be counter-intuitive when displaying output produced by background threads. A background thread's output is printed to whatever cell the main thread is currently writing to. To see this directly, create a thread that repeatedly prints to standard out:

```python
import itertools
import threading
import time

def run():
    for i in itertools.count(0):
        time.sleep(1)
        print('output from background {}'.format(i))

t = threading.Thread(target=run)
t.start()
```

This always prints in the currently active cell, not the cell that started the background thread.This can lead to surprising behavior in output widgets. During the time in which output is captured by the output widget, *any* output generated in the notebook, regardless of thread, will go into the output widget.The best way to avoid surprises is to *never* use an output widget's context manager in a context where multiple threads generate output. Instead, we can pass an output widget to the function executing in a thread, and use `append_display_data()`, `append_stdout()`, or `append_stderr()` methods to append displayable output to the output widget.
###Code
import threading
from IPython.display import display, HTML
import ipywidgets as widgets
import time
def thread_func(something, out):
for i in range(1, 5):
time.sleep(0.3)
out.append_stdout('{} {} {}\n'.format(i, '**'*i, something))
out.append_display_data(HTML("<em>All done!</em>"))
display('Display in main thread')
out = widgets.Output()
# Now the key: the container is displayed (while empty) in the main thread
display(out)
thread = threading.Thread(
target=thread_func,
args=("some text", out))
thread.start()
thread.join()
###Output
_____no_output_____ |
Chapter05/02_community_detection_algorithms.ipynb | ###Markdown
Network Communities Detection In this notebook, we will explore some methods to perform community detection using several algorithms. Before testing the algorithms, let us create a simple benchmark graph.
###Code
%matplotlib inline
from matplotlib import pyplot as plt
import numpy as np
import pandas as pd
import networkx as nx
G = nx.barbell_graph(m1=10, m2=4)
###Output
_____no_output_____
###Markdown
Matrix Factorization We start by using a matrix factorization technique to extract the embeddings, which are visualized and then clustered using traditional clustering algorithms.
###Code
from gem.embedding.hope import HOPE
gf = HOPE(d=4, beta=0.01)
gf.learn_embedding(G)
embeddings = gf.get_embedding()
from sklearn.manifold import TSNE
tsne = TSNE(n_components=2)
emb2d = tsne.fit_transform(embeddings)
plt.plot(emb2d[:, 0], emb2d[:, 1], 'o', linewidth=0)  # plot the 2D t-SNE projection computed above
###Output
_____no_output_____
###Markdown
We start by using a GaussianMixture model to perform the clustering
###Code
from sklearn.mixture import GaussianMixture
gm = GaussianMixture(n_components=3, random_state=0)
labels = gm.fit_predict(embeddings)
colors = ["blue", "green", "red"]
nx.draw_spring(G, node_color=[colors[label] for label in labels])
###Output
_____no_output_____
###Markdown
Spectral Clustering We now perform spectral clustering based on the adjacency matrix of the graph. It is worth noting that this is not a mutually exclusive clustering: nodes may belong to more than one community.
###Code
adj=np.array(nx.adjacency_matrix(G).todense())
from communities.algorithms import spectral_clustering
communities = spectral_clustering(adj, k=3)
###Output
_____no_output_____
###Markdown
In the next plot we highlight the nodes that belong to a community using the red color. The blue nodes do not belong to the given community
###Code
plt.figure(figsize=(20, 5))
for ith, community in enumerate(communities):
cols = ["red" if node in community else "blue" for node in G.nodes]
plt.subplot(1,3,ith+1)
plt.title(f"Community {ith}")
nx.draw_spring(G, node_color=cols)
###Output
_____no_output_____
###Markdown
The next command shows the node ids belonging to the different communities
###Code
communities
###Output
_____no_output_____
###Markdown
Non Negative Matrix Factorization Here, we again use matrix factorization, but now with Non-Negative Matrix Factorization (NMF), associating each cluster with a latent dimension.
###Code
from sklearn.decomposition import NMF
nmf = NMF(n_components=2)
emb = nmf.fit_transform(adj)
plt.plot(emb[:, 0], emb[:, 1], 'o', linewidth=0)
###Output
_____no_output_____
###Markdown
By setting a threshold value of 0.01, we determine which nodes belong to the given community.
###Code
communities = [set(np.where(emb[:,ith]>0.01)[0]) for ith in range(2)]
plt.figure(figsize=(20, 5))
for ith, community in enumerate(communities):
cols = ["red" if node in community else "blue" for node in G.nodes]
plt.subplot(1,3,ith+1)
plt.title(f"Community {ith}")
nx.draw_spring(G, node_color=cols)
###Output
_____no_output_____
###Markdown
Although the example above does not show this, in general this clustering method may also be non-mutually exclusive, and nodes may belong to more than one community Louvain and Modularity Optimization Here, we use the Louvain method, which is one of the most popular methods for performing community detection, even on fairly large graphs. As described in the chapter, the Louvain method optimizes the partitioning (it is a mutually exclusive community detection algorithm), identifying the partition that maximizes the modularity score, meaning that nodes belonging to the same community are very well connected among themselves, and weakly connected to the other communities. **Louvain, unlike many other community detection algorithms, does not require you to specify the number of communities in advance; it finds the optimal number of communities itself.**
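As a minimal sketch that is not part of the original notebook, the modularity score that Louvain maximizes can be computed directly with networkx (assuming networkx >= 2.x and reusing the barbell graph `G` built above); a partition that cuts near the bridge scores higher than one that mixes the two bells:

```python
from networkx.algorithms.community import modularity

good_partition = [set(range(0, 12)), set(range(12, 24))]    # cut close to the barbell bridge
poor_partition = [set(range(0, 6)) | set(range(12, 18)),
                  set(range(6, 12)) | set(range(18, 24))]   # mixes nodes from the two bells

print(modularity(G, good_partition))   # relatively high modularity
print(modularity(G, poor_partition))   # noticeably lower modularity
```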
###Code
from communities.algorithms import louvain_method
communities = louvain_method(adj)
c = pd.Series({node: colors[ith] for ith, nodes in enumerate(communities) for node in nodes}).values
nx.draw_spring(G, node_color=c)
communities
###Output
_____no_output_____
###Markdown
Girvan Newman The Girvan–Newman algorithm detects communities by progressively removing edges from the original graph. The algorithm removes the “most valuable” edge, traditionally the edge with the highest betweenness centrality, at each step. As the graph breaks down into pieces, the tightly knit community structure is exposed and the result can be depicted as a dendrogram.**BE AWARE that because of the betweenness centrality computation, this method may not scale well on large graphs**
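As a side note that is not in the original notebook, networkx ships its own Girvan–Newman implementation as a generator, whose successive yields correspond to the levels of the dendrogram described above (assuming networkx >= 2.x and the graph `G` from earlier):

```python
import itertools
from networkx.algorithms.community import girvan_newman as nx_girvan_newman

levels = nx_girvan_newman(G)                    # generator of successively finer partitions
for partition in itertools.islice(levels, 3):   # first three levels of the hierarchy
    print([sorted(block) for block in partition])
```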
###Code
from communities.algorithms import girvan_newman
communities = girvan_newman(adj, n=2)
c = pd.Series({node: colors[ith] for ith, nodes in enumerate(communities) for node in nodes}).values
nx.draw_spring(G, node_color=c)
communities
###Output
_____no_output_____ |
Trainer-Collaboratories/Fine_Tuning/DensNet201/Fine_tuning_DensNet201(GAP_512_0,25).ipynb | ###Markdown
**Mount Google Drive**
###Code
from google.colab import drive
drive.mount('/content/drive')
###Output
Drive already mounted at /content/drive; to attempt to forcibly remount, call drive.mount("/content/drive", force_remount=True).
###Markdown
**Import Library**
###Code
import glob
import numpy as np
import os
import shutil
np.random.seed(42)
from sklearn.preprocessing import LabelEncoder
import cv2
import tensorflow as tf
import keras
import shutil
import random
import warnings
import numpy as np
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
from sklearn.utils import class_weight
from sklearn.model_selection import train_test_split
from sklearn.metrics import confusion_matrix, cohen_kappa_score
###Output
Using TensorFlow backend.
/usr/local/lib/python3.6/dist-packages/statsmodels/tools/_testing.py:19: FutureWarning: pandas.util.testing is deprecated. Use the functions in the public API at pandas.testing instead.
import pandas.util.testing as tm
###Markdown
**Load Data**
###Code
os.chdir('/content/drive/My Drive/Colab Notebooks/DATA RD/')
Train = glob.glob('/content/drive/My Drive/Colab Notebooks/DATA RD/DATASETS/Data Split/Train/*')
Val=glob.glob('/content/drive/My Drive/Colab Notebooks/DATA RD/DATASETS/Data Split/Validation/*')
Test=glob.glob('/content/drive/My Drive/Colab Notebooks/DATA RD/DATASETS/Data Split/Test/*')
import matplotlib.image as mpimg
for ima in Train[600:601]:
img=mpimg.imread(ima)
imgplot = plt.imshow(img)
plt.show()
###Output
_____no_output_____
###Markdown
**Data Preparation**
###Code
nrows = 224
ncolumns = 224
channels = 3
def read_and_process_image(list_of_images):
X = [] # images
y = [] # labels
for image in list_of_images:
X.append(cv2.resize(cv2.imread(image, cv2.IMREAD_COLOR), (nrows,ncolumns), interpolation=cv2.INTER_CUBIC)) #Read the image
#get the labels
if 'Normal' in image:
y.append(0)
elif 'Mild' in image:
y.append(1)
elif 'Moderate' in image:
y.append(2)
elif 'Severe' in image:
y.append(3)
return X, y
X_train, y_train = read_and_process_image(Train)
X_val, y_val = read_and_process_image(Val)
X_test, y_test = read_and_process_image(Test)
import seaborn as sns
import gc
gc.collect()
#Convert list to numpy array
X_train = np.array(X_train)
y_train= np.array(y_train)
X_val = np.array(X_val)
y_val= np.array(y_val)
X_test = np.array(X_test)
y_test= np.array(y_test)
print('Train:',X_train.shape,y_train.shape)
print('Val:',X_val.shape,y_val.shape)
print('Test',X_test.shape,y_test.shape)
sns.countplot(y_train)
plt.title('Total Data Training')
sns.countplot(y_val)
plt.title('Total Data Validasi')
sns.countplot(y_test)
plt.title('Total Data Test')
y_train_ohe = pd.get_dummies(y_train)
y_val_ohe=pd.get_dummies(y_val)
y_test_ohe=pd.get_dummies(y_test)
y_train_ohe.shape,y_val_ohe.shape,y_test_ohe.shape
###Output
_____no_output_____
###Markdown
**Model Parameters**
###Code
batch_size = 16
EPOCHS = 100
WARMUP_EPOCHS = 2
LEARNING_RATE = 0.001
WARMUP_LEARNING_RATE = 1e-3
HEIGHT = 224
WIDTH = 224
CANAL = 3
N_CLASSES = 4
ES_PATIENCE = 5
RLROP_PATIENCE = 3
DECAY_DROP = 0.5
###Output
_____no_output_____
###Markdown
**Data Generator**
###Code
train_datagen =tf.keras.preprocessing.image.ImageDataGenerator(
rotation_range=360,
horizontal_flip=True,
vertical_flip=True)
test_datagen=tf.keras.preprocessing.image.ImageDataGenerator()
train_generator = train_datagen.flow(X_train, y_train_ohe, batch_size=batch_size)
val_generator = test_datagen.flow(X_val, y_val_ohe, batch_size=batch_size)
test_generator = test_datagen.flow(X_test, y_test_ohe, batch_size=batch_size)
###Output
_____no_output_____
###Markdown
**Define Model**
###Code
IMG_SHAPE = (224, 224, 3)
base_model =tf.keras.applications.DenseNet201(weights='imagenet',
include_top=False,
input_shape=IMG_SHAPE)
x =tf.keras.layers.GlobalAveragePooling2D()(base_model.output)
x =tf.keras.layers.Dropout(0.25)(x)
x =tf.keras.layers.Dense(512, activation='relu')(x)
x =tf.keras.layers.Dropout(0.25)(x)
final_output =tf.keras.layers.Dense(N_CLASSES, activation='softmax', name='final_output')(x)
model =tf.keras.models.Model(inputs=base_model.inputs,outputs=final_output)
###Output
_____no_output_____
###Markdown
**Train Top Layers**
###Code
for layer in model.layers:
layer.trainable = False
for i in range(-5, 0):
model.layers[i].trainable = True
metric_list = ["accuracy"]
optimizer =tf.keras.optimizers.Adam(lr=WARMUP_LEARNING_RATE)
model.compile(optimizer=optimizer, loss="categorical_crossentropy", metrics=metric_list)
model.summary()
import time
start = time.time()
STEP_SIZE_TRAIN = train_generator.n//train_generator.batch_size
STEP_SIZE_VALID = val_generator.n//val_generator.batch_size
history_warmup = model.fit_generator(generator=train_generator,
steps_per_epoch=STEP_SIZE_TRAIN,
validation_data=val_generator,
validation_steps=STEP_SIZE_VALID,
epochs=WARMUP_EPOCHS,
verbose=1).history
end = time.time()
print('Waktu Training:', end - start)
###Output
WARNING:tensorflow:From <ipython-input-17-42947d619a66>:13: Model.fit_generator (from tensorflow.python.keras.engine.training) is deprecated and will be removed in a future version.
Instructions for updating:
Please use Model.fit, which supports generators.
Epoch 1/2
375/375 [==============================] - 66s 177ms/step - loss: 1.2371 - accuracy: 0.5538 - val_loss: 1.3810 - val_accuracy: 0.4798
Epoch 2/2
375/375 [==============================] - 63s 169ms/step - loss: 0.8866 - accuracy: 0.6118 - val_loss: 1.2830 - val_accuracy: 0.5813
Waktu Training: 140.9225776195526
###Markdown
**Train Fine Tuning**
###Code
for layer in model.layers:
layer.trainable = True
es =tf.keras.callbacks.EarlyStopping(monitor='val_loss', mode='min', patience=ES_PATIENCE, restore_best_weights=True, verbose=1)
rlrop =tf.keras.callbacks.ReduceLROnPlateau(monitor='val_loss', mode='min', patience=RLROP_PATIENCE, factor=DECAY_DROP, min_lr=1e-6, verbose=1)
callback_list = [es]
optimizer =tf.keras.optimizers.Adam(lr=LEARNING_RATE)
model.compile(optimizer=optimizer, loss="categorical_crossentropy", metrics=metric_list)
model.summary()
history_finetunning = model.fit_generator(generator=train_generator,
steps_per_epoch=STEP_SIZE_TRAIN,
validation_data=val_generator,
validation_steps=STEP_SIZE_VALID,
epochs=EPOCHS,
callbacks=callback_list,
verbose=1).history
###Output
Epoch 1/100
375/375 [==============================] - 77s 205ms/step - loss: 0.8242 - accuracy: 0.6555 - val_loss: 0.9926 - val_accuracy: 0.6223
Epoch 2/100
375/375 [==============================] - 74s 198ms/step - loss: 0.6742 - accuracy: 0.7260 - val_loss: 0.9592 - val_accuracy: 0.6720
Epoch 3/100
375/375 [==============================] - 74s 197ms/step - loss: 0.6031 - accuracy: 0.7547 - val_loss: 1.0853 - val_accuracy: 0.5134
Epoch 4/100
375/375 [==============================] - 74s 197ms/step - loss: 0.5632 - accuracy: 0.7753 - val_loss: 3.8420 - val_accuracy: 0.4187
Epoch 5/100
375/375 [==============================] - 74s 198ms/step - loss: 0.5639 - accuracy: 0.7715 - val_loss: 0.8801 - val_accuracy: 0.6344
Epoch 6/100
375/375 [==============================] - 74s 197ms/step - loss: 0.5317 - accuracy: 0.7875 - val_loss: 0.9637 - val_accuracy: 0.6270
Epoch 7/100
375/375 [==============================] - 74s 197ms/step - loss: 0.5236 - accuracy: 0.7875 - val_loss: 2.7267 - val_accuracy: 0.6270
Epoch 8/100
375/375 [==============================] - 74s 197ms/step - loss: 0.5059 - accuracy: 0.7987 - val_loss: 1.4677 - val_accuracy: 0.6956
Epoch 9/100
375/375 [==============================] - 74s 197ms/step - loss: 0.5076 - accuracy: 0.7967 - val_loss: 1.2695 - val_accuracy: 0.5141
Epoch 10/100
375/375 [==============================] - ETA: 0s - loss: 0.4855 - accuracy: 0.7983Restoring model weights from the end of the best epoch.
375/375 [==============================] - 74s 197ms/step - loss: 0.4855 - accuracy: 0.7983 - val_loss: 1.7154 - val_accuracy: 0.4839
Epoch 00010: early stopping
###Markdown
**Model Graph**
###Code
history = {'loss': history_warmup['loss'] + history_finetunning['loss'],
'val_loss': history_warmup['val_loss'] + history_finetunning['val_loss'],
'acc': history_warmup['accuracy'] + history_finetunning['accuracy'],
'val_acc': history_warmup['val_accuracy'] + history_finetunning['val_accuracy']}
sns.set_style("whitegrid")
fig, (ax1, ax2) = plt.subplots(2, 1, sharex='col', figsize=(20, 18))
ax1.plot(history['loss'], label='Train loss')
ax1.plot(history['val_loss'], label='Validation loss')
ax1.legend(loc='best')
ax1.set_title('Loss')
ax2.plot(history['acc'], label='Train accuracy')
ax2.plot(history['val_acc'], label='Validation accuracy')
ax2.legend(loc='best')
ax2.set_title('Accuracy')
plt.xlabel('Epochs')
sns.despine()
plt.show()
###Output
_____no_output_____
###Markdown
**Evaluate Model**
###Code
loss_Val, acc_Val = model.evaluate(X_val, y_val_ohe,batch_size=1, verbose=1)
print("Validation: accuracy = %f ; loss_v = %f" % (acc_Val, loss_Val))
lastFullTrainPred = np.empty((0, N_CLASSES))
lastFullTrainLabels = np.empty((0, N_CLASSES))
lastFullValPred = np.empty((0, N_CLASSES))
lastFullValLabels = np.empty((0, N_CLASSES))
for i in range(STEP_SIZE_TRAIN+1):
im, lbl = next(train_generator)
scores = model.predict(im, batch_size=train_generator.batch_size)
lastFullTrainPred = np.append(lastFullTrainPred, scores, axis=0)
lastFullTrainLabels = np.append(lastFullTrainLabels, lbl, axis=0)
for i in range(STEP_SIZE_VALID+1):
im, lbl = next(val_generator)
scores = model.predict(im, batch_size=val_generator.batch_size)
lastFullValPred = np.append(lastFullValPred, scores, axis=0)
lastFullValLabels = np.append(lastFullValLabels, lbl, axis=0)
lastFullComPred = np.concatenate((lastFullTrainPred, lastFullValPred))
lastFullComLabels = np.concatenate((lastFullTrainLabels, lastFullValLabels))
complete_labels = [np.argmax(label) for label in lastFullComLabels]
train_preds = [np.argmax(pred) for pred in lastFullTrainPred]
train_labels = [np.argmax(label) for label in lastFullTrainLabels]
validation_preds = [np.argmax(pred) for pred in lastFullValPred]
validation_labels = [np.argmax(label) for label in lastFullValLabels]
fig, (ax1, ax2) = plt.subplots(1, 2, sharex='col', figsize=(24, 7))
labels = ['0 - No DR', '1 - Mild', '2 - Moderate', '3 - Severe']
train_cnf_matrix = confusion_matrix(train_labels, train_preds)
validation_cnf_matrix = confusion_matrix(validation_labels, validation_preds)
train_cnf_matrix_norm = train_cnf_matrix.astype('float') / train_cnf_matrix.sum(axis=1)[:, np.newaxis]
validation_cnf_matrix_norm = validation_cnf_matrix.astype('float') / validation_cnf_matrix.sum(axis=1)[:, np.newaxis]
train_df_cm = pd.DataFrame(train_cnf_matrix_norm, index=labels, columns=labels)
validation_df_cm = pd.DataFrame(validation_cnf_matrix_norm, index=labels, columns=labels)
sns.heatmap(train_df_cm, annot=True, fmt='.2f', cmap="Blues",ax=ax1).set_title('Train')
sns.heatmap(validation_df_cm, annot=True, fmt='.2f', cmap="Blues",ax=ax2).set_title('Validation')
plt.show()
###Output
_____no_output_____ |
Spectral_Graph_Bipartitioning.ipynb | ###Markdown
###Code
from sklearn.cluster import SpectralClustering
from sklearn.datasets import make_blobs
import matplotlib.pyplot as plt
from numpy import random
random.seed(1)
x, _ = make_blobs(n_samples=400, centers=4, cluster_std=1.5)
plt.scatter(x[:,0], x[:,1])
plt.show()
# fit spectral clustering on the blob data and keep the fitted estimator in `sc`
sc = SpectralClustering(affinity='rbf', assign_labels='kmeans', coef0=1, degree=3, eigen_solver=None, eigen_tol=0.0, gamma=1.0, kernel_params=None, n_clusters=4, n_components=None, n_init=10, n_jobs=None, n_neighbors=10, random_state=None).fit(x)
labels = sc.labels_
plt.scatter(x[:,0], x[:,1], c=labels)
plt.show()
f = plt.figure()
f.add_subplot(2, 2, 1)
for i in range(2, 6):
sc = SpectralClustering(n_clusters=i).fit(x)
f.add_subplot(2, 2, i-1)
plt.scatter(x[:,0], x[:,1], s=5, c=sc.labels_, label="n_cluster-"+str(i))
plt.legend()
plt.show()
###Output
/usr/local/lib/python3.7/dist-packages/ipykernel_launcher.py:5: MatplotlibDeprecationWarning: Adding an axes using the same arguments as a previous axes currently reuses the earlier instance. In a future version, a new instance will always be created and returned. Meanwhile, this warning can be suppressed, and the future behavior ensured, by passing a unique label to each axes instance.
"""
|
_notebooks/2020-10-22-stochastic-gradient-descent.ipynb | ###Markdown
Stochastic gradient descent From the [Data Science from Scratch book](https://www.oreilly.com/library/view/data-science-from/9781492041122/). Libraries and helper functions
###Code
from typing import List
import random
Vector = List[float]
def add(vector1: Vector, vector2: Vector) -> Vector:
assert len(vector1) == len(vector2)
return [v1 + v2 for v1, v2 in zip(vector1, vector2)]
def scalar_multiply(c: float, vector: Vector) -> Vector:
return [c * v for v in vector]
def gradient_step(v: Vector, gradient: Vector, step_size: float) -> Vector:
"""Return vector adjusted with step. Step is gradient times step size.
"""
step = scalar_multiply(step_size, gradient)
return add(v, step)
def linear_gradient(x: float, y: float, theta: Vector) -> Vector:
    slope, intercept = theta
    predicted = slope * x + intercept    # prediction of the linear model
    error = predicted - y                # positive if the prediction is too high
    # gradient of the squared error (error ** 2) with respect to slope and intercept
    return [2 * error * x, 2 * error]
###Output
_____no_output_____
###Markdown
Stochastic gradients Here we use one training example at a time to calculate the gradient steps
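As a contrast, and as a minimal sketch that is not part of the original notebook, a full-batch version would average the per-example gradients before taking a single step, whereas the loop below takes a step after every individual example:

```python
def batch_gradient(pairs, theta: Vector) -> Vector:
    """Average of linear_gradient over all (x, y) pairs: the full-batch gradient."""
    grads = [linear_gradient(x, y, theta) for x, y in pairs]
    n = len(grads)
    return [sum(g[i] for g in grads) / n for i in range(len(theta))]
```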
###Code
inputs = [(x, 20 * x + 5) for x in range(-50, 50)]
theta = [random.uniform(-1, 1), random.uniform(-1, 1)]
learning_rate = 0.001
for epoch in range(100):
for x, y in inputs:
grad = linear_gradient(x, y, theta)
theta = gradient_step(theta, grad, -learning_rate)
print(epoch, theta)
###Output
0 [20.108274621088928, -0.3890550572184463]
1 [20.103628550173042, -0.15784430337372637]
2 [20.09918250047512, 0.06344662581205483]
3 [20.094927182760102, 0.2752433412342787]
4 [20.090854449810823, 0.47795318030884215]
5 [20.086956448727392, 0.6719660044610173]
6 [20.0832257045743, 0.8576549486610185]
7 [20.07965500742264, 1.0353771386943718]
8 [20.076237509713653, 1.2054743780282082]
9 [20.07296662801438, 1.3682738056445862]
10 [20.06983608174483, 1.5240885250583422]
11 [20.06683986242782, 1.6732182068542554]
12 [20.063972181799524, 1.8159496643823287]
13 [20.06122752813499, 1.9525574051756995]
14 [20.05860064833006, 2.083304159961789]
15 [20.056086446554076, 2.2084413869124764]
16 [20.05368014168373, 2.328209755991209]
17 [20.051377044125662, 2.442839611270341]
18 [20.049172772009843, 2.552551414011119]
19 [20.047063092087537, 2.6575561678668613]
20 [20.045043898683737, 2.758055822780714]
21 [20.04311134626141, 2.8542436641396707]
22 [20.04126169735246, 2.9463046849793786]
23 [20.039491436402585, 3.0344159418586236]
24 [20.037797100119768, 3.118746894557255]
25 [20.0361754521855, 3.19945973173537]
26 [20.034623390938133, 3.276709684335131]
27 [20.03313793652982, 3.3506453237233305]
28 [20.03171617359179, 3.421408845892839]
29 [20.030355435914238, 3.4891363463999814]
30 [20.02905307411413, 3.5539580824049044]
31 [20.02780658595349, 3.6159987219880825]
32 [20.026613585687915, 3.6753775847836647]
33 [20.025471758004617, 3.732208870958282]
34 [20.024378926653775, 3.7866018810682345]
35 [20.023332973880006, 3.8386612263190933]
36 [20.022331891127603, 3.8884870294569893]
37 [20.021373782454166, 3.936175118240505]
38 [20.020456760427248, 3.981817208697045]
39 [20.019579095445714, 4.0255010817522585]
40 [20.018739087243826, 4.067310752589061]
41 [20.017935091021968, 4.107326630985043]
42 [20.017165620461952, 4.14562567751338]
43 [20.016429140217177, 4.182281550851487]
44 [20.015724268710606, 4.217364749099938]
45 [20.015049636287394, 4.250942746103223]
46 [20.014403952714495, 4.283080120704653]
47 [20.01378597502875, 4.313838681185705]
48 [20.013194514460565, 4.343277583993619]
49 [20.012628414045615, 4.371453447141006]
50 [20.01208658825722, 4.398420459215999]
51 [20.011568035372246, 4.424230484791519]
52 [20.011071730850883, 4.448933163459061]
53 [20.010596714895236, 4.472576004466116]
54 [20.01014208044218, 4.495204478772731]
55 [20.009706939528638, 4.5168621063046]
56 [20.009290475161325, 4.5375905399679235]
57 [20.008891875158, 4.557429645750311]
58 [20.008510370231065, 4.576417578971538]
59 [20.00814524887334, 4.594590858346274]
60 [20.007795789585977, 4.611984435823321]
61 [20.00746133605139, 4.628631763741193]
62 [20.00714119205082, 4.644564858428533]
63 [20.00683481657478, 4.659814363112007]
64 [20.00654157694565, 4.674409606833622]
65 [20.00626093700885, 4.688378660053233]
66 [20.0059923009138, 4.701748388289354]
67 [20.005735205576336, 4.714544504461379]
68 [20.00548916033653, 4.726791619391082]
69 [20.00525365468124, 4.738513287317169]
70 [20.005028243503418, 4.749732051371429]
71 [20.004812513452155, 4.760469488056405]
72 [20.0046060409254, 4.770746248364355]
73 [20.004408419316054, 4.780582096925131]
74 [20.004219291985194, 4.789995950689194]
75 [20.004038256610244, 4.799005914654827]
76 [20.00386500754045, 4.807629317187821]
77 [20.003699183947933, 4.815882743439104]
78 [20.00354046799362, 4.823782066495483]
79 [20.003388552379008, 4.831342478409528]
80 [20.003243202425267, 4.838578520565049]
81 [20.003104049179598, 4.8455041097305935]
82 [20.002970873552083, 4.852132564899691]
83 [20.0028434121415, 4.858476634395222]
84 [20.00272141143189, 4.864548519281559]
85 [20.002604655271004, 4.870359897369896]
86 [20.002492920554257, 4.875921945826025]
87 [20.00238595286676, 4.881245361497983]
88 [20.002283589604943, 4.886340382432449]
89 [20.0021855928701, 4.8912168074015]
90 [20.002091842376988, 4.8958840153869385]
91 [20.002002090950178, 4.90035098289899]
92 [20.001916203724086, 4.904626300833264]
93 [20.001833987421463, 4.908718191643037]
94 [20.00175530458321, 4.912634524898607]
95 [20.001680007220763, 4.916382832992619]
96 [20.001607906672934, 4.919970324321837]
97 [20.001538916328176, 4.923403898248704]
98 [20.001472898762803, 4.926690159008146]
99 [20.001409710601003, 4.929835427066589]
|
_sequential/Deep Learning Sequential/Week 1/Building a Recurrent Neural Network/building_rnn_step_by_step_numpy.ipynb | ###Markdown
Building your Recurrent Neural Network - Step by StepWelcome to Course 5's first assignment! In this assignment, you will implement your first Recurrent Neural Network in numpy.Recurrent Neural Networks (RNN) are very effective for Natural Language Processing and other sequence tasks because they have "memory". They can read inputs $x^{\langle t \rangle}$ (such as words) one at a time, and remember some information/context through the hidden layer activations that get passed from one time-step to the next. This allows a uni-directional RNN to take information from the past to process later inputs. A bidirectional RNN can take context from both the past and the future. **Notation**:- Superscript $[l]$ denotes an object associated with the $l^{th}$ layer. - Example: $a^{[4]}$ is the $4^{th}$ layer activation. $W^{[5]}$ and $b^{[5]}$ are the $5^{th}$ layer parameters.- Superscript $(i)$ denotes an object associated with the $i^{th}$ example. - Example: $x^{(i)}$ is the $i^{th}$ training example input.- Superscript $\langle t \rangle$ denotes an object at the $t^{th}$ time-step. - Example: $x^{\langle t \rangle}$ is the input x at the $t^{th}$ time-step. $x^{(i)\langle t \rangle}$ is the input at the $t^{th}$ timestep of example $i$. - Subscript $i$ denotes the $i^{th}$ entry of a vector. - Example: $a^{[l]}_i$ denotes the $i^{th}$ entry of the activations in layer $l$.We assume that you are already familiar with `numpy` and/or have completed the previous courses of the specialization. Let's get started! Let's first import all the packages that you will need during this assignment.
###Code
import numpy as np
from rnn_utils import *
###Output
_____no_output_____
###Markdown
1 - Forward propagation for the basic Recurrent Neural NetworkLater this week, you will generate music using an RNN. The basic RNN that you will implement has the structure below. In this example, $T_x = T_y$. **Figure 1**: Basic RNN model Here's how you can implement an RNN: **Steps**:1. Implement the calculations needed for one time-step of the RNN.2. Implement a loop over $T_x$ time-steps in order to process all the inputs, one at a time. Let's go! 1.1 - RNN cellA Recurrent neural network can be seen as the repetition of a single cell. You are first going to implement the computations for a single time-step. The following figure describes the operations for a single time-step of an RNN cell. **Figure 2**: Basic RNN cell. Takes as input $x^{\langle t \rangle}$ (current input) and $a^{\langle t - 1\rangle}$ (previous hidden state containing information from the past), and outputs $a^{\langle t \rangle}$ which is given to the next RNN cell and also used to predict $y^{\langle t \rangle}$ **Exercise**: Implement the RNN-cell described in Figure (2).**Instructions**:1. Compute the hidden state with tanh activation: $a^{\langle t \rangle} = \tanh(W_{aa} a^{\langle t-1 \rangle} + W_{ax} x^{\langle t \rangle} + b_a)$.2. Using your new hidden state $a^{\langle t \rangle}$, compute the prediction $\hat{y}^{\langle t \rangle} = softmax(W_{ya} a^{\langle t \rangle} + b_y)$. We provided you a function: `softmax`.3. Store $(a^{\langle t \rangle}, a^{\langle t-1 \rangle}, x^{\langle t \rangle}, parameters)$ in cache4. Return $a^{\langle t \rangle}$ , $y^{\langle t \rangle}$ and cacheWe will vectorize over $m$ examples. Thus, $x^{\langle t \rangle}$ will have dimension $(n_x,m)$, and $a^{\langle t \rangle}$ will have dimension $(n_a,m)$.
###Code
# GRADED FUNCTION: rnn_cell_forward
def rnn_cell_forward(xt, a_prev, parameters):
"""
Implements a single forward step of the RNN-cell as described in Figure (2)
Arguments:
xt -- your input data at timestep "t", numpy array of shape (n_x, m).
a_prev -- Hidden state at timestep "t-1", numpy array of shape (n_a, m)
parameters -- python dictionary containing:
Wax -- Weight matrix multiplying the input, numpy array of shape (n_a, n_x)
Waa -- Weight matrix multiplying the hidden state, numpy array of shape (n_a, n_a)
Wya -- Weight matrix relating the hidden-state to the output, numpy array of shape (n_y, n_a)
ba -- Bias, numpy array of shape (n_a, 1)
by -- Bias relating the hidden-state to the output, numpy array of shape (n_y, 1)
Returns:
a_next -- next hidden state, of shape (n_a, m)
yt_pred -- prediction at timestep "t", numpy array of shape (n_y, m)
cache -- tuple of values needed for the backward pass, contains (a_next, a_prev, xt, parameters)
"""
# Retrieve parameters from "parameters"
Wax = parameters["Wax"]
Waa = parameters["Waa"]
Wya = parameters["Wya"]
ba = parameters["ba"]
by = parameters["by"]
### START CODE HERE ### (≈2 lines)
# compute next activation state using the formula given above
a_next = np.tanh(Wax @ xt + Waa @ a_prev + ba)
# compute output of the current cell using the formula given above
yt_pred = softmax(Wya @ a_next + by)
### END CODE HERE ###
# store values you need for backward propagation in cache
cache = (a_next, a_prev, xt, parameters)
return a_next, yt_pred, cache
np.random.seed(1)
xt = np.random.randn(3,10)
a_prev = np.random.randn(5,10)
Waa = np.random.randn(5,5)
Wax = np.random.randn(5,3)
Wya = np.random.randn(2,5)
ba = np.random.randn(5,1)
by = np.random.randn(2,1)
parameters = {"Waa": Waa, "Wax": Wax, "Wya": Wya, "ba": ba, "by": by}
a_next, yt_pred, cache = rnn_cell_forward(xt, a_prev, parameters)
print("a_next[4] = ", a_next[4])
print("a_next.shape = ", a_next.shape)
print("yt_pred[1] =", yt_pred[1])
print("yt_pred.shape = ", yt_pred.shape)
###Output
_____no_output_____
###Markdown
**Expected Output**: **a_next[4]**: [ 0.59584544 0.18141802 0.61311866 0.99808218 0.85016201 0.99980978 -0.18887155 0.99815551 0.6531151 0.82872037] **a_next.shape**: (5, 10) **yt[1]**: [ 0.9888161 0.01682021 0.21140899 0.36817467 0.98988387 0.88945212 0.36920224 0.9966312 0.9982559 0.17746526] **yt.shape**: (2, 10) 1.2 - RNN forward pass You can see an RNN as the repetition of the cell you've just built. If your input sequence of data is carried over 10 time steps, then you will copy the RNN cell 10 times. Each cell takes as input the hidden state from the previous cell ($a^{\langle t-1 \rangle}$) and the current time-step's input data ($x^{\langle t \rangle}$). It outputs a hidden state ($a^{\langle t \rangle}$) and a prediction ($y^{\langle t \rangle}$) for this time-step. **Figure 3**: Basic RNN. The input sequence $x = (x^{\langle 1 \rangle}, x^{\langle 2 \rangle}, ..., x^{\langle T_x \rangle})$ is carried over $T_x$ time steps. The network outputs $y = (y^{\langle 1 \rangle}, y^{\langle 2 \rangle}, ..., y^{\langle T_x \rangle})$. **Exercise**: Code the forward propagation of the RNN described in Figure (3).**Instructions**:1. Create a vector of zeros ($a$) that will store all the hidden states computed by the RNN.2. Initialize the "next" hidden state as $a_0$ (initial hidden state).3. Start looping over each time step, your incremental index is $t$ : - Update the "next" hidden state and the cache by running `rnn_cell_forward` - Store the "next" hidden state in $a$ ($t^{th}$ position) - Store the prediction in y - Add the cache to the list of caches4. Return $a$, $y$ and caches
###Code
# GRADED FUNCTION: rnn_forward
def rnn_forward(x, a0, parameters):
"""
Implement the forward propagation of the recurrent neural network described in Figure (3).
Arguments:
x -- Input data for every time-step, of shape (n_x, m, T_x).
a0 -- Initial hidden state, of shape (n_a, m)
parameters -- python dictionary containing:
Waa -- Weight matrix multiplying the hidden state, numpy array of shape (n_a, n_a)
Wax -- Weight matrix multiplying the input, numpy array of shape (n_a, n_x)
Wya -- Weight matrix relating the hidden-state to the output, numpy array of shape (n_y, n_a)
ba -- Bias numpy array of shape (n_a, 1)
by -- Bias relating the hidden-state to the output, numpy array of shape (n_y, 1)
Returns:
a -- Hidden states for every time-step, numpy array of shape (n_a, m, T_x)
y_pred -- Predictions for every time-step, numpy array of shape (n_y, m, T_x)
caches -- tuple of values needed for the backward pass, contains (list of caches, x)
"""
# Initialize "caches" which will contain the list of all caches
caches = []
# Retrieve dimensions from shapes of x and Wy
n_x, m, T_x = x.shape
n_y, n_a = parameters["Wya"].shape
### START CODE HERE ###
# initialize "a" and "y" with zeros (≈2 lines)
a = np.zeros((n_a, m, T_x))
y_pred = np.zeros((n_y, m, T_x))
# Initialize a_next (≈1 line)
    a_next = a0
# loop over all time-steps
for t in range(T_x):
# Update next hidden state, compute the prediction, get the cache (≈1 line)
        a_next, yt_pred, cache = rnn_cell_forward(x[:,:,t], a_next, parameters)
# Save the value of the new "next" hidden state in a (≈1 line)
a[:,:,t] = a_next
# Save the value of the prediction in y (≈1 line)
y_pred[:,:,t] = yt_pred
# Append "cache" to "caches" (≈1 line)
caches.append(cache)
### END CODE HERE ###
# store values needed for backward propagation in cache
caches = (caches, x)
return a, y_pred, caches
np.random.seed(1)
x = np.random.randn(3,10,4)
a0 = np.random.randn(5,10)
Waa = np.random.randn(5,5)
Wax = np.random.randn(5,3)
Wya = np.random.randn(2,5)
ba = np.random.randn(5,1)
by = np.random.randn(2,1)
parameters = {"Waa": Waa, "Wax": Wax, "Wya": Wya, "ba": ba, "by": by}
a, y_pred, caches = rnn_forward(x, a0, parameters)
print("a[4][1] = ", a[4][1])
print("a.shape = ", a.shape)
print("y_pred[1][3] =", y_pred[1][3])
print("y_pred.shape = ", y_pred.shape)
print("caches[1][1][3] =", caches[1][1][3])
print("len(caches) = ", len(caches))
###Output
_____no_output_____
###Markdown
**Expected Output**: **a[4][1]**: [-0.99999375 0.77911235 -0.99861469 -0.99833267] **a.shape**: (5, 10, 4) **y[1][3]**: [ 0.79560373 0.86224861 0.11118257 0.81515947] **y.shape**: (2, 10, 4) **cache[1][1][3]**: [-1.1425182 -0.34934272 -0.20889423 0.58662319] **len(cache)**: 2 Congratulations! You've successfully built the forward propagation of a recurrent neural network from scratch. This will work well enough for some applications, but it suffers from vanishing gradient problems. So it works best when each output $y^{\langle t \rangle}$ can be estimated using mainly "local" context (meaning information from inputs $x^{\langle t' \rangle}$ where $t'$ is not too far from $t$). In the next part, you will build a more complex LSTM model, which is better at addressing vanishing gradients. The LSTM will be better able to remember a piece of information and keep it saved for many timesteps. 2 - Long Short-Term Memory (LSTM) networkThe following figure shows the operations of an LSTM-cell. **Figure 4**: LSTM-cell. This tracks and updates a "cell state" or memory variable $c^{\langle t \rangle}$ at every time-step, which can be different from $a^{\langle t \rangle}$. Similar to the RNN example above, you will start by implementing the LSTM cell for a single time-step. Then you can iteratively call it from inside a for-loop to have it process an input with $T_x$ time-steps. About the gates - Forget gateFor the sake of this illustration, let's assume we are reading words in a piece of text, and want to use an LSTM to keep track of grammatical structures, such as whether the subject is singular or plural. If the subject changes from a singular word to a plural word, we need to find a way to get rid of our previously stored memory value of the singular/plural state. In an LSTM, the forget gate lets us do this: $$\Gamma_f^{\langle t \rangle} = \sigma(W_f[a^{\langle t-1 \rangle}, x^{\langle t \rangle}] + b_f)\tag{1} $$Here, $W_f$ are weights that govern the forget gate's behavior. We concatenate $[a^{\langle t-1 \rangle}, x^{\langle t \rangle}]$ and multiply by $W_f$. The equation above results in a vector $\Gamma_f^{\langle t \rangle}$ with values between 0 and 1. This forget gate vector will be multiplied element-wise by the previous cell state $c^{\langle t-1 \rangle}$. So if one of the values of $\Gamma_f^{\langle t \rangle}$ is 0 (or close to 0) then it means that the LSTM should remove that piece of information (e.g. the singular subject) in the corresponding component of $c^{\langle t-1 \rangle}$. If one of the values is 1, then it will keep the information. - Update gateOnce we forget that the subject being discussed is singular, we need to find a way to update it to reflect that the new subject is now plural. Here is the formula for the update gate: $$\Gamma_u^{\langle t \rangle} = \sigma(W_u[a^{\langle t-1 \rangle}, x^{\langle t \rangle}] + b_u)\tag{2} $$ Similar to the forget gate, here $\Gamma_u^{\langle t \rangle}$ is again a vector of values between 0 and 1. This will be multiplied element-wise with $\tilde{c}^{\langle t \rangle}$, in order to compute $c^{\langle t \rangle}$. - Updating the cell To update the new subject we need to create a new vector of numbers that we can add to our previous cell state.
The equation we use is: $$ \tilde{c}^{\langle t \rangle} = \tanh(W_c[a^{\langle t-1 \rangle}, x^{\langle t \rangle}] + b_c)\tag{3} $$Finally, the new cell state is: $$ c^{\langle t \rangle} = \Gamma_f^{\langle t \rangle}* c^{\langle t-1 \rangle} + \Gamma_u^{\langle t \rangle} *\tilde{c}^{\langle t \rangle} \tag{4} $$ - Output gateTo decide which outputs we will use, we will use the following two formulas: $$ \Gamma_o^{\langle t \rangle}= \sigma(W_o[a^{\langle t-1 \rangle}, x^{\langle t \rangle}] + b_o)\tag{5}$$ $$ a^{\langle t \rangle} = \Gamma_o^{\langle t \rangle}* \tanh(c^{\langle t \rangle})\tag{6} $$Where in equation 5 you decide what to output using a sigmoid function and in equation 6 you multiply that by the $\tanh$ of the new cell state. 2.1 - LSTM cell**Exercise**: Implement the LSTM cell described in Figure (4).**Instructions**:1. Concatenate $a^{\langle t-1 \rangle}$ and $x^{\langle t \rangle}$ in a single matrix: $concat = \begin{bmatrix} a^{\langle t-1 \rangle} \\ x^{\langle t \rangle} \end{bmatrix}$2. Compute all the formulas 2-6. You can use `sigmoid()` (provided) and `np.tanh()`.3. Compute the prediction $y^{\langle t \rangle}$. You can use `softmax()` (provided).
###Code
# GRADED FUNCTION: lstm_cell_forward
def lstm_cell_forward(xt, a_prev, c_prev, parameters):
"""
Implement a single forward step of the LSTM-cell as described in Figure (4)
Arguments:
xt -- your input data at timestep "t", numpy array of shape (n_x, m).
a_prev -- Hidden state at timestep "t-1", numpy array of shape (n_a, m)
c_prev -- Memory state at timestep "t-1", numpy array of shape (n_a, m)
parameters -- python dictionary containing:
Wf -- Weight matrix of the forget gate, numpy array of shape (n_a, n_a + n_x)
bf -- Bias of the forget gate, numpy array of shape (n_a, 1)
Wi -- Weight matrix of the save gate, numpy array of shape (n_a, n_a + n_x)
bi -- Bias of the save gate, numpy array of shape (n_a, 1)
Wc -- Weight matrix of the first "tanh", numpy array of shape (n_a, n_a + n_x)
bc -- Bias of the first "tanh", numpy array of shape (n_a, 1)
Wo -- Weight matrix of the focus gate, numpy array of shape (n_a, n_a + n_x)
bo -- Bias of the focus gate, numpy array of shape (n_a, 1)
Wy -- Weight matrix relating the hidden-state to the output, numpy array of shape (n_y, n_a)
by -- Bias relating the hidden-state to the output, numpy array of shape (n_y, 1)
Returns:
a_next -- next hidden state, of shape (n_a, m)
c_next -- next memory state, of shape (n_a, m)
yt_pred -- prediction at timestep "t", numpy array of shape (n_y, m)
cache -- tuple of values needed for the backward pass, contains (a_next, c_next, a_prev, c_prev, xt, parameters)
Note: ft/it/ot stand for the forget/update/output gates, cct stands for the candidate value (c tilda),
c stands for the memory value
"""
# Retrieve parameters from "parameters"
Wf = parameters["Wf"]
bf = parameters["bf"]
Wi = parameters["Wi"]
bi = parameters["bi"]
Wc = parameters["Wc"]
bc = parameters["bc"]
Wo = parameters["Wo"]
bo = parameters["bo"]
Wy = parameters["Wy"]
by = parameters["by"]
# Retrieve dimensions from shapes of xt and Wy
n_x, m = xt.shape
n_y, n_a = Wy.shape
### START CODE HERE ###
# Concatenate a_prev and xt (≈3 lines)
    concat = np.vstack((a_prev, xt))  # stack a_prev on top of xt: shape (n_a + n_x, m)
# Compute values for ft, it, cct, c_next, ot, a_next using the formulas given figure (4) (≈6 lines)
ft = sigmoid(Wf @ concat + bf)
it = sigmoid(Wi @ concat + bi)
cct = np.tanh(Wc @ concat + bc)
c_next = ft * c_prev + it * cct
ot = sigmoid(Wo @ concat + bo)
a_next = ot * np.tanh(c_next)
# Compute prediction of the LSTM cell (≈1 line)
yt_pred = softmax(Wy @ a_next + by)
### END CODE HERE ###
# store values needed for backward propagation in cache
cache = (a_next, c_next, a_prev, c_prev, ft, it, cct, ot, xt, parameters)
return a_next, c_next, yt_pred, cache
np.random.seed(1)
xt = np.random.randn(3,10)
a_prev = np.random.randn(5,10)
c_prev = np.random.randn(5,10)
Wf = np.random.randn(5, 5+3)
bf = np.random.randn(5,1)
Wi = np.random.randn(5, 5+3)
bi = np.random.randn(5,1)
Wo = np.random.randn(5, 5+3)
bo = np.random.randn(5,1)
Wc = np.random.randn(5, 5+3)
bc = np.random.randn(5,1)
Wy = np.random.randn(2,5)
by = np.random.randn(2,1)
parameters = {"Wf": Wf, "Wi": Wi, "Wo": Wo, "Wc": Wc, "Wy": Wy, "bf": bf, "bi": bi, "bo": bo, "bc": bc, "by": by}
a_next, c_next, yt, cache = lstm_cell_forward(xt, a_prev, c_prev, parameters)
print("a_next[4] = ", a_next[4])
print("a_next.shape = ", c_next.shape)
print("c_next[2] = ", c_next[2])
print("c_next.shape = ", c_next.shape)
print("yt[1] =", yt[1])
print("yt.shape = ", yt.shape)
print("cache[1][3] =", cache[1][3])
print("len(cache) = ", len(cache))
###Output
_____no_output_____
###Markdown
**Expected Output**: **a_next[4]**: [-0.66408471 0.0036921 0.02088357 0.22834167 -0.85575339 0.00138482 0.76566531 0.34631421 -0.00215674 0.43827275] **a_next.shape**: (5, 10) **c_next[2]**: [ 0.63267805 1.00570849 0.35504474 0.20690913 -1.64566718 0.11832942 0.76449811 -0.0981561 -0.74348425 -0.26810932] **c_next.shape**: (5, 10) **yt[1]**: [ 0.79913913 0.15986619 0.22412122 0.15606108 0.97057211 0.31146381 0.00943007 0.12666353 0.39380172 0.07828381] **yt.shape**: (2, 10) **cache[1][3]**: [-0.16263996 1.03729328 0.72938082 -0.54101719 0.02752074 -0.30821874 0.07651101 -1.03752894 1.41219977 -0.37647422] **len(cache)**: 10 2.2 - Forward pass for LSTMNow that you have implemented one step of an LSTM, you can now iterate this over this using a for-loop to process a sequence of $T_x$ inputs. **Figure 4**: LSTM over multiple time-steps. **Exercise:** Implement `lstm_forward()` to run an LSTM over $T_x$ time-steps. **Note**: $c^{\langle 0 \rangle}$ is initialized with zeros.
###Code
# GRADED FUNCTION: lstm_forward
def lstm_forward(x, a0, parameters):
"""
Implement the forward propagation of the recurrent neural network using an LSTM-cell described in Figure (3).
Arguments:
x -- Input data for every time-step, of shape (n_x, m, T_x).
a0 -- Initial hidden state, of shape (n_a, m)
parameters -- python dictionary containing:
Wf -- Weight matrix of the forget gate, numpy array of shape (n_a, n_a + n_x)
bf -- Bias of the forget gate, numpy array of shape (n_a, 1)
Wi -- Weight matrix of the save gate, numpy array of shape (n_a, n_a + n_x)
bi -- Bias of the save gate, numpy array of shape (n_a, 1)
Wc -- Weight matrix of the first "tanh", numpy array of shape (n_a, n_a + n_x)
bc -- Bias of the first "tanh", numpy array of shape (n_a, 1)
Wo -- Weight matrix of the focus gate, numpy array of shape (n_a, n_a + n_x)
bo -- Bias of the focus gate, numpy array of shape (n_a, 1)
Wy -- Weight matrix relating the hidden-state to the output, numpy array of shape (n_y, n_a)
by -- Bias relating the hidden-state to the output, numpy array of shape (n_y, 1)
Returns:
a -- Hidden states for every time-step, numpy array of shape (n_a, m, T_x)
y -- Predictions for every time-step, numpy array of shape (n_y, m, T_x)
caches -- tuple of values needed for the backward pass, contains (list of all the caches, x)
"""
# Initialize "caches", which will track the list of all the caches
caches = []
### START CODE HERE ###
# Retrieve dimensions from shapes of xt and Wy (≈2 lines)
n_x, m, T_x = x.shape
n_y, n_a = parameters['Wy'].shape
# initialize "a", "c" and "y" with zeros (≈3 lines)
a = np.zeros((n_a, m, T_x))
c = np.zeros_like(a)
y = np.zeros((n_y, m, T_x))
# Initialize a_next and c_next (≈2 lines)
a_next = a0
c_next = np.zeros_like(a0)
# loop over all time-steps
for t in range(T_x):
# Update next hidden state, next memory state, compute the prediction, get the cache (≈1 line)
# print('wtf', x[:,:,t].shape, a_next[:,:,t].shape, c_next[:,:,t].shape, c_next.shape)
a_next, c_next, yt, cache = lstm_cell_forward(x[:,:,t], a_next, c_next, parameters)
# Save the value of the new "next" hidden state in a (≈1 line)
a[:,:,t] = a_next
# Save the value of the prediction in y (≈1 line)
y[:,:,t] = yt
# Save the value of the next cell state (≈1 line)
c[:,:,t] = c_next
# Append the cache into caches (≈1 line)
caches.append(cache)
### END CODE HERE ###
# store values needed for backward propagation in cache
caches = (caches, x)
return a, y, c, caches
np.random.seed(1)
x = np.random.randn(3,10,7)
a0 = np.random.randn(5,10)
Wf = np.random.randn(5, 5+3)
bf = np.random.randn(5,1)
Wi = np.random.randn(5, 5+3)
bi = np.random.randn(5,1)
Wo = np.random.randn(5, 5+3)
bo = np.random.randn(5,1)
Wc = np.random.randn(5, 5+3)
bc = np.random.randn(5,1)
Wy = np.random.randn(2,5)
by = np.random.randn(2,1)
parameters = {"Wf": Wf, "Wi": Wi, "Wo": Wo, "Wc": Wc, "Wy": Wy, "bf": bf, "bi": bi, "bo": bo, "bc": bc, "by": by}
a, y, c, caches = lstm_forward(x, a0, parameters)
print("a[4][3][6] = ", a[4][3][6])
print("a.shape = ", a.shape)
print("y[1][4][3] =", y[1][4][3])
print("y.shape = ", y.shape)
print("caches[1][1[1]] =", caches[1][1][1])
print("c[1][2][1]", c[1][2][1])
print("len(caches) = ", len(caches))
###Output
_____no_output_____
###Markdown
**Expected Output**: **a[4][3][6]** = 0.172117767533 **a.shape** = (5, 10, 7) **y[1][4][3]** = 0.95087346185 **y.shape** = (2, 10, 7) **caches[1][1][1]** = [ 0.82797464 0.23009474 0.76201118 -0.22232814 -0.20075807 0.18656139 0.41005165] **c[1][2][1]** = -0.855544916718 **len(caches)** = 2 Congratulations! You have now implemented the forward passes for the basic RNN and the LSTM. When using a deep learning framework, implementing the forward pass is sufficient to build systems that achieve great performance. The rest of this notebook is optional, and will not be graded. 3 - Backpropagation in recurrent neural networks (OPTIONAL / UNGRADED)In modern deep learning frameworks, you only have to implement the forward pass, and the framework takes care of the backward pass, so most deep learning engineers do not need to bother with the details of the backward pass. If however you are an expert in calculus and want to see the details of backprop in RNNs, you can work through this optional portion of the notebook. When in an earlier course you implemented a simple (fully connected) neural network, you used backpropagation to compute the derivatives with respect to the cost to update the parameters. Similarly, in recurrent neural networks you can calculate the derivatives with respect to the cost in order to update the parameters. The backprop equations are quite complicated and we did not derive them in lecture. However, we will briefly present them below. 3.1 - Basic RNN backward passWe will start by computing the backward pass for the basic RNN-cell. **Figure 5**: RNN-cell's backward pass. Just like in a fully-connected neural network, the derivative of the cost function $J$ backpropagates through the RNN by following the chain rule from calculus. The chain rule is also used to calculate $(\frac{\partial J}{\partial W_{ax}},\frac{\partial J}{\partial W_{aa}},\frac{\partial J}{\partial b})$ to update the parameters $(W_{ax}, W_{aa}, b_a)$. Deriving the one step backward functions: To compute the `rnn_cell_backward` you need to compute the following equations. It is a good exercise to derive them by hand. The derivative of $\tanh$ is $1-\tanh(x)^2$. You can find the complete proof [here](https://www.wyzant.com/resources/lessons/math/calculus/derivative_proofs/tanx). Note that: $\text{sech}(x)^2 = 1 - \tanh(x)^2$Similarly for $\frac{ \partial a^{\langle t \rangle} } {\partial W_{ax}}, \frac{ \partial a^{\langle t \rangle} } {\partial W_{aa}}, \frac{ \partial a^{\langle t \rangle} } {\partial b}$, the derivative of $\tanh(u)$ is $(1-\tanh(u)^2)du$. The final two equations also follow the same rule and are derived using the $\tanh$ derivative. Note that the terms are arranged so that the dimensions match.
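As a small sanity check of the $\tanh$ derivative quoted above (an illustrative sketch, not part of the assignment), you can compare the analytic expression $1-\tanh(z)^2$ against a central finite-difference estimate:
```python
import numpy as np

z = np.linspace(-2.0, 2.0, 5)
eps = 1e-6
numeric = (np.tanh(z + eps) - np.tanh(z - eps)) / (2 * eps)   # central difference
analytic = 1 - np.tanh(z) ** 2
print(np.max(np.abs(numeric - analytic)))                     # should be ~1e-10 or smaller
```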
###Code
def rnn_cell_backward(da_next, cache):
"""
Implements the backward pass for the RNN-cell (single time-step).
Arguments:
da_next -- Gradient of loss with respect to next hidden state
cache -- python dictionary containing useful values (output of rnn_step_forward())
Returns:
gradients -- python dictionary containing:
dx -- Gradients of input data, of shape (n_x, m)
da_prev -- Gradients of previous hidden state, of shape (n_a, m)
dWax -- Gradients of input-to-hidden weights, of shape (n_a, n_x)
dWaa -- Gradients of hidden-to-hidden weights, of shape (n_a, n_a)
dba -- Gradients of bias vector, of shape (n_a, 1)
"""
# Retrieve values from cache
(a_next, a_prev, xt, parameters) = cache
# Retrieve values from parameters
Wax = parameters["Wax"]
Waa = parameters["Waa"]
Wya = parameters["Wya"]
ba = parameters["ba"]
by = parameters["by"]
### START CODE HERE ###
# compute the gradient of tanh with respect to a_next (≈1 line)
# dtanh = da_next * (1 - np.tanh(a_next)**2)
# a_next = Wax @ xt + Waa @ a_prev + ba
dtanh = da_next * (1 - np.tanh(Wax @ xt + Waa @ a_prev + ba)**2)
# compute the gradient of the loss with respect to Wax (≈2 lines)
dxt = Wax.T @ dtanh
dWax = dtanh @ xt.T
# compute the gradient with respect to Waa (≈2 lines)
da_prev = Waa.T @ dtanh
dWaa = dtanh @ a_prev.T
# compute the gradient with respect to b (≈1 line)
dba = np.sum(dtanh, axis=1, keepdims=True)
# print(da_next.shape, dtanh.shape, dxt.shape, dWax.shape, da_prev.shape, dWaa.shape, dba.shape)
### END CODE HERE ###
# Store the gradients in a python dictionary
gradients = {"dxt": dxt, "da_prev": da_prev, "dWax": dWax, "dWaa": dWaa, "dba": dba}
return gradients
np.random.seed(1)
xt = np.random.randn(3,10)
a_prev = np.random.randn(5,10)
Wax = np.random.randn(5,3)
Waa = np.random.randn(5,5)
Wya = np.random.randn(2,5)
ba = np.random.randn(5,1)
by = np.random.randn(2,1)
parameters = {"Wax": Wax, "Waa": Waa, "Wya": Wya, "ba": ba, "by": by}
a_next, yt, cache = rnn_cell_forward(xt, a_prev, parameters)
da_next = np.random.randn(5,10)
gradients = rnn_cell_backward(da_next, cache)
print("gradients[\"dxt\"][1][2] =", gradients["dxt"][1][2])
print("gradients[\"dxt\"].shape =", gradients["dxt"].shape)
print("gradients[\"da_prev\"][2][3] =", gradients["da_prev"][2][3])
print("gradients[\"da_prev\"].shape =", gradients["da_prev"].shape)
print("gradients[\"dWax\"][3][1] =", gradients["dWax"][3][1])
print("gradients[\"dWax\"].shape =", gradients["dWax"].shape)
print("gradients[\"dWaa\"][1][2] =", gradients["dWaa"][1][2])
print("gradients[\"dWaa\"].shape =", gradients["dWaa"].shape)
print("gradients[\"dba\"][4] =", gradients["dba"][4])
print("gradients[\"dba\"].shape =", gradients["dba"].shape)
###Output
_____no_output_____
###Markdown
**Expected Output**: **gradients["dxt"][1][2]** = -0.460564103059 **gradients["dxt"].shape** = (3, 10) **gradients["da_prev"][2][3]** = 0.0842968653807 **gradients["da_prev"].shape** = (5, 10) **gradients["dWax"][3][1]** = 0.393081873922 **gradients["dWax"].shape** = (5, 3) **gradients["dWaa"][1][2]** = -0.28483955787 **gradients["dWaa"].shape** = (5, 5) **gradients["dba"][4]** = [ 0.80517166] **gradients["dba"].shape** = (5, 1) Backward pass through the RNNComputing the gradients of the cost with respect to $a^{\langle t \rangle}$ at every time-step $t$ is useful because it is what helps the gradient backpropagate to the previous RNN-cell. To do so, you need to iterate through all the time steps starting at the end, and at each step, you increment the overall $db_a$, $dW_{aa}$, $dW_{ax}$ and you store $dx$.**Instructions**:Implement the `rnn_backward` function. Initialize the return variables with zeros first and then loop through all the time steps while calling the `rnn_cell_backward` at each time timestep, update the other variables accordingly.
###Code
def rnn_backward(da, caches):
"""
Implement the backward pass for a RNN over an entire sequence of input data.
Arguments:
da -- Upstream gradients of all hidden states, of shape (n_a, m, T_x)
caches -- tuple containing information from the forward pass (rnn_forward)
Returns:
gradients -- python dictionary containing:
dx -- Gradient w.r.t. the input data, numpy-array of shape (n_x, m, T_x)
da0 -- Gradient w.r.t the initial hidden state, numpy-array of shape (n_a, m)
dWax -- Gradient w.r.t the input's weight matrix, numpy-array of shape (n_a, n_x)
dWaa -- Gradient w.r.t the hidden state's weight matrix, numpy-arrayof shape (n_a, n_a)
dba -- Gradient w.r.t the bias, of shape (n_a, 1)
"""
### START CODE HERE ###
# Retrieve values from the first cache (t=1) of caches (≈2 lines)
(caches, x) = caches
(a1, a0, x1, parameters) = caches[0]
# Retrieve dimensions from da's and x1's shapes (≈2 lines)
n_a, m, T_x = da.shape
n_x, m = x1.shape
# initialize the gradients with the right sizes (≈6 lines)
dx = np.zeros((n_x, m, T_x))
dWax = np.zeros_like((parameters['Wax']))
dWaa = np.zeros_like((parameters['Waa']))
dba = np.zeros_like((parameters['ba']))
da0 = np.zeros_like(a0)
da_prevt = np.zeros_like(a0) #da[:,:,-1]
# Loop through all the time steps
for t in reversed(range(T_x)):
# Compute gradients at time step t. Choose wisely the "da_next" and the "cache" to use in the backward propagation step. (≈1 line)
gradients = rnn_cell_backward(da[:,:,t] + da_prevt, caches[t])
# Retrieve derivatives from gradients (≈ 1 line)
dxt, da_prevt, dWaxt, dWaat, dbat = gradients["dxt"], gradients["da_prev"], gradients["dWax"], gradients["dWaa"], gradients["dba"]
# Increment global derivatives w.r.t parameters by adding their derivative at time-step t (≈4 lines)
# print(dxt.shape, da_prevt.shape, dWaxt.shape, dWaat.shape, dbat.shape)
dx[:, :, t] = dxt
dWax += dWaxt
dWaa += dWaat
dba += dbat
# Set da0 to the gradient of a which has been backpropagated through all time-steps (≈1 line)
da0 = da_prevt
### END CODE HERE ###
# Store the gradients in a python dictionary
gradients = {"dx": dx, "da0": da0, "dWax": dWax, "dWaa": dWaa,"dba": dba}
return gradients
np.random.seed(1)
x = np.random.randn(3,10,4)
a0 = np.random.randn(5,10)
Wax = np.random.randn(5,3)
Waa = np.random.randn(5,5)
Wya = np.random.randn(2,5)
ba = np.random.randn(5,1)
by = np.random.randn(2,1)
parameters = {"Wax": Wax, "Waa": Waa, "Wya": Wya, "ba": ba, "by": by}
a, y, caches = rnn_forward(x, a0, parameters)
da = np.random.randn(5, 10, 4)
gradients = rnn_backward(da, caches)
print("gradients[\"dx\"][1][2] =", gradients["dx"][1][2])
print("gradients[\"dx\"].shape =", gradients["dx"].shape)
print("gradients[\"da0\"][2][3] =", gradients["da0"][2][3])
print("gradients[\"da0\"].shape =", gradients["da0"].shape)
print("gradients[\"dWax\"][3][1] =", gradients["dWax"][3][1])
print("gradients[\"dWax\"].shape =", gradients["dWax"].shape)
print("gradients[\"dWaa\"][1][2] =", gradients["dWaa"][1][2])
print("gradients[\"dWaa\"].shape =", gradients["dWaa"].shape)
print("gradients[\"dba\"][4] =", gradients["dba"][4])
print("gradients[\"dba\"].shape =", gradients["dba"].shape)
###Output
_____no_output_____
###Markdown
**Expected Output**: **gradients["dx"][1][2]** = [-2.07101689 -0.59255627 0.02466855 0.01483317] **gradients["dx"].shape** = (3, 10, 4) **gradients["da0"][2][3]** = -0.314942375127 **gradients["da0"].shape** = (5, 10) **gradients["dWax"][3][1]** = 11.2641044965 **gradients["dWax"].shape** = (5, 3) **gradients["dWaa"][1][2]** = 2.30333312658 **gradients["dWaa"].shape** = (5, 5) **gradients["dba"][4]** = [-0.74747722] **gradients["dba"].shape** = (5, 1) 3.2 - LSTM backward pass 3.2.1 One Step backwardThe LSTM backward pass is slightly more complicated than the forward one. We have provided you with all the equations for the LSTM backward pass below. (If you enjoy calculus exercises feel free to try deriving these from scratch yourself.) 3.2.2 gate derivatives$$d \Gamma_o^{\langle t \rangle} = da_{next}*\tanh(c_{next}) * \Gamma_o^{\langle t \rangle}*(1-\Gamma_o^{\langle t \rangle})\tag{7}$$$$d\tilde c^{\langle t \rangle} = dc_{next}*\Gamma_i^{\langle t \rangle}+ \Gamma_o^{\langle t \rangle} (1-\tanh(c_{next})^2) * i_t * da_{next} * \tilde c^{\langle t \rangle} * (1-\tanh(\tilde c)^2) \tag{8}$$$$d\Gamma_u^{\langle t \rangle} = dc_{next}*\tilde c^{\langle t \rangle} + \Gamma_o^{\langle t \rangle} (1-\tanh(c_{next})^2) * \tilde c^{\langle t \rangle} * da_{next}*\Gamma_u^{\langle t \rangle}*(1-\Gamma_u^{\langle t \rangle})\tag{9}$$$$d\Gamma_f^{\langle t \rangle} = dc_{next}*c_{prev} + \Gamma_o^{\langle t \rangle} (1-\tanh(c_{next})^2) * c_{prev} * da_{next}*\Gamma_f^{\langle t \rangle}*(1-\Gamma_f^{\langle t \rangle})\tag{10}$$ 3.2.3 parameter derivatives $$ dW_f = d\Gamma_f^{\langle t \rangle} * \begin{pmatrix} a_{prev} \\ x_t\end{pmatrix}^T \tag{11} $$$$ dW_u = d\Gamma_u^{\langle t \rangle} * \begin{pmatrix} a_{prev} \\ x_t\end{pmatrix}^T \tag{12} $$$$ dW_c = d\tilde c^{\langle t \rangle} * \begin{pmatrix} a_{prev} \\ x_t\end{pmatrix}^T \tag{13} $$$$ dW_o = d\Gamma_o^{\langle t \rangle} * \begin{pmatrix} a_{prev} \\ x_t\end{pmatrix}^T \tag{14}$$To calculate $db_f, db_u, db_c, db_o$ you just need to sum across the horizontal (axis=1) axis on $d\Gamma_f^{\langle t \rangle}, d\Gamma_u^{\langle t \rangle}, d\tilde c^{\langle t \rangle}, d\Gamma_o^{\langle t \rangle}$ respectively. Note that you should use the `keepdims=True` option.Finally, you will compute the derivative with respect to the previous hidden state, previous memory state, and input.$$ da_{prev} = W_f^T*d\Gamma_f^{\langle t \rangle} + W_u^T * d\Gamma_u^{\langle t \rangle}+ W_c^T * d\tilde c^{\langle t \rangle} + W_o^T * d\Gamma_o^{\langle t \rangle} \tag{15}$$Here, the weights in equation 15 are the part of each weight matrix that multiplies $a_{prev}$, i.e. the first $n_a$ columns ($W_f[:,:n_a]$ etc...)$$ dc_{prev} = dc_{next}\Gamma_f^{\langle t \rangle} + \Gamma_o^{\langle t \rangle} * (1- \tanh(c_{next})^2)*\Gamma_f^{\langle t \rangle}*da_{next} \tag{16}$$$$ dx^{\langle t \rangle} = W_f^T*d\Gamma_f^{\langle t \rangle} + W_u^T * d\Gamma_u^{\langle t \rangle}+ W_c^T * d\tilde c_t + W_o^T * d\Gamma_o^{\langle t \rangle}\tag{17} $$where the weights in equation 17 are the part that multiplies $x_t$, i.e. the columns from $n_a$ onward ($W_f[:,n_a:]$ etc...)**Exercise:** Implement `lstm_cell_backward` by implementing equations $7-17$ below. Good luck! :)
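The gate derivatives in equations (7), (9) and (10) all end in $\Gamma*(1-\Gamma)$ because they use the sigmoid identity $\sigma'(z) = \sigma(z)(1-\sigma(z))$. A short numerical check of that identity (an illustrative sketch, independent of the graded function; `sigmoid_` is a local helper, not the notebook's own `sigmoid`):
```python
import numpy as np

def sigmoid_(z):
    return 1.0 / (1.0 + np.exp(-z))

z = np.linspace(-3.0, 3.0, 7)
eps = 1e-6
numeric = (sigmoid_(z + eps) - sigmoid_(z - eps)) / (2 * eps)  # central difference
analytic = sigmoid_(z) * (1 - sigmoid_(z))
print(np.max(np.abs(numeric - analytic)))                      # should be ~1e-11 or smaller
```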
###Code
def lstm_cell_backward(da_next, dc_next, cache):
"""
Implement the backward pass for the LSTM-cell (single time-step).
Arguments:
da_next -- Gradients of next hidden state, of shape (n_a, m)
dc_next -- Gradients of next cell state, of shape (n_a, m)
cache -- cache storing information from the forward pass
Returns:
gradients -- python dictionary containing:
dxt -- Gradient of input data at time-step t, of shape (n_x, m)
da_prev -- Gradient w.r.t. the previous hidden state, numpy array of shape (n_a, m)
dc_prev -- Gradient w.r.t. the previous memory state, of shape (n_a, m)
dWf -- Gradient w.r.t. the weight matrix of the forget gate, numpy array of shape (n_a, n_a + n_x)
dWi -- Gradient w.r.t. the weight matrix of the input gate, numpy array of shape (n_a, n_a + n_x)
dWc -- Gradient w.r.t. the weight matrix of the memory gate, numpy array of shape (n_a, n_a + n_x)
dWo -- Gradient w.r.t. the weight matrix of the output gate, numpy array of shape (n_a, n_a + n_x)
dbf -- Gradient w.r.t. biases of the forget gate, of shape (n_a, 1)
dbi -- Gradient w.r.t. biases of the update gate, of shape (n_a, 1)
dbc -- Gradient w.r.t. biases of the memory gate, of shape (n_a, 1)
dbo -- Gradient w.r.t. biases of the output gate, of shape (n_a, 1)
"""
# Retrieve information from "cache"
(a_next, c_next, a_prev, c_prev, ft, it, cct, ot, xt, parameters) = cache
### START CODE HERE ###
# Retrieve dimensions from xt's and a_next's shape (≈2 lines)
n_x, m = xt.shape
n_a, m = a_next.shape
# Compute gates related derivatives, you can find their values can be found by looking carefully at equations (7) to (10) (≈4 lines)
dc_next += da_next * (1 - np.tanh(c_next)**2) * ot
# Code equations (7) to (10) (≈4 lines)
dot = np.tanh(c_next) * da_next * ot * (1-ot)
dcct = dc_next * it * (1 - cct**2)
dit = dc_next * cct * it * (1 - it)
dft = dc_next * c_prev * ft * (1 - ft)
# Compute parameters related derivatives. Use equations (11)-(14) (≈8 lines)
concatt = np.vstack((a_prev, xt)).T
dWf = dft @ concatt
dWi = dit @ concatt
dWc = dcct @ concatt
dWo = dot @ concatt
dbf = np.sum(dft ,axis=1, keepdims=True)
dbi = np.sum(dit ,axis=1, keepdims=True)
dbc = np.sum(dcct ,axis=1, keepdims=True)
dbo = np.sum(dot ,axis=1, keepdims=True)
# print(dWf.shape, dWi.shape, dWc.shape, dWo.shape)
# print(dbf.shape, dbi.shape, dbc.shape, dbo.shape)
# Compute derivatives w.r.t previous hidden state, previous memory state and input. Use equations (15)-(17). (≈3 lines)
# print(Wf.T.shape, dft.shape, Wi.T.shape, dit.shape, Wc.T.shape, dcct.shape, Wo.T.shape, dot.shape)
# print(a_prev.shape, xt.shape, concatt.shape, Wf.T.shape, Wf.T[:n_a,:].shape, Wf.T[n_a:,:].shape)
Wf, Wi, Wc, Wo = parameters['Wf'], parameters['Wi'], parameters['Wc'], parameters['Wo']
da_prev = (Wf.T @ dft + Wi.T @ dit + Wc.T @ dcct + Wo.T @ dot)[:n_a,:]
dc_prev = dc_next * ft
dxt = (Wf.T @ dft + Wi.T @ dit + Wc.T @ dcct + Wo.T @ dot)[n_a:,:]
### END CODE HERE ###
# Save gradients in dictionary
gradients = {"dxt": dxt, "da_prev": da_prev, "dc_prev": dc_prev, "dWf": dWf,"dbf": dbf, "dWi": dWi,"dbi": dbi,
"dWc": dWc,"dbc": dbc, "dWo": dWo,"dbo": dbo}
return gradients
np.random.seed(1)
xt = np.random.randn(3,10)
a_prev = np.random.randn(5,10)
c_prev = np.random.randn(5,10)
Wf = np.random.randn(5, 5+3)
bf = np.random.randn(5,1)
Wi = np.random.randn(5, 5+3)
bi = np.random.randn(5,1)
Wo = np.random.randn(5, 5+3)
bo = np.random.randn(5,1)
Wc = np.random.randn(5, 5+3)
bc = np.random.randn(5,1)
Wy = np.random.randn(2,5)
by = np.random.randn(2,1)
parameters = {"Wf": Wf, "Wi": Wi, "Wo": Wo, "Wc": Wc, "Wy": Wy, "bf": bf, "bi": bi, "bo": bo, "bc": bc, "by": by}
a_next, c_next, yt, cache = lstm_cell_forward(xt, a_prev, c_prev, parameters)
da_next = np.random.randn(5,10)
dc_next = np.random.randn(5,10)
gradients = lstm_cell_backward(da_next, dc_next, cache)
print("gradients[\"dxt\"][1][2] =", gradients["dxt"][1][2])
print("gradients[\"dxt\"].shape =", gradients["dxt"].shape)
print("gradients[\"da_prev\"][2][3] =", gradients["da_prev"][2][3])
print("gradients[\"da_prev\"].shape =", gradients["da_prev"].shape)
print("gradients[\"dc_prev\"][2][3] =", gradients["dc_prev"][2][3])
print("gradients[\"dc_prev\"].shape =", gradients["dc_prev"].shape)
print("gradients[\"dWf\"][3][1] =", gradients["dWf"][3][1])
print("gradients[\"dWf\"].shape =", gradients["dWf"].shape)
print("gradients[\"dWi\"][1][2] =", gradients["dWi"][1][2])
print("gradients[\"dWi\"].shape =", gradients["dWi"].shape)
print("gradients[\"dWc\"][3][1] =", gradients["dWc"][3][1])
print("gradients[\"dWc\"].shape =", gradients["dWc"].shape)
print("gradients[\"dWo\"][1][2] =", gradients["dWo"][1][2])
print("gradients[\"dWo\"].shape =", gradients["dWo"].shape)
print("gradients[\"dbf\"][4] =", gradients["dbf"][4])
print("gradients[\"dbf\"].shape =", gradients["dbf"].shape)
print("gradients[\"dbi\"][4] =", gradients["dbi"][4])
print("gradients[\"dbi\"].shape =", gradients["dbi"].shape)
print("gradients[\"dbc\"][4] =", gradients["dbc"][4])
print("gradients[\"dbc\"].shape =", gradients["dbc"].shape)
print("gradients[\"dbo\"][4] =", gradients["dbo"][4])
print("gradients[\"dbo\"].shape =", gradients["dbo"].shape)
###Output
_____no_output_____
###Markdown
**Expected Output**: **gradients["dxt"][1][2]** = 3.23055911511 **gradients["dxt"].shape** = (3, 10) **gradients["da_prev"][2][3]** = -0.0639621419711 **gradients["da_prev"].shape** = (5, 10) **gradients["dc_prev"][2][3]** = 0.797522038797 **gradients["dc_prev"].shape** = (5, 10) **gradients["dWf"][3][1]** = -0.147954838164 **gradients["dWf"].shape** = (5, 8) **gradients["dWi"][1][2]** = 1.05749805523 **gradients["dWi"].shape** = (5, 8) **gradients["dWc"][3][1]** = 2.30456216369 **gradients["dWc"].shape** = (5, 8) **gradients["dWo"][1][2]** = 0.331311595289 **gradients["dWo"].shape** = (5, 8) **gradients["dbf"][4]** = [ 0.18864637] **gradients["dbf"].shape** = (5, 1) **gradients["dbi"][4]** = [-0.40142491] **gradients["dbi"].shape** = (5, 1) **gradients["dbc"][4]** = [ 0.25587763] **gradients["dbc"].shape** = (5, 1) **gradients["dbo"][4]** = [ 0.13893342] **gradients["dbo"].shape** = (5, 1) 3.3 Backward pass through the LSTM RNNThis part is very similar to the `rnn_backward` function you implemented above. You will first create variables of the same dimension as your return variables. You will then iterate over all the time steps starting from the end and call the one step function you implemented for LSTM at each iteration. You will then update the parameters by summing them individually. Finally, return a dictionary with the new gradients. **Instructions**: Implement the `lstm_backward` function. Create a for loop starting from $T_x$ and going backward. For each step call `lstm_cell_backward` and update your old gradients by adding the new gradients to them. Note that `dxt` is not updated but is stored.
###Code
def lstm_backward(da, caches):
"""
Implement the backward pass for the RNN with LSTM-cell (over a whole sequence).
Arguments:
da -- Gradients w.r.t the hidden states, numpy-array of shape (n_a, m, T_x)
caches -- cache storing information from the forward pass (lstm_forward)
Returns:
gradients -- python dictionary containing:
dx -- Gradient of inputs, of shape (n_x, m, T_x)
da0 -- Gradient w.r.t. the previous hidden state, numpy array of shape (n_a, m)
dWf -- Gradient w.r.t. the weight matrix of the forget gate, numpy array of shape (n_a, n_a + n_x)
dWi -- Gradient w.r.t. the weight matrix of the update gate, numpy array of shape (n_a, n_a + n_x)
dWc -- Gradient w.r.t. the weight matrix of the memory gate, numpy array of shape (n_a, n_a + n_x)
dWo -- Gradient w.r.t. the weight matrix of the output gate, numpy array of shape (n_a, n_a + n_x)
dbf -- Gradient w.r.t. biases of the forget gate, of shape (n_a, 1)
dbi -- Gradient w.r.t. biases of the update gate, of shape (n_a, 1)
dbc -- Gradient w.r.t. biases of the memory gate, of shape (n_a, 1)
dbo -- Gradient w.r.t. biases of the output gate, of shape (n_a, 1)
"""
# Retrieve values from the first cache (t=1) of caches.
(caches, x) = caches
(a1, c1, a0, c0, f1, i1, cc1, o1, x1, parameters) = caches[0]
### START CODE HERE ###
# Retrieve dimensions from da's and x1's shapes (≈2 lines)
n_a, m, T_x = da.shape
n_x, m = x1.shape
# initialize the gradients with the right sizes (≈12 lines)
dx = np.zeros((n_x, m, T_x))
da0 = np.zeros((n_a, m))
da_prevt = np.zeros((n_a, m)) #da[:,:,-1]
dc_prevt = np.zeros((n_a, m))
dWf = np.zeros((n_a, n_a + n_x))
dWi = np.zeros((n_a, n_a + n_x))
dWc = np.zeros((n_a, n_a + n_x))
dWo = np.zeros((n_a, n_a + n_x))
dbf = np.zeros((n_a, 1))
dbi = np.zeros((n_a, 1))
dbc = np.zeros((n_a, 1))
dbo = np.zeros((n_a, 1))
# loop back over the whole sequence
for t in reversed(range(T_x)):
print(t)
# Compute all gradients using lstm_cell_backward
gradients = lstm_cell_backward(da[:,:,t] + da_prevt, dc_prevt, caches[t])
# Store or add the gradient to the parameters' previous step's gradient
da_prevt, dc_prevt = gradients['da_prev'], gradients['dc_prev']
dx[:,:,t] = gradients['dxt']
dWf += gradients['dWf']
dWi += gradients['dWi']
dWc += gradients['dWc']
dWo += gradients['dWo']
dbf += gradients['dbf']
dbi += gradients['dbi']
dbc += gradients['dbc']
dbo += gradients['dbo']
# Set the first activation's gradient to the backpropagated gradient da_prev.
da0 = da_prevt
### END CODE HERE ###
# Store the gradients in a python dictionary
gradients = {"dx": dx, "da0": da0, "dWf": dWf,"dbf": dbf, "dWi": dWi,"dbi": dbi,
"dWc": dWc,"dbc": dbc, "dWo": dWo,"dbo": dbo}
return gradients
np.random.seed(1)
x = np.random.randn(3,10,7)
a0 = np.random.randn(5,10)
Wf = np.random.randn(5, 5+3)
bf = np.random.randn(5,1)
Wi = np.random.randn(5, 5+3)
bi = np.random.randn(5,1)
Wo = np.random.randn(5, 5+3)
bo = np.random.randn(5,1)
Wc = np.random.randn(5, 5+3)
bc = np.random.randn(5,1)
parameters = {"Wf": Wf, "Wi": Wi, "Wo": Wo, "Wc": Wc, "Wy": Wy, "bf": bf, "bi": bi, "bo": bo, "bc": bc, "by": by}
a, y, c, caches = lstm_forward(x, a0, parameters)
da = np.random.randn(5, 10, 4)
gradients = lstm_backward(da, caches)
print("gradients[\"dx\"][1][2] =", gradients["dx"][1][2])
print("gradients[\"dx\"].shape =", gradients["dx"].shape)
print("gradients[\"da0\"][2][3] =", gradients["da0"][2][3])
print("gradients[\"da0\"].shape =", gradients["da0"].shape)
print("gradients[\"dWf\"][3][1] =", gradients["dWf"][3][1])
print("gradients[\"dWf\"].shape =", gradients["dWf"].shape)
print("gradients[\"dWi\"][1][2] =", gradients["dWi"][1][2])
print("gradients[\"dWi\"].shape =", gradients["dWi"].shape)
print("gradients[\"dWc\"][3][1] =", gradients["dWc"][3][1])
print("gradients[\"dWc\"].shape =", gradients["dWc"].shape)
print("gradients[\"dWo\"][1][2] =", gradients["dWo"][1][2])
print("gradients[\"dWo\"].shape =", gradients["dWo"].shape)
print("gradients[\"dbf\"][4] =", gradients["dbf"][4])
print("gradients[\"dbf\"].shape =", gradients["dbf"].shape)
print("gradients[\"dbi\"][4] =", gradients["dbi"][4])
print("gradients[\"dbi\"].shape =", gradients["dbi"].shape)
print("gradients[\"dbc\"][4] =", gradients["dbc"][4])
print("gradients[\"dbc\"].shape =", gradients["dbc"].shape)
print("gradients[\"dbo\"][4] =", gradients["dbo"][4])
print("gradients[\"dbo\"].shape =", gradients["dbo"].shape)
###Output
_____no_output_____ |
notebooks/language_model_LSTM.ipynb | ###Markdown
LSTM Language ModelIn this notebook, we are going to make a Language Model using LSTMs. This is the "old-school" way to make language models. Recently, with the introduction of the Transformer architecture, one can build a Language Model with better overall quality than by using an LSTM.
###Code
%load_ext autoreload
%autoreload 2
from practicalnlp import settings
from practicalnlp.models import *
from practicalnlp.training import *
from practicalnlp.data import *
import torch
###Output
_____no_output_____
###Markdown
Loading dataHere we load all data with `batch_size = 20`. It's important to note that we subdivide the data with two parameters: `nctx` and `batch_size`. `nctx` is the number of words we use in a single pass of the training phase. For example, the figure below illustrates each *step* in the training phase for `nctx = 3` over a single `batch_size` of the entire sentence below. Arrows indicate that the origin word is trying to predict the next word in the `nctx` window. When the last word of the `nctx` window is processed, the window is translated by `nctx` words and the process repeats until it reads the entire batch. The `nctx` param is also known as `bptt` (*backpropagation through time*), and is the name used in the official PyTorch tutorial for Language Modeling.Although this example shows the execution for only a single batch, in practice we do it for all batches at the same time. It is easy to see how this can be done with a 2-dimensional tensor (one dimension for the batch size, the other for the sequence length). In the code below, we do it using PyTorch.
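Before the actual PyTorch code, here is a rough numpy sketch of this batching scheme (the real implementation lives in `WordDatasetReader`, so the variable names and details below are only illustrative assumptions): a flat stream of token ids is reshaped into `batch_size` parallel rows and then read in windows of `nctx` tokens, with the targets being the same window shifted by one position.
```python
import numpy as np

batch_size, nctx = 4, 3
tokens = np.arange(26)                      # stand-in for a tokenized corpus

n_per_row = len(tokens) // batch_size       # trim so the stream divides evenly
data = tokens[:n_per_row * batch_size].reshape(batch_size, n_per_row)

for start in range(0, n_per_row - 1, nctx):
    end = min(start + nctx, n_per_row - 1)
    x_batch = data[:, start:end]            # inputs, shape (batch_size, <= nctx)
    y_batch = data[:, start + 1:end + 1]    # targets: same window shifted by one
    # each (x_batch, y_batch) pair is one training step
```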
###Code
batch_size = 20
nctx = 35
TRAIN = settings.WIKI_TRAIN_DATA
VALID = settings.WIKI_VALID_DATA
reader = WordDatasetReader(nctx)
reader.build_vocab((TRAIN,))
train_set = reader.load(TRAIN, batch_size)
valid_set = reader.load(VALID, batch_size)
train_set.shape
model = LSTMLanguageModel(len(reader.vocab), 512, 512)
model.to('cuda:0')
num_params = sum(p.numel() for p in model.parameters() if p.requires_grad)
print(f"Model has {num_params} parameters")
learnable_params = [p for p in model.parameters() if p.requires_grad]
optimizer = torch.optim.Adam(learnable_params, lr=0.001)
fit_lm(model, optimizer, 1, batch_size, nctx, train_set, valid_set)
def sample(model, index2word, start_word='the', maxlen=20):
model.eval()
words = [start_word]
x = torch.tensor(reader.vocab.get(start_word)).long().reshape(1, 1).to('cuda:0')
hidden = model.init_hidden(1)
with torch.no_grad():
for i in range(20):
output, hidden = model(x, hidden)
word_softmax = output.squeeze().exp().cpu()
selected = torch.multinomial(word_softmax, 1)[0]
x.fill_(selected)
word = index2word[selected.item()]
words.append(word)
words.append('...')
return words
index2word = {i: w for w, i in reader.vocab.items()}
words = sample(model, index2word)
print(' '.join(words))
###Output
the American album were 28 – 3 . During 1907 , suspicions remained the first time since 1972 with him just ...
|
tutorials/notebook/programming_guide/mindspore_parameter.ipynb | ###Markdown
Parameter Overview: A `Parameter` is a variable tensor, representing a parameter that needs to be updated while training a network. This chapter introduces the initialization of `Parameter`, the use of its attributes and methods, and also introduces `ParameterTuple`.> This document applies to the CPU, GPU and Ascend environments. Initialization
```python
mindspore.Parameter(default_input, name, requires_grad=True, layerwise_parallel=False)
```
Initializes a `Parameter` object. The data passed in supports four types: `Tensor`, `Initializer`, `int` and `float`. An `Initializer` is an initializer; the `initializer` interface can be called to create an `Initializer` object. When the network uses a semi-automatic or fully automatic parallel strategy and an `Initializer` is used to initialize the `Parameter`, what the `Parameter` stores is not a `Tensor` but a `MetaTensor`. Unlike a `Tensor`, a `MetaTensor` only stores the shape and type of the tensor, not the actual data, so it does not occupy any memory; the `init_data` interface can be called to convert the `MetaTensor` stored in the `Parameter` into a `Tensor`. A name can be specified for each `Parameter`, which makes later operations and updates easier. When a parameter needs to be updated, `requires_grad` must be set to `True`. When `layerwise_parallel` (hybrid parallelism) is set to `True`, the parameter is filtered out during parameter broadcasting and parameter gradient aggregation. For the configuration of distributed parallelism, see: https://www.mindspore.cn/doc/programming_guide/zh-CN/master/auto_parallel.html . The following example constructs `Parameter` objects from three different data types; all three need to be updated and none of them uses layerwise parallelism:
###Code
import numpy as np
from mindspore import Tensor, Parameter
from mindspore import dtype as mstype
from mindspore.common.initializer import initializer
x = Parameter(default_input=Tensor(np.arange(2*3).reshape((2, 3))), name="x")
y = Parameter(default_input=initializer('ones', [1, 2, 3], mstype.float32), name='y')
z = Parameter(default_input=2.0, name='z')
print(x, "\n\n", y, "\n\n", z)
###Output
Parameter (name=x)
Parameter (name=y)
Parameter (name=z)
###Markdown
Attributes
* `inited_param`: returns the `Parameter` holding the actual data; if the `Parameter` originally stored a `MetaTensor`, it is converted into a `Tensor`.
* `name`: the name specified for the `Parameter` when it was instantiated.
* `sliced`: used in automatic parallel scenarios; indicates whether the data stored in the `Parameter` is already sliced. If it is, it will not be sliced again; if not, whether to slice it is decided according to the network's parallel strategy.
* `is_init`: the initialization status of the `Parameter`. On the GE backend, a `Parameter` needs an `init graph` to synchronize data from the host to the device side, and this flag indicates whether the data has been synchronized to the device. The flag only takes effect on the GE backend; on other backends it is set to False.
* `layerwise_parallel`: whether the `Parameter` supports layerwise parallelism. If it does, the parameter is excluded from parameter broadcasting and gradient aggregation; otherwise it takes part in both.
* `requires_grad`: whether the parameter gradient needs to be computed. If the parameter is to be trained, its gradient needs to be computed; otherwise it does not.
* `data`: the `Parameter` itself.
The following example initializes a `Parameter` from a `Tensor` and retrieves its attributes:
###Code
import numpy as np
from mindspore import Tensor, Parameter
x = Parameter(default_input=Tensor(np.arange(2*3).reshape((2, 3))), name="x")
print("name: ", x.name, "\n",
"sliced: ", x.sliced, "\n",
"is_init: ", x.is_init, "\n",
"inited_param: ", x.inited_param, "\n",
"requires_grad: ", x.requires_grad, "\n",
"layerwise_parallel: ", x.layerwise_parallel, "\n",
"data: ", x.data)
###Output
name: x
sliced: False
is_init: False
inited_param: None
requires_grad: True
layerwise_parallel: False
data: Parameter (name=x)
###Markdown
Methods
* `init_data`: in scenarios where the network uses a semi-automatic or fully automatic parallel strategy, when the data passed in to initialize the `Parameter` is an `Initializer`, this interface can be called to convert the data stored in the `Parameter` into a `Tensor`.
* `set_data`: sets the data stored in the `Parameter`; `Tensor`, `Initializer`, `int` and `float` can be passed in. When the method's argument `slice_shape` is set to True, the shape of the `Parameter` can be changed; otherwise, the shape of the data being set must match the original shape of the `Parameter`.
* `set_param_ps`: controls whether the training parameter is trained through a [Parameter Server](https://www.mindspore.cn/tutorial/training/zh-CN/master/advanced_use/apply_parameter_server_training.html).
* `clone`: clones the `Parameter`; the name of the parameter after cloning needs to be specified.
The following example initializes a `Tensor` with an `Initializer` and calls the related methods of `Parameter`:
###Code
import numpy as np
from mindspore import Tensor, Parameter
from mindspore import dtype as mstype
from mindspore.common.initializer import initializer
x = Parameter(default_input=initializer('ones', [1, 2, 3], mstype.float32))
print(x)
x_clone = x.clone()
x_clone.name = "x_clone"
print(x_clone)
print(x.init_data())
print(x.set_data(data=Tensor(np.arange(2*3).reshape((1, 2, 3)))))
###Output
Parameter (name=Parameter)
Parameter (name=x_clone)
Parameter (name=Parameter)
Parameter (name=Parameter)
###Markdown
ParameterTuple: inherits from `tuple` and is used to store multiple `Parameter` objects. It is constructed via `__new__(cls, iterable)` by passing in an iterable containing `Parameter` objects, and it provides a `clone` interface for cloning. The following example constructs a `ParameterTuple` object and clones it:
###Code
import numpy as np
from mindspore import Tensor, Parameter, ParameterTuple
from mindspore import dtype as mstype
from mindspore.common.initializer import initializer
x = Parameter(default_input=Tensor(np.arange(2*3).reshape((2, 3))), name="x")
y = Parameter(default_input=initializer('ones', [1, 2, 3], mstype.float32), name='y')
z = Parameter(default_input=2.0, name='z')
params = ParameterTuple((x, y, z))
params_copy = params.clone("params_copy")
print(params, "\n")
print(params_copy)
###Output
(Parameter (name=x), Parameter (name=y), Parameter (name=z))
(Parameter (name=params_copy.x), Parameter (name=params_copy.y), Parameter (name=params_copy.z))
###Markdown
Parameter Overview: A `Parameter` is a variable tensor, representing a parameter that needs to be updated while training a network. This chapter introduces the initialization of `Parameter`, the use of its attributes and methods, and also introduces `ParameterTuple`.> This document applies to the CPU, GPU and Ascend environments. Initialization
```python
mindspore.Parameter(default_input, name, requires_grad=True, layerwise_parallel=False)
```
Initializes a `Parameter` object. The data passed in supports four types: `Tensor`, `Initializer`, `int` and `float`. An `Initializer` is an initializer; the `initializer` interface can be called to create an `Initializer` object. When `init` is used to initialize a `Tensor`, the `Tensor` only stores the shape and type of the tensor, not the actual data, so it does not occupy any memory; the `init_data` interface can be called to convert the `Tensor` stored in the `Parameter` into data. A name can be specified for each `Parameter`, which makes later operations and updates easier. When a parameter needs to be updated, `requires_grad` must be set to `True`. When `layerwise_parallel` (hybrid parallelism) is set to `True`, the parameter is filtered out during parameter broadcasting and parameter gradient aggregation. For the configuration of distributed parallelism, see: https://www.mindspore.cn/doc/programming_guide/zh-CN/master/auto_parallel.html . The following example constructs `Parameter` objects from three different data types; all three need to be updated and none of them uses layerwise parallelism:
###Code
import numpy as np
from mindspore import Tensor, Parameter
from mindspore import dtype as mstype
from mindspore.common.initializer import initializer
x = Parameter(default_input=Tensor(np.arange(2*3).reshape((2, 3))), name="x")
y = Parameter(default_input=initializer('ones', [1, 2, 3], mstype.float32), name='y')
z = Parameter(default_input=2.0, name='z')
print(x, "\n\n", y, "\n\n", z)
###Output
Parameter (name=x)
Parameter (name=y)
Parameter (name=z)
###Markdown
Attributes
* `inited_param`: returns the `Parameter` holding the actual data.
* `name`: the name specified for the `Parameter` when it was instantiated.
* `sliced`: used in automatic parallel scenarios; indicates whether the data stored in the `Parameter` is already sliced. If it is, it will not be sliced again; if not, whether to slice it is decided according to the network's parallel strategy.
* `is_init`: the initialization status of the `Parameter`. On the GE backend, a `Parameter` needs an `init graph` to synchronize data from the host to the device side, and this flag indicates whether the data has been synchronized to the device. The flag only takes effect on the GE backend; on other backends it is set to False.
* `layerwise_parallel`: whether the `Parameter` supports layerwise parallelism. If it does, the parameter is excluded from parameter broadcasting and gradient aggregation; otherwise it takes part in both.
* `requires_grad`: whether the parameter gradient needs to be computed. If the parameter is to be trained, its gradient needs to be computed; otherwise it does not.
* `data`: the `Parameter` itself.
The following example initializes a `Parameter` from a `Tensor` and retrieves its attributes:
###Code
import numpy as np
from mindspore import Tensor, Parameter
x = Parameter(default_input=Tensor(np.arange(2*3).reshape((2, 3))), name="x")
print("name: ", x.name, "\n",
"sliced: ", x.sliced, "\n",
"is_init: ", x.is_init, "\n",
"inited_param: ", x.inited_param, "\n",
"requires_grad: ", x.requires_grad, "\n",
"layerwise_parallel: ", x.layerwise_parallel, "\n",
"data: ", x.data)
###Output
name: x
sliced: False
is_init: False
inited_param: None
requires_grad: True
layerwise_parallel: False
data: Parameter (name=x)
###Markdown
Methods
* `init_data`: in scenarios where the network uses a semi-automatic or fully automatic parallel strategy, when the data passed in to initialize the `Parameter` is an `Initializer`, this interface can be called to convert the data stored in the `Parameter` into a `Tensor`.
* `set_data`: sets the data stored in the `Parameter`; `Tensor`, `Initializer`, `int` and `float` can be passed in. When the method's argument `slice_shape` is set to True, the shape of the `Parameter` can be changed; otherwise, the shape of the data being set must match the original shape of the `Parameter`.
* `set_param_ps`: controls whether the training parameter is trained through a [Parameter Server](https://www.mindspore.cn/tutorial/training/zh-CN/master/advanced_use/apply_parameter_server_training.html).
* `clone`: clones the `Parameter`; the name of the parameter after cloning needs to be specified.
The following example initializes a `Tensor` with an `Initializer` and calls the related methods of `Parameter`:
###Code
import numpy as np
from mindspore import Tensor, Parameter
from mindspore import dtype as mstype
from mindspore.common.initializer import initializer
x = Parameter(default_input=initializer('ones', [1, 2, 3], mstype.float32))
print(x)
x_clone = x.clone()
x_clone.name = "x_clone"
print(x_clone)
print(x.init_data())
print(x.set_data(data=Tensor(np.arange(2*3).reshape((1, 2, 3)))))
###Output
Parameter (name=Parameter)
Parameter (name=x_clone)
Parameter (name=Parameter)
Parameter (name=Parameter)
###Markdown
ParameterTuple: inherits from `tuple` and is used to store multiple `Parameter` objects. It is constructed via `__new__(cls, iterable)` by passing in an iterable containing `Parameter` objects, and it provides a `clone` interface for cloning. The following example constructs a `ParameterTuple` object and clones it:
###Code
import numpy as np
from mindspore import Tensor, Parameter, ParameterTuple
from mindspore import dtype as mstype
from mindspore.common.initializer import initializer
x = Parameter(default_input=Tensor(np.arange(2*3).reshape((2, 3))), name="x")
y = Parameter(default_input=initializer('ones', [1, 2, 3], mstype.float32), name='y')
z = Parameter(default_input=2.0, name='z')
params = ParameterTuple((x, y, z))
params_copy = params.clone("params_copy")
print(params, "\n")
print(params_copy)
###Output
(Parameter (name=x), Parameter (name=y), Parameter (name=z))
(Parameter (name=params_copy.x), Parameter (name=params_copy.y), Parameter (name=params_copy.z))
|
notebooks/7-final-demo.ipynb | ###Markdown
*Creating a Jupyter Book with The Turing Way* Module 7: Demo of Jupyter Book features in _The Turing Way_**Learning Objective:** - Explain Sphinx features in Jupyter Book for referencing resources and create a bibliography for the book - Show how one can cross reference chapters and sections within the book - Show how to use figures in the chapters that can be cross references - Demo the collaborative workflow to read, edit and export files directly from the chapters 📹 [VIDEO](https://www.youtube.com/watch?v=WshFJWtplsk&list=PLBxcQEfGu3Dmdo6oKg6o9V7Q_e7WSX-vu&index=8)--- Citing and ReferencingJupyter Book uses a centralised bibtex file containing all references.In this tutorial, we have provided you with a `references.bib`, which contains all the citations used in the `overview` chapter. you can open this file in Jupyter Notebook to inspect the different entries.It's crucial that this bibtex file is located within the folder where you store all the components for you Jupyter Book, which in this tutorial is `book`.A complete version of this file for _The Turing Way_ can be found [here](https://github.com/alan-turing-institute/the-turing-way/blob/master/book/website/_bibliography/references.bib). Adding a new reference in `references.bib`You can edit references locally by editing `references.bib` directly using a text editor (or in Jupyter Notebook for this tutorial).There are many online tools and browser plugins that allow exporting references in bibtex format. For example, see [ZoteroBib](https://zbib.org/) that takes a URL or DOI as input and allows exporting the reference in bibtex format (including many others).The browser extensions like [BibItNow!](https://addons.mozilla.org/en-GB/firefox/addon/bibitnow/) in Firefox and [BibTeX entry from URL](https://chrome.google.com/webstore/detail/bibtex-entry-from-ur) in chrome are also useful for copying the reference for online materials.Bibligraphy managing program such as JabRef (linux, windows, macOS) or BibDesk (macOS) can also be used for editing references.bib directly.For example, in the `Overview` chapter in *The Turing Way*, and in your example book, we cite [Reproducibility crisis?, Baker, M., Nature, 533(26):353–66, 2016](https://www.icts.uci.edu/education/ffast1/nature.pdf).Entry for this citation in bibtex is follwing (look up the first entry in the `references.bib` file):```@article{baker2016reproducibility, author={Baker, Monya}, title={Reproducibility crisis?}, journal={Nature}, volume={533}, number={26}, pages={353--66}, year={2016}}``` Add a citation in a Jupyter Book chapterTo include a citation in your content, we follow the recommendation by JupyterBook that is built on top of [Sphinx](https://jupyterbook.org/reference/glossary.htmlterm-Sphinx).To include a reference using "{cite}`CITEKEY`", where CITEKEY is the corresponding citation key in `references.bib`.We will cite the article that we used in the previous section as an example, CITEKEY for which is `baker2016reproducibility`.We can cite this by adding the following syntax in the content to link citation:```{cite}`baker2016reproducibility````This will appear in your chapter as `[bak16]`.You can also give an alternative title like so:```{cite}`M. Baker (2016)````This will appear in your chapter as `M. 
Baker (2016)` Creating a bibliography in the Jupyter BookLet's add the entire bibliography at the end of this book.When we cite a resource, a link is created with the citation, which can be clicked to see the details.In *The Turing Way*, those links will direct our readers to the [bibliography](https://the-turing-way.netlify.app/afterword/bibliography.html) page, which we have not included in this tutorial. Exercise: Let's create a bibliography page for our Jupyter BookTo create a bibliography page, we have provided a `bibliography.md` file in the `content` folder.To include it in your Jupyter Book, first copy the file over to the `book` folder, and inspect its content.
###Code
!cp content/bibliography.md book/bibliography.md
###Output
_____no_output_____
###Markdown
The following syntax in the file generates a bibliography for your entire bibtex file.**Note**: Please see this block in edit mode as it includes three back ticks "```" in the beginning and end of the syntax.``````{bibliography} path/to/references.bib`````` Edit your `_toc.yml` file manually by appending the file entry:```file: bibliography.mdtitle: Bibliography```And, rebuild your book:
###Code
!jupyter-book build ../book/
###Output
_____no_output_____
###Markdown
*Creating a Jupyter Book with The Turing Way* Module 7: Demo of Jupyter Book features in _The Turing Way_**Learning Objective:** - Explain Sphinx features in Jupyter Book for referencing resources and create a bibliography for the book - Show how one can cross reference chapters and sections within the book - Show how to use figures in the chapters that can be cross references - Demo the collaborative workflow to read, edit and export files directly from the chapters Citing and ReferencingJupyter Book uses a centralised bibtex file containing all references.In this tutorial, we have provided you with a `references.bib`, which contains all the citations used in the `Overview` chapter. You can open this file in Jupyter Notebook to inspect the different entries.It's crucial that this bibtex file is located within the folder where you store all the components for you Jupyter Book.A complete version of this file for _The Turing Way_ can be found [here](https://github.com/alan-turing-institute/the-turing-way/blob/master/book/website/_bibliography/references.bib). Adding a new reference in `references.bib`You can edit references locally by editing `references.bib` directly using a text editor (or in Jupyter Notebook for this tutorial).There are many online tools and browser plugins that allow exporting references in bibtex format. For example, see [ZoteroBib](https://zbib.org/) that takes a URL or DOI as input and allows exporting the reference in bibtex format (including many others).The browser extensions like [BibItNow!](https://addons.mozilla.org/en-GB/firefox/addon/bibitnow/) in Firefox and [BibTeX entry from URL](https://chrome.google.com/webstore/detail/bibtex-entry-from-ur) in chrome are also useful for copying the reference for online materials.Bibligraphy managing program such as JabRef (linux, windows, macOS) or BibDesk (macOS) can also be used for editing `references.bib` directly.For example, in the `Overview` chapter in *The Turing Way*, and in your example book, we cite [Reproducibility crisis?, Baker, M., Nature, 533(26):353–66, 2016](https://www.icts.uci.edu/education/ffast1/nature.pdf).Entry for this citation in bibtex is the follwing (look up the first entry in the `references.bib` file):```@article{baker2016reproducibility, author={Baker, Monya}, title={Reproducibility crisis?}, journal={Nature}, volume={533}, number={26}, pages={353--66}, year={2016}}``` Adding a citationTo include a citation in your content, we follow the recommendation by Jupyter Book that is built on top of [Sphinx](https://jupyterbook.org/reference/glossary.htmlterm-Sphinx).To include a reference using ``{cite}`CITEKEY` ``, where `CITEKEY` is the corresponding citation key in `references.bib`.We will cite the article that we used in the previous section as an example, so CITEKEY will correspond to `baker2016reproducibility`.We can link this citation by adding the following syntax to the content:```{cite}`baker2016reproducibility````This citation will appear in your chapter as `[bak16]`.You can also give an alternative title like so:```{cite}`M. Baker (2016)````The citation will now appear in your chapter as `M. Baker (2016)`. 
Creating a bibliography chapter in Jupyter BookLet's add the entire bibliography of our book as a separate chapter of Jupyter Book.When we cite a resource, a link is created with the citation, which can be clicked to see the details.In *The Turing Way*, those links will direct our readers to the [bibliography](https://the-turing-way.netlify.app/afterword/bibliography.html) page, which we have not included in this tutorial.___EXERCISE:___For this module we will be using the contents of the Jupyter Book that were used for module 6, that is, we will be building our book based on the folder named `book_module6`. To create a bibliography page, we have provided `bibliography.md` file in the `content` folder. To include it in your Jupyter Book, first copy the file over to the `book_module6` folder, and inspect its content.
###Code
!cp ../content/bibliography.md ../book_module6/bibliography.md
###Output
_____no_output_____
###Markdown
Let's print the content of `bibliography.md`:
###Code
cat ../book_module6/bibliography.md
###Output
_____no_output_____
###Markdown
The above syntax will generate a bibliography with all the references in your bibtex file. Edit your `_toc.yml` file manually by appending `bibliography.md` as a separate chapter:```- file: bibliography title: Bibliography```And, rebuild your book:
###Code
!jupyter-book build ../book_module6/
###Output
_____no_output_____
###Markdown
*Creating a Jupyter Book with The Turing Way* Module 7: Demo of Jupyter Book features in _The Turing Way_**Learning Objective:** - Explain Sphinx features in Jupyter Book for referencing resources and create a bibliography for the book - Show how one can cross reference chapters and sections within the book - Show how to use figures in the chapters that can be cross references - Demo the collaborative workflow to read, edit and export files directly from the chapters 📹 [VIDEO](TBA)--- Citing and ReferencingJupyter Book uses a centralised bibtex file containing all references.In this tutorial, we have provided you with a `references.bib`, which contains all the citations used in the `overview` chapter. you can open this file in Jupyter Notebook to inspect the different entries.It's crucial that this bibtex file is located within the folder where you store all the components for you Jupyter Book, which in this tutorial is `book`.A complete version of this file for _The Turing Way_ can be found [here](https://github.com/alan-turing-institute/the-turing-way/blob/master/book/website/_bibliography/references.bib). Adding a new reference in `references.bib`You can edit references locally by editing `references.bib` directly using a text editor (or in Jupyter Notebook for this tutorial).There are many online tools and browser plugins that allow exporting references in bibtex format. For example, see [ZoteroBib](https://zbib.org/) that takes a URL or DOI as input and allows exporting the reference in bibtex format (including many others).The browser extensions like [BibItNow!](https://addons.mozilla.org/en-GB/firefox/addon/bibitnow/) in Firefox and [BibTeX entry from URL](https://chrome.google.com/webstore/detail/bibtex-entry-from-ur) in chrome are also useful for copying the reference for online materials.Bibligraphy managing program such as JabRef (linux, windows, macOS) or BibDesk (macOS) can also be used for editing references.bib directly.For example, in the `Overview` chapter in *The Turing Way*, and in your example book, we cite [Reproducibility crisis?, Baker, M., Nature, 533(26):353–66, 2016](https://www.icts.uci.edu/education/ffast1/nature.pdf).Entry for this citation in bibtex is follwing (look up the first entry in the `references.bib` file):```@article{baker2016reproducibility, author={Baker, Monya}, title={Reproducibility crisis?}, journal={Nature}, volume={533}, number={26}, pages={353--66}, year={2016}}``` Add a citation in a Jupyter Book chapterTo include a citation in your content, we follow the recommendation by JupyterBook that is built on top of [Sphinx](https://jupyterbook.org/reference/glossary.htmlterm-Sphinx).To include a reference using "{cite}`CITEKEY`", where CITEKEY is the corresponding citation key in `references.bib`.We will cite the article that we used in the previous section as an example, CITEKEY for which is `baker2016reproducibility`.We can cite this by adding the following syntax in the content to link citation:```{cite}`baker2016reproducibility````This will appear in your chapter as `[bak16]`.You can also give an alternative title like so:```{cite}`M. Baker (2016)````This will appear in your chapter as `M. 
Baker (2016)` Creating a bibliography in the Jupyter BookLet's add the entire bibliography at the end of this book.When we cite a resource, a link is created with the citation, which can be clicked to see the details.In *The Turing Way*, those links will direct our readers to the [bibliography](https://the-turing-way.netlify.app/afterword/bibliography.html) page, which we have not included in this tutorial. Exercise: Let's create a bibliography page for our Jupyter BookTo create a bibliography page, we have provided a `bibliography.md` file in the `content` folder.To include it in your Jupyter Book, first copy the file over to the `book` folder, and inspect its content.
###Code
!cp content/bibliography.md book/bibliography.md
###Output
_____no_output_____
###Markdown
The following syntax in the file generates a bibliography for your entire bibtex file.**Note**: Please see this block in edit mode as it includes three back ticks "```" in the beginning and end of the syntax.``````{bibliography} path/to/references.bib`````` Edit your `_toc.yml` file manually by appending the file entry:```file: bibliography.mdtitle: Bibliography```And, rebuild your book:
###Code
!jupyter-book build ../book/
###Output
_____no_output_____ |
workbooks/reports/waldo_report_card.ipynb | ###Markdown
WARNING Currently Defunct
###Code
%load_ext autoreload
%autoreload 2
%matplotlib inline
#from __future__ import print_function, absolute_import, unicode_literals, division
#import six
#from six.moves import (zip, filter, map, reduce, input, range)
import sys
sys.path.append('..')
import about
import pathcustomize
#import os
#import pathlib
#import pickle
#import platform
about.about()
import numpy as np  # needed by the plotting helpers below (np.cumsum, np.linspace, np.arange, ...)
import pandas as pd
import math
import random
import matplotlib.pyplot as plt
import matplotlib.gridspec as gridspec
import matplotlib.patches as patches
import matplotlib.path as path
from mpltools import style
from mpltools import layout
from statsmodels.distributions.empirical_distribution import ECDF
from waldo.conf import settings
from waldo.wio.experiment import Experiment
import waldo.metrics.report_card as report_card
import waldo.collider
import waldo.tape as tp
import waldo.metrics.step_simulation as ssim
#DATA_DIR = settings.LOGISTICS['filesystem_data']
style.use('ggplot')
###Output
Python 2.7.9 (default, Apr 14 2015 12:54:25) [GCC 4.4.7 20120313 (Red Hat 4.4.7-1)] on linux2, Host: cody
###Markdown
ALL FUNCTIONS
###Code
def bridge_gaps(experiment, graph, threshold=0.001):
taper = tp.Taper(experiment=experiment, graph=graph)
start, end = taper.find_start_and_end_nodes()
gaps = taper.score_potential_gaps(start, end)
gt = taper.greedy_tape(gaps, threshold=threshold, add_edges=True)
graph = taper._graph
# def step_plot(ax, step_df, true_num=30, only_best=False):
# steps = []
# n_steps = len(step_df)
# xs = list(step_df['t0'])
# widths = list(step_df['lifespan'])
# height = 1
# color_cycle = ax._get_lines.color_cycle
# color1 = color_cycle.next()
# color2 = color_cycle.next()
# for y, (x, width) in enumerate(zip(xs, widths)):
# steps.append(patches.Rectangle((x,y), height=height, width=width,
# fill=True, fc=color1, ec=color1))
# for step in steps:
# ax.add_patch(step)
# xmax = 60
# ideal = patches.Rectangle((0,0), height=true_num, width=xmax,
# fill=True, ec=color2, fc=color2,
# alpha=0.5)
# ax.add_patch(ideal)
# ax.plot([0], color=color1, label='actual')
# ax.plot([0], color=color2, alpha=0.5, label='ideal')
# ax.set_xlim([0, xmax])
# ax.set_ylim([0, n_steps+1])
def plot_steps(ax, df, dividers=[]):
label = 'hi'
step_df = df
n_steps = len(step_df)
xmax = int(max(df['tN']))
ymax = n_steps + 1
steps = []
xs = list(step_df['t0'])
widths = list(step_df['lifespan'])
height = 1
color = ax._get_lines.color_cycle.next()
for y, (x, width) in enumerate(zip(xs, widths)):
steps.append(patches.Rectangle((x,y), height=height, width=width,
fill=True, fc=color, ec=color))
for step in steps:
ax.add_patch(step)
for d in dividers:
ax.plot([0, xmax], [d,d], color='black')
ax.plot([0], color=color, label=label)
ax.set_xlim([0, xmax])
ax.set_ylim([0, ymax])
ax.set_xlabel('t (min)')
def make_dividers(step_dfs):
dividers = [len(i) for i in step_dfs]
dividers = [0] + dividers
return np.cumsum(dividers)
def step_plot(ax, df):
a = df.sort('lifespan', ascending=False)
a['mid'] = (a['tN'] + a['t0'])/2
xmax = int(max(a['tN']))
short_is_less_than = 5
long_is_greater_than = 40
front_back_margin = 10
front_lim = front_back_margin
back_lim = xmax - front_back_margin
#shortest = a[a['lifespan'] <= short_is_less_than]
#a = a[a['lifespan'] > short_is_less_than]
#shortest.sort('tN', inplace=True, ascending=True)
#shortest.sort('t0', inplace=True, ascending=False)
#longest = a[a['lifespan'] >= long_is_greater_than]
#a = a[a['lifespan'] < long_is_greater_than]
#longest.sort('t0', inplace=True)
#longest.sort('tN', inplace=True, ascending=False)
front = a[a['t0'] <= front_lim]
a = a[a['t0'] > front_lim]
front.sort('tN', inplace=True, ascending=False)
back = a[a['tN'] >= back_lim]
a = a[a['tN'] < back_lim]
back.sort('t0', inplace=True, ascending=False)
mid = a
mid.sort('mid', inplace=True, ascending=False)
steps = [front, mid, back]
a = pd.concat(steps)
div = make_dividers(steps)
plot_steps(ax,a, dividers=div)
def calculate_duration_data_from_graph(experiment, graph, node_ids=[]):
if not node_ids:
node_ids = graph.nodes(data=False)
frame_times = experiment.frame_times
step_data, durations = [], []
for node in node_ids:
node_data = graph.node[node]
bf, df = node_data['born_f'], node_data['died_f']
t0 = frame_times[bf - 1]
tN = frame_times[df - 1]
step_data.append({'bid':node, 't0':t0, 'tN':tN, 'lifespan':tN-t0})
steps = pd.DataFrame(step_data)
steps.set_index('bid', inplace=True)
steps = steps / 60.0 # convert to minutes.
steps.sort('t0', inplace=True)
steps = steps[['t0', 'tN', 'lifespan']]
durations = np.array(steps['lifespan'])
return steps, durations
def make_cdf(ax, durations_list, label='', xmin=0, xmax=30):
x = np.linspace(xmin, xmax, 1000)
ecdf = ECDF(np.array(durations_list))
cdf = ecdf(x)
ax.plot(x, cdf, label=label, lw=2)
ax.set_xlabel('minutes')
ax.set_ylabel('CDF')
def find_nodes_containing_bids(graph, bids):
nodes_with_moved_bids = []
for node in graph:
if node in bids:
nodes_with_moved_bids.append(node)
continue
for c in graph.node[node].get('components', []):
if c in bids:
nodes_with_moved_bids.append(node)
break
return nodes_with_moved_bids
def show_reasons(df):
reasons = ['unknown', 'on_edge', 'id_change', 'outside-roi', 'timing']
if 'reason' not in df:
df['reason'] = 'unknown'
for reason in reasons[1:]:
df['reason'][df[reason]] = reason
print reason, len(df['reason'][df[reason]])
counts = {}
for reason in reasons:
counts[reason] = len(df[df['reason'] == reason])
unknown_ids = list(df[df['reason'] == 'unknown'].index)
id_change_ids = list(df[df['reason'] == 'id_change'].index)
return unknown_ids, id_change_ids
def rc_duration_hist_single(rc, step='roi'):
rc = rc.copy()
# grab bins from column names
cols = rc.columns
bins = [(int(c[1:-3]), c) for c in cols if str(c[0]) == '>']
bins.sort(reverse=True)
#print bins
# grab relevant step
step_data = rc.iloc[step]
print step_data[['step', '>10min','>20min','>30min','>40min','>50min']]
bin_data = []
running_total = 0
for bl, rc_label in bins:
a = step_data[rc_label]
n = a - running_total
running_total += n
#print bl, a, n
#print 'total', running_total
bin_data.append((bl, n))
bin_data.sort()
print bin_data
fig, ax = plt.subplots()
x = np.arange(len(bin_data))
labels, y = zip(*bin_data)
labels = [str(l) for l in labels]
ax.bar(x, y, annotate=True, xticklabels=labels, grid='y')
plt.show()
def rc_duration_hist(rc):
rc = rc.copy()
cols = rc.columns
bins = [(int(c[1:-3]), c) for c in cols if str(c[0]) == '>']
# grab bins from column names
bins.sort(reverse=True)
#print bins
def step_data_to_bins(step):
# grab relevant step
step_data = rc.iloc[step]
#print step_data[['step', '>10min','>20min','>30min','>40min','>50min']]
bin_data = []
running_total = 0
for bl, rc_label in bins:
a = step_data[rc_label]
n = a - running_total
running_total += n
#print bl, a, n
#print 'total', running_total
bin_data.append((bl, n))
bin_data.sort()
bd = {} #'step':step_name}
for l, n in bin_data:
bd[l] = n
return bd
bd1 = step_data_to_bins(1)
bd2 = step_data_to_bins(-1)
#bd1['step'] = 'raw'
#bd2['step'] = 'final'
df = pd.DataFrame([bd1, bd2], index=['raw', 'final'])
df.rename(columns={10:'10-19',20:'20-29',
30:'30-39',40:'40-49', 50:'50-60'}, inplace=True)
ax = df.T.plot(kind='bar')
fig = plt.figure(plt.get_fignums()[0])
fig.set_size_inches(10, 5)
ax.set_ylabel('number of tracks')
ax.set_xlabel('track duration (min)')
plt.tight_layout()
plt.savefig('track-len-improvement.png')
#plt.show()
#print df
#rc_duration_hist(rc)
#
###Output
_____no_output_____
###Markdown
Generate Report Card
###Code
# N = 15
ex_id = '20141017_113435'
ex_id = '20141017_113439'
ex_id = '20141017_123722'
ex_id = '20141017_123725'
# N = 25
#ex_id = '20141017_134720'
#ex_id = '20141017_134724'
ex_id = '20141017_150959'
#ex_id = '20141017_151002'
#ex_id = '20130614_120518'
#ex_id = '20130318_131111'
#ex_id = '20130702_135704' # testset
#ex_id = '20130702_135652'
#ex_id = '20130414_140704'
#ex_id = '20130410_165326'
ex_id ='20130324_155127'
ex_id ='20141017_123725'
ex_df = '20130318_131056' # from tests for making behavior movies
experiment = Experiment(experiment_id=ex_id)
original_graph = experiment.graph
graph = original_graph.copy()
experiment.directory
solver = report_card.WaldoSolver(experiment=experiment, graph=graph)
graph2, rc = solver.run()
rc_duration_hist(rc)
def node_summary(experiment, graph, node_ids=[]):
"""
returns a dataframe with lots of data for every node specified in node_ids.
dataframe columns are:
'bl', 'components', 'f0', 'fN', 't0', 'tN',
'x_max', 'x_min', 'y_max', y_min'
params
-----
experiment: (wio.experiment object)
node_ids: (list)
"""
if not node_ids:
node_ids = graph.nodes(data=False)
frame_times = experiment.frame_times
    node_movement = graph.node_movement(experiment)
node_movement.sort(inplace=True)
node_summaries = []
for node in node_ids:
node_data = graph.node[node]
bf, df = node_data['born_f'], node_data['died_f']
t0 = frame_times[bf - 1]
tN = frame_times[df - 1]
comps = list(node_data.get('components', [node]))
comp_string = '-'.join([str(c) for c in comps])
n_summary = {'bid':node, 'f0':bf, 'fN':df, 't0':t0, 'tN':tN, 'components':comp_string}
if node in node_movement.index:
n_move = node_movement.loc[node]
for i, v in zip(n_move.index, n_move):
n_summary[str(i)] = v
else:
for i in ['x_max', 'x_min', 'y_max', 'y_min', 'bl']:
n_summary[i] = 0
node_summaries.append(n_summary)
node_info = pd.DataFrame(node_summaries)
node_info.set_index('bid', inplace=True)
node_info.sort(inplace=True)
return node_info
node_summary(experiment, graph2)
###Output
_____no_output_____
###Markdown
Graph how iterations reduce node count
###Code
l = rc.set_index('step').loc[['iter 0', 'iter 1', 'iter 2', 'iter 3']][['total-nodes', 'moving-nodes']]
l.reset_index(inplace=True)
l.rename(columns={'total-nodes':'total tracks',
'moving-nodes': 'move > 2 BL'})
l.index.name = 'iteration'
l[['total-nodes', 'moving-nodes']].plot(kind='bar')
l
def rc_worm_minutes_hist(rc):
rc = rc.copy()
rc.rename(columns={'wm_0min':'0-9', 'wm_10min':'10-19','wm_20min':'20-29',
'wm_30min':'30-39','wm_40min':'40-49', 'wm_50min':'50-60'}, inplace=True)
rc = rc[['0-9', '10-19', '20-29', '30-39', '40-49', '50-60']]
# for sanity check
#totals = rc['0'] + rc['10'] + rc['20'] + rc['30'] + rc['40'] + rc['50']
#print totals
# grab bins from column names
cols = rc.columns
bins = [(int(c[1:-3]), c) for c in cols if str(c[0]) == '>']
bins.sort(reverse=True)
#print bins
def step_data_to_bins(step):
# grab relevant step
step_data = rc.iloc[step]
#step_name = step_data['step']
return step_data
bd1 = step_data_to_bins(1)
bd2 = step_data_to_bins(-1)
#bd1['step'] = 'raw'
#bd2['step'] = 'final'
df = pd.DataFrame([bd1, bd2], index=['raw', 'final'])
#df.set_index('step', inplace=True)
ax = df.T.plot(kind='bar')
fig = plt.figure(plt.get_fignums()[0])
fig.set_size_inches(10, 5)
ax.set_ylabel('total minutes of track')
ax.set_xlabel('track duration (min)')
    plt.tight_layout()
    plt.savefig('worm-min-improvement.png')
#plt.show()
print df
rc_worm_minutes_hist(rc)
experiment.prepdata.load('end_report')
###Output
_____no_output_____
###Markdown
step graphs for moving nodes check if nodes that moved in raw data are still in the final graph
###Code
min_move = 1
moved_bids = [int(i) for i in graph.compound_bl_filter(experiment, threshold=min_move)]
new_moved_bids = [int(i) for i in graph2.compound_bl_filter(experiment, threshold=min_move)]
print len(moved_bids), 'nodes moved over threshold in original graph'
print len(new_moved_bids), 'nodes moved over threshold in final graph'
###Output
_____no_output_____
###Markdown
Create step plots for before and after we 'fix' the data
###Code
true_num = experiment.true_num
steps0, durations0 = calculate_duration_data_from_graph(experiment, original_graph, original_graph.nodes(data=False))
steps, durations = calculate_duration_data_from_graph(experiment, original_graph, moved_bids)
fig, ax = plt.subplots()
step_plot(ax, steps)
ax.set_title('original blobs moving > {t} BL'.format(t=min_move))
ax.set_xlabel('minutes')
plt.show()
steps2, durations2 = calculate_duration_data_from_graph(experiment, graph2, graph2.nodes(data=False))
fig, ax = plt.subplots()
step_plot(ax, steps2)
ax.set_title('all blobs in final graph')
ax.set_xlabel('minutes')
plt.show()
moved_steps2, moved_durations2 = calculate_duration_data_from_graph(experiment, graph2, new_moved_bids)
fig, ax = plt.subplots()
step_plot(ax, moved_steps2)
ax.set_title('final blobs moving > {t} BL'.format(t=min_move))
ax.set_xlabel('minutes')
plt.show()
df = ssim.run_ideal_step_simulation(experiment, verbose=True)
fig, ax = plt.subplots()
step_plot(ax, df)
def top_step_plot(ax, step_df, min_duration=10, true_num=30):
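    """Plot only the tracks lasting at least `min_duration` minutes and overlay a
    translucent 'ideal' rectangle of `true_num` tracks spanning the full hour for
    comparison."""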
    best = step_df[step_df['lifespan'] >= min_duration]
print best.head()
steps = []
n_steps = len(best)
xs = list(best['t0'])
widths = list(best['lifespan'])
height = 1
color_cycle = ax._get_lines.color_cycle
color1 = color_cycle.next()
color2 = color_cycle.next()
for y, (x, width) in enumerate(zip(xs, widths)):
steps.append(patches.Rectangle((x,y), height=height, width=width,
fill=True, fc=color1, ec=color1))
for step in steps:
ax.add_patch(step)
xmax = 60
ideal = patches.Rectangle((0,0), height=true_num, width=xmax,
fill=True, ec=color2, fc=color2,
alpha=0.5)
ax.add_patch(ideal)
ax.plot([0], color=color1, label='actual')
ax.plot([0], color=color2, alpha=0.5, label='ideal')
ax.set_xlim([0, xmax])
ax.set_ylim([0, n_steps+1])
steps0, durations0 = calculate_duration_data_from_graph(experiment, original_graph, original_graph.nodes(data=False))
fig, ax = plt.subplots()
top_step_plot(ax, steps0)
ax.set_title('blobs > 10 min in raw data')
ax.set_xlabel('minutes')
plt.show()
steps2, durations2 = calculate_duration_data_from_graph(experiment, graph2, graph2.nodes(data=False))
fig, ax = plt.subplots()
top_step_plot(ax, steps2)
ax.set_title('blobs > 10 min in final data')
ax.set_xlabel('minutes')
plt.show()
def compare_step_plots(step_df1, step_df2, titles=['',''], true_num=30, i=0):
fig, (ax1, ax2) = plt.subplots(1,2)
fig.set_size_inches(20, 10)
#steps = []
n_steps = max(len(step_df1), len(step_df2))
xmax = 60
def make_steps(ax, df, i=1):
steps = []
xs = list(df['t0'])
widths = list(df['lifespan'])
height = 1
color_cycle = ax._get_lines.color_cycle
color1 = color_cycle.next()
color2 = color_cycle.next()
for y, (x, width) in enumerate(zip(xs, widths)):
if i == 0:
c = color1
else:
c = color2
steps.append(patches.Rectangle((x,y), height=height, width=width,
fill=True, fc=c, ec=c))
for step in steps:
ax.add_patch(step)
ideal = patches.Rectangle((0,0), height=true_num, width=xmax,
fill=True, ec=color2, fc=color2,
alpha=0.5)
ax.add_patch(ideal)
#if i != 2:
# ax.plot([0], color=color1, label='actual')
#else:
#ax.plot([0], color=color1, label='actual')
#ax.plot([0], color=color2, alpha=0.5, label='ideal')
ax.set_xlim([0, xmax])
ax.set_ylim([0, n_steps+1])
ax.set_xlabel('minutes')
ax.set_ylabel('tracks')
make_steps(ax1, step_df1, i=0)
ax1.set_title(titles[0])
make_steps(ax2, step_df2, i=2)
ax2.set_title(titles[1])
for ax in [ax1, ax2]:
for item in ([ax.title, ax.xaxis.label, ax.yaxis.label] +
ax.get_xticklabels() + ax.get_yticklabels()):
item.set_fontsize(20)
plt.savefig('final-step-plot.png')
titles = ['raw (blobs >10 min)', 'final (blobs >10 min)']
compare_step_plots(steps0[steps0['lifespan'] >= 10], steps2[steps2['lifespan'] >= 10], titles=titles)
def compare_step_plots2(step_df1, step_df2, true_num=30):
fig, ax = plt.subplots()
#fig.set_size_inches(20, 10)
#steps = []
n_steps = max(len(step_df1), len(step_df2))
xmax = 60
def make_steps(ax, df, alpha=1.0):
steps = []
xs = list(df['t0'])
widths = list(df['lifespan'])
height = 1
color_cycle = ax._get_lines.color_cycle
color1 = color_cycle.next()
color2 = color_cycle.next()
for y, (x, width) in enumerate(zip(xs, widths)):
steps.append(patches.Rectangle((x,y), height=height, width=width,
fill=True, fc=color1, ec=color1, alpha=alpha))
for step in steps:
ax.add_patch(step)
#ideal = patches.Rectangle((0,0), height=true_num, width=xmax,
# fill=True, ec=color2, fc=color2,
# alpha=0.5)
#ax.add_patch(ideal)
#ax.plot([0], color=color1, alpha=alpha)
ax.set_xlim([0, xmax])
ax.set_ylim([0, n_steps+1])
ax.set_xlabel('minutes')
ax.set_ylabel('tracks')
make_steps(ax, step_df1)
make_steps(ax, step_df2, alpha=0.5)
titles = ['blobs >10 min from raw data', 'blobs >10 min from final data']
compare_step_plots(steps0[steps0['lifespan'] >= 10], steps2[steps2['lifespan'] >= 10], titles=titles)
#compare_step_plots2(steps0[steps0['lifespan'] >= 10], steps2[steps2['lifespan'] >= 10])
#compare_step_plots(steps2[steps2['lifespan'] >= 10], steps0[steps0['lifespan'] >= 10])
plt.show()
###Output
_____no_output_____
###Markdown
create CDF showing the duration difference
###Code
fig, ax = plt.subplots()
fig.set_size_inches(10, 8)
make_cdf(ax, [d for d in durations0 if d>10], label= 'original nodes > 10 min', xmin=10, xmax=60)
make_cdf(ax, [d for d in durations2 if d>10], label='final nodes > 10 min', xmin=10, xmax=60)
ax.legend(loc='lower right', prop={'size':20})
for item in ([ax.title, ax.xaxis.label, ax.yaxis.label] +
ax.get_xticklabels() + ax.get_yticklabels()):
item.set_fontsize(20)
#plt.show()
plt.savefig('long-cdf-improvement.png')
fig, ax = plt.subplots()
fig.set_size_inches(10, 8)
make_cdf(ax, durations0, label= 'all original', xmax=30)
make_cdf(ax, durations2, label='all final nodes', xmax=30)
ax.legend(loc='lower right', prop={'size':20})
for item in ([ax.title, ax.xaxis.label, ax.yaxis.label] +
ax.get_xticklabels() + ax.get_yticklabels()):
item.set_fontsize(20)
#plt.show()
plt.savefig('all-cdf-improvement.png')
fig, ax = plt.subplots()
make_cdf(ax, durations0, label= 'all original', xmax=60)
make_cdf(ax, durations2, label='all final nodes', xmax=60)
make_cdf(ax, [d for d in durations0 if d>10], label= 'original nodes > 10 min', xmin=10, xmax=60)
make_cdf(ax, [d for d in durations2 if d>10], label='final nodes > 10 min', xmin=10, xmax=60)
ax.legend(loc='lower right')
plt.show()
fig, ax = plt.subplots()
make_cdf(ax, durations0, label= 'all original')
make_cdf(ax, durations, label= 'original > 2 BL')
make_cdf(ax, durations2, label='all final nodes')
make_cdf(ax, moved_durations2, label='final nodes that originally moved > 2 BL')
ax.legend(loc='lower right')
plt.show()
###Output
_____no_output_____
###Markdown
TODO: create plots showing how applying the same threshold to our data before and after fixing it changes our True Positive / False Positives / False Negative rates
###Code
#TODO format this to work for graph based data.
def calculate_stats_for_bid_lists(bid_lists, prep_data):
base_accuracy = prep_data.load('accuracy')
matches = prep_data.load('matches')
counts, tps, fps, fns = [], [], [], []
for bids in bid_lists:
filtered_accuracy = ea.recalculate_accuracy(matches, base_accuracy, bids=bids)
counts.append(len(bids))
tp = filtered_accuracy['true-pos'].mean()
fp = filtered_accuracy['false-pos'].mean()
fn = filtered_accuracy['false-neg'].mean()
tps.append(tp)
fps.append(fp)
fns.append(fn)
tps = np.array(tps)
fps = np.array(fps)
fns = np.array(fns)
totals = fns + tps
tps_p = tps / totals * 100
fps_p = fps / totals * 100
fns_p = fns / totals * 100
print('true counts=', np.mean(totals))
data = pd.DataFrame([tps_p, fns_p, fps_p], index=['TP', 'FN', 'FP']).T
counts = pd.DataFrame(counts, columns=['counts'])
return data, counts
###Output
_____no_output_____
###Markdown
Error Checking Cells
###Code
unknown_ids, id_change_ids = show_reasons(ends)
#print unknown_ids
#print id_change_ids
unk = pd.DataFrame(unknown_ids)
name = '{eid}-unknown_ids.csv'.format(eid=ex_id)
print name
unk.to_csv(name, index=False, header=False)
col = pd.DataFrame(id_change_ids)
name = '{eid}-unresolved_collision_ids.csv'.format(eid=ex_id)
col.to_csv(name, index=False, header=False)
us = starts[starts['reason'] == 'unknown']
ue = ends[ends['reason'] == 'unknown'].reset_index()
#print us.head()
ue = ue.rename(columns={'node_id':'node1', 'bid':'blob1'})
ue = ue[['blob1', 'node1', 't', 'x', 'y']]
#print ue.head()
match_data = []
for blob2, s in starts.iterrows():
node2 = s['node_id']
x, y = s['x'], s['y']
t = s['t']
print blob2, node2, x, y, t
match_check = ue.copy()
match_check['node2'] = node2
    match_check['blob2'] = blob2
match_check['dt'] = t - match_check['t']
match_check['dx'] = np.fabs(x - match_check['x'])
match_check['dy'] = np.fabs(y - match_check['y'])
    match_check['dist'] = np.sqrt(match_check['dx']**2 + match_check['dy']**2)  # spatial distance only
match_check['d_spacetime'] = np.sqrt(match_check['dt']**2 + match_check['dx']**2 + match_check['dy']**2)
match_check = match_check[['blob1', 'blob2', 'dt', 'dist', 'd_spacetime']]
match_check.sort('d_spacetime', inplace=True)
print match_check.head()
###Output
_____no_output_____
###Markdown
these are two cells that aren't part of the workflow but that were used to sanity check the processing
###Code
def double_check_duration(experiment, graph1, graph2, node_ids):
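    """Sanity check: a node of `graph2` is 'good' only if none of its components
    in `graph1` were born before it or died after it; print how many nodes pass
    and fail the check."""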
frame_times = experiment.frame_times
step_data, durations = [], []
good_nodes = []
bad_nodes = []
for node in node_ids:
node_data = graph2.node[node]
bf, df = node_data['born'], node_data['died']
components = node_data.get('components', [])
node_checks_out = True
for c in components:
nd = graph1.node[c]
cb, cd = nd['born'], nd['died']
if cb < bf:
node_checks_out = False
if cd > df:
node_checks_out = False
if node_checks_out:
good_nodes.append(node)
else:
bad_nodes.append(node)
print len(bad_nodes), 'bad nodes'
print len(good_nodes), 'good nodes'
double_check_duration(experiment, original_graph, graph2, graph2.nodes(data=False))
print len(moved_bids), 'nodes moved over threshold in original graph'
graph3 = original_graph.copy()
print 'orig', len(find_nodes_containing_bids(graph3, moved_bids))
collider.remove_nodes_outside_roi(graph3, experiment)
print 'roi', len(find_nodes_containing_bids(graph3, moved_bids))
collider.remove_blank_nodes(graph3, experiment)
print 'blank', len(find_nodes_containing_bids(graph3, moved_bids))
collider.removal_suite(graph3) #, assimilate=-1)
print 'suite', len(find_nodes_containing_bids(graph3, moved_bids))
bridge_gaps(experiment, graph3, threshold=0.001)
print 'gaps', len(find_nodes_containing_bids(graph3, moved_bids))
steps3, durations3 = calculate_duration_data_from_graph(experiment, graph3, graph3.nodes(data=False))
fig, ax = plt.subplots()
step_plot(ax, steps3)
ax.set_title('second try at figuring out graph')
ax.set_xlabel('minutes')
fig, ax = plt.subplots()
make_cdf(ax, durations3)
plt.show()
# WEIRD!
# old and new functions for detecting movement do not match up 100%
# new function uses generalized body length, old one uses individual body length ...
min_move = 3
move = experiment.prepdata.load('moved')
moved_bids = list(set(move[move['bl_moved'] >= min_move]['bid']))
moving_nodes = [int(i) for i in graph.compound_bl_filter(experiment, threshold=min_move)]
print len(moved_bids), 'nodes moved over threshold in original graph'
print len(moving_nodes), 'found with modern methods'
print len(set(moved_bids) & set(moving_nodes))
print len(find_nodes_containing_bids(original_graph, moved_bids)), 'check. this number should be same as first'
###Output
_____no_output_____ |
experiments/baseline_ptn_32bit/metehan/trials/1/trial.ipynb | ###Markdown
PTN TemplateThis notebook serves as a template for single dataset PTN experiments. It can be run on its own by setting STANDALONE to True (do a find for "STANDALONE" to see where), but it is intended to be executed as part of a *papermill.py script. See any of the experiments with a papermill script to get started with that workflow.
###Code
%load_ext autoreload
%autoreload 2
%matplotlib inline
import os, json, sys, time, random
import numpy as np
import torch
from torch.optim import Adam
from easydict import EasyDict
import matplotlib.pyplot as plt
from steves_models.steves_ptn import Steves_Prototypical_Network
from steves_utils.lazy_iterable_wrapper import Lazy_Iterable_Wrapper
from steves_utils.iterable_aggregator import Iterable_Aggregator
from steves_utils.ptn_train_eval_test_jig import PTN_Train_Eval_Test_Jig
from steves_utils.torch_sequential_builder import build_sequential
from steves_utils.torch_utils import get_dataset_metrics, ptn_confusion_by_domain_over_dataloader
from steves_utils.utils_v2 import (per_domain_accuracy_from_confusion, get_datasets_base_path)
from steves_utils.PTN.utils import independent_accuracy_assesment
from steves_utils.stratified_dataset.episodic_accessor import Episodic_Accessor_Factory
from steves_utils.ptn_do_report import (
get_loss_curve,
get_results_table,
get_parameters_table,
get_domain_accuracies,
)
from steves_utils.transforms import get_chained_transform
###Output
_____no_output_____
###Markdown
Required ParametersThese are the allowed parameters, not defaults. Each of these values needs to be present in the injected parameters (the notebook will raise an exception if any are missing). Papermill uses the cell tag "parameters" to inject the real parameters below this cell. Enable tags to see what I mean.
###Code
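# How this template is normally driven (a hedged sketch only -- this is not the
# actual *papermill.py driver script, and the paths/values below are illustrative):
#
#     import papermill as pm
#     pm.execute_notebook(
#         "trial.ipynb",        # this template
#         "trial.out.ipynb",    # executed copy with outputs
#         parameters=dict(experiment_name="...", lr=0.001, seed=1337),  # etc.
#     )
#
# Papermill overrides the values in the cell tagged "parameters" by injecting a
# new cell with these values directly below it.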
required_parameters = {
"experiment_name",
"lr",
"device",
"seed",
"dataset_seed",
"labels_source",
"labels_target",
"domains_source",
"domains_target",
"num_examples_per_domain_per_label_source",
"num_examples_per_domain_per_label_target",
"n_shot",
"n_way",
"n_query",
"train_k_factor",
"val_k_factor",
"test_k_factor",
"n_epoch",
"patience",
"criteria_for_best",
"x_transforms_source",
"x_transforms_target",
"episode_transforms_source",
"episode_transforms_target",
"pickle_name",
"x_net",
"NUM_LOGS_PER_EPOCH",
"BEST_MODEL_PATH",
"torch_default_dtype"
}
standalone_parameters = {}
standalone_parameters["experiment_name"] = "STANDALONE PTN"
standalone_parameters["lr"] = 0.0001
standalone_parameters["device"] = "cuda"
standalone_parameters["seed"] = 1337
standalone_parameters["dataset_seed"] = 1337
standalone_parameters["num_examples_per_domain_per_label_source"]=100
standalone_parameters["num_examples_per_domain_per_label_target"]=100
standalone_parameters["n_shot"] = 3
standalone_parameters["n_query"] = 2
standalone_parameters["train_k_factor"] = 1
standalone_parameters["val_k_factor"] = 2
standalone_parameters["test_k_factor"] = 2
standalone_parameters["n_epoch"] = 100
standalone_parameters["patience"] = 10
standalone_parameters["criteria_for_best"] = "target_accuracy"
standalone_parameters["x_transforms_source"] = ["unit_power"]
standalone_parameters["x_transforms_target"] = ["unit_power"]
standalone_parameters["episode_transforms_source"] = []
standalone_parameters["episode_transforms_target"] = []
standalone_parameters["torch_default_dtype"] = "torch.float32"
standalone_parameters["x_net"] = [
{"class": "nnReshape", "kargs": {"shape":[-1, 1, 2, 256]}},
{"class": "Conv2d", "kargs": { "in_channels":1, "out_channels":256, "kernel_size":(1,7), "bias":False, "padding":(0,3), },},
{"class": "ReLU", "kargs": {"inplace": True}},
{"class": "BatchNorm2d", "kargs": {"num_features":256}},
{"class": "Conv2d", "kargs": { "in_channels":256, "out_channels":80, "kernel_size":(2,7), "bias":True, "padding":(0,3), },},
{"class": "ReLU", "kargs": {"inplace": True}},
{"class": "BatchNorm2d", "kargs": {"num_features":80}},
{"class": "Flatten", "kargs": {}},
{"class": "Linear", "kargs": {"in_features": 80*256, "out_features": 256}}, # 80 units per IQ pair
{"class": "ReLU", "kargs": {"inplace": True}},
{"class": "BatchNorm1d", "kargs": {"num_features":256}},
{"class": "Linear", "kargs": {"in_features": 256, "out_features": 256}},
]
# Parameters relevant to results
# These parameters will basically never need to change
standalone_parameters["NUM_LOGS_PER_EPOCH"] = 10
standalone_parameters["BEST_MODEL_PATH"] = "./best_model.pth"
# uncomment for CORES dataset
from steves_utils.CORES.utils import (
ALL_NODES,
ALL_NODES_MINIMUM_1000_EXAMPLES,
ALL_DAYS
)
standalone_parameters["labels_source"] = ALL_NODES
standalone_parameters["labels_target"] = ALL_NODES
standalone_parameters["domains_source"] = [1]
standalone_parameters["domains_target"] = [2,3,4,5]
standalone_parameters["pickle_name"] = "cores.stratified_ds.2022A.pkl"
# Uncomment these for ORACLE dataset
# from steves_utils.ORACLE.utils_v2 import (
# ALL_DISTANCES_FEET,
# ALL_RUNS,
# ALL_SERIAL_NUMBERS,
# )
# standalone_parameters["labels_source"] = ALL_SERIAL_NUMBERS
# standalone_parameters["labels_target"] = ALL_SERIAL_NUMBERS
# standalone_parameters["domains_source"] = [8,20, 38,50]
# standalone_parameters["domains_target"] = [14, 26, 32, 44, 56]
# standalone_parameters["pickle_name"] = "oracle.frame_indexed.stratified_ds.2022A.pkl"
# standalone_parameters["num_examples_per_domain_per_label_source"]=1000
# standalone_parameters["num_examples_per_domain_per_label_target"]=1000
# Uncomment these for Metahan dataset
# standalone_parameters["labels_source"] = list(range(19))
# standalone_parameters["labels_target"] = list(range(19))
# standalone_parameters["domains_source"] = [0]
# standalone_parameters["domains_target"] = [1]
# standalone_parameters["pickle_name"] = "metehan.stratified_ds.2022A.pkl"
# standalone_parameters["n_way"] = len(standalone_parameters["labels_source"])
# standalone_parameters["num_examples_per_domain_per_label_source"]=200
# standalone_parameters["num_examples_per_domain_per_label_target"]=100
standalone_parameters["n_way"] = len(standalone_parameters["labels_source"])
# Parameters
parameters = {
"experiment_name": "baseline_ptn_32bit_metehan",
"lr": 0.001,
"device": "cuda",
"seed": 1337,
"dataset_seed": 1337,
"labels_source": [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18],
"labels_target": [0, 1, 2, 3, 4, 5, 6, 7, 8, 9, 10, 11, 12, 13, 14, 15, 16, 17, 18],
"x_transforms_source": [],
"x_transforms_target": [],
"episode_transforms_source": [],
"episode_transforms_target": [],
"num_examples_per_domain_per_label_source": 100,
"num_examples_per_domain_per_label_target": 100,
"n_shot": 3,
"n_way": 19,
"n_query": 2,
"train_k_factor": 1,
"val_k_factor": 2,
"test_k_factor": 2,
"torch_default_dtype": "torch.float32",
"n_epoch": 50,
"patience": 3,
"criteria_for_best": "target_loss",
"x_net": [
{"class": "nnReshape", "kargs": {"shape": [-1, 1, 2, 256]}},
{
"class": "Conv2d",
"kargs": {
"in_channels": 1,
"out_channels": 256,
"kernel_size": [1, 7],
"bias": False,
"padding": [0, 3],
},
},
{"class": "ReLU", "kargs": {"inplace": True}},
{"class": "BatchNorm2d", "kargs": {"num_features": 256}},
{
"class": "Conv2d",
"kargs": {
"in_channels": 256,
"out_channels": 80,
"kernel_size": [2, 7],
"bias": True,
"padding": [0, 3],
},
},
{"class": "ReLU", "kargs": {"inplace": True}},
{"class": "BatchNorm2d", "kargs": {"num_features": 80}},
{"class": "Flatten", "kargs": {}},
{"class": "Linear", "kargs": {"in_features": 20480, "out_features": 256}},
{"class": "ReLU", "kargs": {"inplace": True}},
{"class": "BatchNorm1d", "kargs": {"num_features": 256}},
{"class": "Linear", "kargs": {"in_features": 256, "out_features": 256}},
],
"NUM_LOGS_PER_EPOCH": 10,
"BEST_MODEL_PATH": "./best_model.pth",
"pickle_name": "metehan.stratified_ds.2022A.pkl",
"domains_source": [1],
"domains_target": [0, 2],
}
# Set this to True if you want to run this template directly
STANDALONE = False
if STANDALONE:
print("parameters not injected, running with standalone_parameters")
parameters = standalone_parameters
if not 'parameters' in locals() and not 'parameters' in globals():
raise Exception("Parameter injection failed")
#Use an easy dict for all the parameters
p = EasyDict(parameters)
supplied_keys = set(p.keys())
if supplied_keys != required_parameters:
print("Parameters are incorrect")
if len(supplied_keys - required_parameters)>0: print("Shouldn't have:", str(supplied_keys - required_parameters))
if len(required_parameters - supplied_keys)>0: print("Need to have:", str(required_parameters - supplied_keys))
raise RuntimeError("Parameters are incorrect")
###################################
# Set the RNGs and make it all deterministic
###################################
np.random.seed(p.seed)
random.seed(p.seed)
torch.manual_seed(p.seed)
torch.use_deterministic_algorithms(True)
###########################################
# The stratified datasets honor this
###########################################
torch.set_default_dtype(eval(p.torch_default_dtype))
###################################
# Build the network(s)
# Note: It's critical to do this AFTER setting the RNG
# (This is due to the randomized initial weights)
###################################
x_net = build_sequential(p.x_net)
start_time_secs = time.time()
###################################
# Build the dataset
###################################
if p.x_transforms_source == []: x_transform_source = None
else: x_transform_source = get_chained_transform(p.x_transforms_source)
if p.x_transforms_target == []: x_transform_target = None
else: x_transform_target = get_chained_transform(p.x_transforms_target)
if p.episode_transforms_source == []: episode_transform_source = None
else: raise Exception("episode_transform_source not implemented")
if p.episode_transforms_target == []: episode_transform_target = None
else: raise Exception("episode_transform_target not implemented")
eaf_source = Episodic_Accessor_Factory(
labels=p.labels_source,
domains=p.domains_source,
num_examples_per_domain_per_label=p.num_examples_per_domain_per_label_source,
iterator_seed=p.seed,
dataset_seed=p.dataset_seed,
n_shot=p.n_shot,
n_way=p.n_way,
n_query=p.n_query,
train_val_test_k_factors=(p.train_k_factor,p.val_k_factor,p.test_k_factor),
pickle_path=os.path.join(get_datasets_base_path(), p.pickle_name),
x_transform_func=x_transform_source,
example_transform_func=episode_transform_source,
)
train_original_source, val_original_source, test_original_source = eaf_source.get_train(), eaf_source.get_val(), eaf_source.get_test()
eaf_target = Episodic_Accessor_Factory(
labels=p.labels_target,
domains=p.domains_target,
num_examples_per_domain_per_label=p.num_examples_per_domain_per_label_target,
iterator_seed=p.seed,
dataset_seed=p.dataset_seed,
n_shot=p.n_shot,
n_way=p.n_way,
n_query=p.n_query,
train_val_test_k_factors=(p.train_k_factor,p.val_k_factor,p.test_k_factor),
pickle_path=os.path.join(get_datasets_base_path(), p.pickle_name),
x_transform_func=x_transform_target,
example_transform_func=episode_transform_target,
)
train_original_target, val_original_target, test_original_target = eaf_target.get_train(), eaf_target.get_val(), eaf_target.get_test()
transform_lambda = lambda ex: ex[1] # Original is (<domain>, <episode>) so we strip down to episode only
train_processed_source = Lazy_Iterable_Wrapper(train_original_source, transform_lambda)
val_processed_source = Lazy_Iterable_Wrapper(val_original_source, transform_lambda)
test_processed_source = Lazy_Iterable_Wrapper(test_original_source, transform_lambda)
train_processed_target = Lazy_Iterable_Wrapper(train_original_target, transform_lambda)
val_processed_target = Lazy_Iterable_Wrapper(val_original_target, transform_lambda)
test_processed_target = Lazy_Iterable_Wrapper(test_original_target, transform_lambda)
datasets = EasyDict({
"source": {
"original": {"train":train_original_source, "val":val_original_source, "test":test_original_source},
"processed": {"train":train_processed_source, "val":val_processed_source, "test":test_processed_source}
},
"target": {
"original": {"train":train_original_target, "val":val_original_target, "test":test_original_target},
"processed": {"train":train_processed_target, "val":val_processed_target, "test":test_processed_target}
},
})
# Some quick unit tests on the data
from steves_utils.transforms import get_average_power, get_average_magnitude
q_x, q_y, s_x, s_y, truth = next(iter(train_processed_source))
assert q_x.dtype == eval(p.torch_default_dtype)
assert s_x.dtype == eval(p.torch_default_dtype)
print("Visually inspect these to see if they line up with expected values given the transforms")
print('x_transforms_source', p.x_transforms_source)
print('x_transforms_target', p.x_transforms_target)
print("Average magnitude, source:", get_average_magnitude(q_x[0].numpy()))
print("Average power, source:", get_average_power(q_x[0].numpy()))
q_x, q_y, s_x, s_y, truth = next(iter(train_processed_target))
print("Average magnitude, target:", get_average_magnitude(q_x[0].numpy()))
print("Average power, target:", get_average_power(q_x[0].numpy()))
###################################
# Build the model
###################################
model = Steves_Prototypical_Network(x_net, device=p.device, x_shape=(2,256))
optimizer = Adam(params=model.parameters(), lr=p.lr)
###################################
# train
###################################
jig = PTN_Train_Eval_Test_Jig(model, p.BEST_MODEL_PATH, p.device)
jig.train(
train_iterable=datasets.source.processed.train,
source_val_iterable=datasets.source.processed.val,
target_val_iterable=datasets.target.processed.val,
num_epochs=p.n_epoch,
num_logs_per_epoch=p.NUM_LOGS_PER_EPOCH,
patience=p.patience,
optimizer=optimizer,
criteria_for_best=p.criteria_for_best,
)
total_experiment_time_secs = time.time() - start_time_secs
###################################
# Evaluate the model
###################################
source_test_label_accuracy, source_test_label_loss = jig.test(datasets.source.processed.test)
target_test_label_accuracy, target_test_label_loss = jig.test(datasets.target.processed.test)
source_val_label_accuracy, source_val_label_loss = jig.test(datasets.source.processed.val)
target_val_label_accuracy, target_val_label_loss = jig.test(datasets.target.processed.val)
history = jig.get_history()
total_epochs_trained = len(history["epoch_indices"])
val_dl = Iterable_Aggregator((datasets.source.original.val,datasets.target.original.val))
confusion = ptn_confusion_by_domain_over_dataloader(model, p.device, val_dl)
per_domain_accuracy = per_domain_accuracy_from_confusion(confusion)
# Add a key to per_domain_accuracy for if it was a source domain
for domain, accuracy in per_domain_accuracy.items():
per_domain_accuracy[domain] = {
"accuracy": accuracy,
"source?": domain in p.domains_source
}
# Do an independent accuracy assessment JUST TO BE SURE!
# _source_test_label_accuracy = independent_accuracy_assesment(model, datasets.source.processed.test, p.device)
# _target_test_label_accuracy = independent_accuracy_assesment(model, datasets.target.processed.test, p.device)
# _source_val_label_accuracy = independent_accuracy_assesment(model, datasets.source.processed.val, p.device)
# _target_val_label_accuracy = independent_accuracy_assesment(model, datasets.target.processed.val, p.device)
# assert(_source_test_label_accuracy == source_test_label_accuracy)
# assert(_target_test_label_accuracy == target_test_label_accuracy)
# assert(_source_val_label_accuracy == source_val_label_accuracy)
# assert(_target_val_label_accuracy == target_val_label_accuracy)
experiment = {
"experiment_name": p.experiment_name,
"parameters": dict(p),
"results": {
"source_test_label_accuracy": source_test_label_accuracy,
"source_test_label_loss": source_test_label_loss,
"target_test_label_accuracy": target_test_label_accuracy,
"target_test_label_loss": target_test_label_loss,
"source_val_label_accuracy": source_val_label_accuracy,
"source_val_label_loss": source_val_label_loss,
"target_val_label_accuracy": target_val_label_accuracy,
"target_val_label_loss": target_val_label_loss,
"total_epochs_trained": total_epochs_trained,
"total_experiment_time_secs": total_experiment_time_secs,
"confusion": confusion,
"per_domain_accuracy": per_domain_accuracy,
},
"history": history,
"dataset_metrics": get_dataset_metrics(datasets, "ptn"),
}
ax = get_loss_curve(experiment)
plt.show()
get_results_table(experiment)
get_domain_accuracies(experiment)
print("Source Test Label Accuracy:", experiment["results"]["source_test_label_accuracy"], "Target Test Label Accuracy:", experiment["results"]["target_test_label_accuracy"])
print("Source Val Label Accuracy:", experiment["results"]["source_val_label_accuracy"], "Target Val Label Accuracy:", experiment["results"]["target_val_label_accuracy"])
json.dumps(experiment)
###Output
_____no_output_____ |
math/1. Mean, Median and Mode.ipynb | ###Markdown
1. Mean, Median and Mode
###Code
import pandas as pd
import seaborn as sns
import matplotlib.pyplot as plt
import numpy as np
###Output
_____no_output_____
###Markdown
Visualization styling code
###Code
sns.set_context('talk')
df_tips = pd.read_csv('/Users/surendra/workspaces/python-remote-workspace/notebooks/datasets/tips.csv')
###Output
_____no_output_____
###Markdown
Data set

|column name|column description|
|:---:|:---:|
| total_bill | financial amount of meal in U.S. dollars |
|tip|financial amount of the meal's tip in U.S. dollars|
|sex|gender of server|
|smoker|boolean to represent if server smokes or not|
|day|day of week|
|time|meal name (Lunch or Dinner)|
|size|count of people eating meal|
###Code
df_tips.head()
df_tips['total_bill'].plot(kind='hist', figsize=(6, 6), linewidth=1, color='whitesmoke', edgecolor='gray')
plt.xlabel("total bill [$]", labelpad=15, fontsize=13)
plt.ylabel("frequency", labelpad=15, fontsize=13)
plt.title("Distribution of Total Bill Amounts", y=1.012, fontsize=13);
###Output
_____no_output_____
###Markdown
---The `mode` is the `most frequently occurring` value in a field.
###Code
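# Illustrative aside (not part of the original notebook): the mode by hand on a
# tiny list, using collections.Counter -- the most frequently occurring value wins.
from collections import Counter
sample = [1, 2, 2, 3, 3, 3]
assert Counter(sample).most_common(1)[0][0] == 3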
mode_total_bill = df_tips['total_bill'].mode()
type(mode_total_bill)
len(mode_total_bill)
mode_total_bill[0]
# df_tips.query('total_bill==13.42')
df_tips.query('total_bill==@mode_total_bill[0]')
###Output
_____no_output_____
###Markdown
---Once a set of values is `sorted`, the median is the `middle value`; with an even number of values, it is the average of the two middle values.
###Code
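# Illustrative aside (not part of the original notebook): the median by hand.
# After sorting, take the middle value, or average the two middle values when
# the count is even.
def simple_median(values):
    s = sorted(values)
    n = len(s)
    mid = n // 2
    return s[mid] if n % 2 == 1 else (s[mid - 1] + s[mid]) / 2
assert simple_median([5, 1, 3]) == 3
assert simple_median([4, 1, 3, 2]) == 2.5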
median_total_bill = df_tips['total_bill'].median()
median_total_bill
###Output
_____no_output_____ |
notebooks/tutorials/Projections.ipynb | ###Markdown
Working with Projections[Projections](https://en.wikipedia.org/wiki/Map_projection) are ``geoplot``'s killer feature. Our explanation is by example. Throughout this segment of the tutorial we'll use the `polyplot` plot type, which faithfully displays whatever geometry is put into it. Projection and unprojection
###Code
import geopandas as gpd
import geoplot as gplt
%matplotlib inline
# Load the example data.
# All of the examples in this notebook use the `quilt` package to do this.
from quilt.data.ResidentMario import geoplot_data
contiguous_usa = gpd.read_file(geoplot_data.contiguous_usa())
gplt.polyplot(contiguous_usa).set_aspect("equal")
###Output
_____no_output_____
###Markdown
This map is an example of an unprojected plot: it reproduces our coordinates as if they were on a flat Cartesian plane. But remember, the Earth is not a flat surface; it's a sphere. This isn't a map of the United States that you've seen in print anywhere because it badly distorts both of the [two criteria](http://www.geo.hunter.cuny.edu/~jochen/gtech201/lectures/lec6concepts/Map%20coordinate%20systems/How%20to%20choose%20a%20projection.htm) most projections are evaluated on: *shape* and *area*. For sufficiently small areas, the amount of distortion is very small. This map of New York City, for example, is reasonably accurate:
###Code
boroughs = gpd.read_file(geoplot_data.nyc_boroughs())
gplt.polyplot(boroughs)
###Output
_____no_output_____
###Markdown
More consequentially, ``geoplot`` returns unprojected plots as pure ``matplotlib`` ``AxesSubplot`` objects, while projected plots, which require heavy-lifting in the form of coordinate transformations, are returned as ``cartopy`` ``GeoAxesSubplot`` objects. To understand why this matters, let's first take a look at how we generate projected plots, and what they look like.In the case of the contiguous United States, the projection most often used is known as the [Albers Equal Area projection](https://en.wikipedia.org/wiki/Albers_projection). This projection works by wrapping the Earth around a cone, one that's particularly well optimized for locations near the middle of the Northern Hemisphere (and particularly poorly for locations at the poles). To place our plot in a projection, we need to pass the projection of interest to the `projection` keyword parameter. ``geoplot`` functions expect input to come from the `geoplot.crs` module, imported as ``gcrs`` by convention.
###Code
import geoplot.crs as gcrs
gplt.polyplot(contiguous_usa, projection=gcrs.AlbersEqualArea())
###Output
_____no_output_____
###Markdown
``geoplot`` projections are actually a very thin wrapper on ``cartopy.crs`` projections, and every ``cartopy`` projection is implemented in ``geoplot.crs``. Refer to [this page](http://scitools.org.uk/cartopy/docs/latest/crs/projections.html) to see the list of projections that ``geoplot`` implements.<!--You may be wondering, if ``geoplot.crs`` is a wrapper on ``cartopy.crs``, why not just use Cartopy CRS objects directly? This comes down to an important implementation detail: when Cartopy CRS objects are used as the library intends for them to be used, projection geolocation settings are supposed to be defined as parameters to the projection and cannot be modified after instantiation. This means that if you don't explicitly specify otherwise yourself, a Cartopy CRS object will result in a map centered on mid-Africa—coordinate `(0, 0)`!``geoplot`` avoids forcing this extra work on the user by computing sensible defaults, based on the data provided, when exact settings are not provided. This is why the plot above "just works": ``geoplot`` computed the mean centroid of the polygons and centered the plot on that coordinate in the background. This feature comes at the cost of a little bit of awkwardness, requiring our wrapper classes, but overall the tradeoff seems to be very "worth it".-->At this time, the defaults are still a work in progress, however. If you look closely at this figure you'll notice that our copy of the United States is ever so slightly skewed downwards and to the right, indicating that the default settings ``geoplot`` calculates for us are off. We can correct this by specifying center coordinates ourselves.The [center of the contiguous United States](https://en.wikipedia.org/wiki/Geographic_center_of_the_contiguous_United_States) is 39°50′N 98°35′W. If we provide approximately these coordinates as `central_latitude` and `central_longitude` coordinates to our projection, our skew is fixed!
###Code
gplt.polyplot(contiguous_usa,
projection=gcrs.AlbersEqualArea(central_longitude=-98,
central_latitude=39.5))
###Output
_____no_output_____
###Markdown
This is the version of the map of the United States that you're probably most familiar with. Tips and tricks``geoplot`` comes with most of the projections provided by ``cartopy`` built-in. Of particular value are global projections, which provide a way of visualizing your data on top of an actual sphere:
###Code
ax = gplt.polyplot(contiguous_usa,
projection=gcrs.Orthographic(central_longitude=-98))
ax.set_global()
ax.outline_patch.set_visible(True)
ax.coastlines()
# ax.stock_img()
###Output
_____no_output_____
###Markdown
This example also demonstrates another ``cartopy`` feature: the fact that ``cartopy`` has several different low-resolution geographies built-in. In this case, we also tossed the world coastlines onto the map. Also note the use of `set_global` here to quickly set the plotting context to the entire world.To see more like this, check out the [Los Angeles Flights](../../examples/los-angeles-flights.html) example in the Gallery.Now, recall that ``geoplot`` returns unprojected plots as pure ``matplotlib`` ``AxesSubplot`` objects, while projected plots are returned as ``cartopy`` ``GeoAxesSubplot`` objects. But ``cartopy`` ``GeoAxesSubplot`` objects cannot be colocated with ``matplotlib`` ``AxesSubplot`` objects, nor vice versa! Once you have a graph, you're stuck in whatever "ecosystem" you chose to be in at runtime. This is the major reason why we even bother providing an option to get "inferior-looking" ``AxesSubplot`` output at all: because it can be integrated with other "stuff" in the wider ``matplotlib`` ecosystem.For example, consider the ``mplleaflet`` library. ``mplleaflet`` is a small library which allows you to place ``matplotlib`` plots on an interactive [Leaflet](http://leafletjs.com/) webmap:
###Code
import mplleaflet
gplt.polyplot(boroughs)
mplleaflet.show()
pass
from IPython.display import Image
Image("./figures/leaflet-webmap-example.png")
###Output
_____no_output_____ |
examples/reference/containers/bokeh/NdLayout.ipynb | ###Markdown
Title NdLayout Container Dependencies Bokeh Backends Bokeh Matplotlib Plotly
###Code
import numpy as np
import holoviews as hv
hv.extension('bokeh')
###Output
_____no_output_____
###Markdown
An ``NdLayout`` is a multi-dimensional dictionary of HoloViews elements presented side-by-side like a ``Layout``. An ``NdLayout`` can be considered a special case of ``HoloMap`` that can hold any one type of HoloViews container or element as long as it isn't another ``NdLayout`` or ``Layout``. Unlike a regular ``Layout`` that can be built with the ``+`` operator, the items in an ``NdLayout`` container have corresponding keys and must all have the same type. See the [Building Composite Objects](../../../user_guide/06-Building_Composite_Objects.ipynb) user guide for details on how to compose containers. ``NdLayout`` holds dictionaries Using the ``sine_curve`` function below, we can declare a dictionary of ``Curve`` elements, where the keys correspond to the frequency values:
###Code
frequencies = [0.5, 0.75, 1.0, 1.25]
def sine_curve(phase, freq):
xvals = [0.1* i for i in range(100)]
return hv.Curve((xvals, [np.sin(phase+freq*x) for x in xvals]))
curve_dict = {f:sine_curve(0,f) for f in frequencies}
###Output
_____no_output_____
###Markdown
We now have a dictionary where the frequency is the key and the corresponding curve element is the value. We can now turn this dictionary into an ``NdLayout`` by declaring the keys as corresponding to the frequency key dimension:
###Code
NdLayout = hv.NdLayout(curve_dict, kdims='frequency')
NdLayout
###Output
_____no_output_____
###Markdown
``NdLayout`` is multi-dimensional By using tuple keys and making sure each position in the tuple is assigned a corresponding ``kdim``, ``NdLayouts`` allow visualization of a multi-dimensional space:
###Code
curve_dict_2D = {(p,f):sine_curve(p,f) for p in [0, np.pi/2] for f in [0.5, 0.75]}
NdLayout = hv.NdLayout(curve_dict_2D, kdims=['phase', 'frequency'])
NdLayout
###Output
_____no_output_____
###Markdown
``NdLayout`` is similar to ``HoloMap`` Other than the difference in the visual semantics, whereby ``NdLayout`` displays its contents overlaid, ``NdLayout`` are very similar to ``HoloMap`` (see the [``HoloMap``](./HoloMap.ipynb) notebook for more information).One way to demonstrate the similarity of these two containers is to cast our ``NdLayout`` object to ``HoloMap``:
###Code
hv.HoloMap(NdLayout)
###Output
_____no_output_____
###Markdown
Title NdLayout Container Dependencies Matplotlib Backends Bokeh Matplotlib
###Code
import numpy as np
import holoviews as hv
hv.extension('bokeh')
###Output
_____no_output_____
###Markdown
An ``NdLayout`` is a multi-dimensional dictionary of HoloViews elements presented side-by-side like a ``Layout``. An ``NdLayout`` can be considered a special case of ``HoloMap`` that can hold any one type of HoloViews container or element as long as it isn't another ``NdLayout`` or ``Layout``. Unlike a regular ``Layout`` that can be built with the ``+`` operator, the items in an ``NdLayout`` container have corresponding keys and must all have the same type. See the [Building Composite Objects](../../../user_guide/05-Building_Composite_Objects.ipynb) user guide for details on how to compose containers. ``NdLayout`` holds dictionaries Using the ``sine_curve`` function below, we can declare a dictionary of ``Curve`` elements, where the keys correspond to the frequency values:
###Code
frequencies = [0.5, 0.75, 1.0, 1.25]
def sine_curve(phase, freq):
xvals = [0.1* i for i in range(100)]
return hv.Curve((xvals, [np.sin(phase+freq*x) for x in xvals]))
curve_dict = {f:sine_curve(0,f) for f in frequencies}
###Output
_____no_output_____
###Markdown
We now have a dictionary where the frequency is the key and the corresponding curve element is the value. We can now turn this dictionary into an ``NdLayout`` by declaring the keys as corresponding to the frequency key dimension:
###Code
NdLayout = hv.NdLayout(curve_dict, kdims=['frequency'])
NdLayout
###Output
_____no_output_____
###Markdown
``NdLayout`` is multi-dimensional By using tuple keys and making sure each position in the tuple is assigned a corresponding ``kdim``, ``NdLayouts`` allow visualization of a multi-dimensional space:
###Code
curve_dict_2D = {(p,f):sine_curve(p,f) for p in [0, np.pi/2] for f in [0.5, 0.75]}
NdLayout = hv.NdLayout(curve_dict_2D, kdims=['phase', 'frequency'])
NdLayout
###Output
_____no_output_____
###Markdown
``NdLayout`` is similar to ``HoloMap`` Other than the difference in the visual semantics, whereby ``NdLayout`` displays its contents overlaid, ``NdLayout`` are very similar to ``HoloMap`` (see the [``HoloMap``](./HoloMap.ipynb) notebook for more information).One way to demonstrate the similarity of these two containers is to cast our ``NdLayout`` object to ``HoloMap``:
###Code
hv.HoloMap(NdLayout)
###Output
_____no_output_____
###Markdown
neural networks/fundamentals and captchas/fundamentals_and_captchas.ipynb | ###Markdown
Neural NetworksNeural Networks created using concepts as described in [Data Science from Scratch, 2nd Edition](http://shop.oreilly.com/product/0636920179337.do) on neural networks.
###Code
import math
import random
import matplotlib.pyplot as plt
###Output
_____no_output_____
###Markdown
Perceptron The simplest neural network, which approximates a single neuron with n binary inputs. It computes a weighted sum of its inputs and "fires" if that weighted sum is zero or greater.
###Code
def step_function(x):
return 1 if x >= 0 else 0
def dot(v, w):
"""
dot product of v and w
"""
return sum(v_i * w_i for v_i, w_i in zip(v, w))
def perceptron(weights, bias, x):
"""returns 1 if the perceptron 'fires', 0 if not"""
return step_function(dot(weights, x) + bias)
###Output
_____no_output_____
###Markdown
AND Gate
###Code
weights = [2, 2]
bias = -3
perceptron(weights, bias, [1, 1])
###Output
_____no_output_____
###Markdown
OR Gate
###Code
weights = [2, 2]
bias = -1
perceptron(weights, bias, [0, 0])
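# Illustrative extra (not in the original text): a NOT gate needs only one input.
# With weight -2 and bias 1 the perceptron fires for input 0 and stays silent for 1.
not_weights = [-2]
not_bias = 1
assert perceptron(not_weights, not_bias, [0]) == 1
assert perceptron(not_weights, not_bias, [1]) == 0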
# Figure placeholder: decision space for a two-input perceptron
###Output
_____no_output_____
###Markdown
Feed-Forward Neural Networks
###Code
def sigmoid(t):
return 1 / (1 + math.exp(-t))
z = [zi/50 - 10 for zi in range(1000)]
plt.plot(z, [step_function(zi) for zi in z], '--', label='step')
plt.plot(z, [sigmoid(zi) for zi in z], label='sigmoid')
plt.title('step vs sigmoid')
plt.legend()
plt.show()
###Output
_____no_output_____
###Markdown
Why sigmoid?In order to train a neural network, we’ll need to use calculus, and in order to use calculus, we need smooth functions. The step function isn’t even continuous, and sigmoid is a good smooth approximation of it.
###Code
def neuron_output(weights, input_with_bias):
return sigmoid(dot(weights, input_with_bias))
def feed_forward(neural_network, input_vector):
"""takes in a neural network (represented as a list of lists of lists of weights)
and returns the output from forward-propagating the input"""
outputs = []
for layer in neural_network:
input_with_bias = input_vector + [1]
output = [neuron_output(neuron, input_with_bias)
for neuron in layer]
outputs.append(output)
# the input to the next layer is the output of this one
input_vector = output
return outputs
###Output
_____no_output_____
###Markdown
Building an XOR Gate
###Code
xor_network = [
[[20, 20, -30], [20, 20, -10]],
[[-60, 60, -30]]
]
outputs = feed_forward(xor_network,[0, 1])
outputs
for x in [0,1]:
for y in [0,1]:
print(x, y, feed_forward(xor_network,[x, y])[-1])
###Output
0 0 [9.38314668300676e-14]
0 1 [0.9999999999999059]
1 0 [0.9999999999999059]
1 1 [9.383146683006828e-14]
###Markdown
Backpropagation
###Code
def backpropagate(network, input_vector, targets):
"""
Adjust the weights in the network in place based on a single
labeled training example.
"""
hidden_outputs, outputs = feed_forward(network, input_vector)
# the output * (1 - output) is from the derivative of sigmoid
output_deltas = [output * (1 - output) * (output - target)
for output, target in zip(outputs, targets)]
# adjust weights for output layer, one neuron at a time
for i, output_neuron in enumerate(network[-1]):
# focus on the ith output layer neuron
for j, hidden_output in enumerate(hidden_outputs + [1]):
# adjust the jth weight based on both
# this neuron's delta and its jth input
output_neuron[j] -= output_deltas[i] * hidden_output
# back-propagate errors to hidden layer
hidden_deltas = [hidden_output * (1 - hidden_output) *
dot(output_deltas, [n[i] for n in network[-1]])
for i, hidden_output in enumerate(hidden_outputs)]
# adjust weights for hidden layer, one neuron at a time
for i, hidden_neuron in enumerate(network[0]):
for j, input in enumerate(input_vector + [1]):
hidden_neuron[j] -= hidden_deltas[i] * input
###Output
_____no_output_____
###Markdown
train XOR Randomly initialize
###Code
nn = [
[[random.normalvariate(0, 1/3), random.normalvariate(0, 1/3), random.normalvariate(0, 1/3)],
[random.normalvariate(0, 1/3), random.normalvariate(0, 1/3), random.normalvariate(0, 1/3)]],
[[random.normalvariate(0, 1/3), random.normalvariate(0, 1/3), random.normalvariate(0, 1/3)]]
]
###Output
_____no_output_____
###Markdown
train network
###Code
training_set = [
[[0,0], [0]],
[[0,1], [1]],
[[1,0], [1]],
[[1,1], [0]]
]
for _ in range(10000):
for input_vector, targets in random.sample(training_set, len(training_set)):
backpropagate(nn, input_vector, targets)
nn
###Output
_____no_output_____
###Markdown
Predict
###Code
for input_vector, targets in training_set:
output = feed_forward(nn, input_vector)
print(input_vector, output[-1])
###Output
[0, 0] [0.020747816587668888]
[0, 1] [0.9763200141110439]
[1, 0] [0.976319741551754]
[1, 1] [0.027675706528516506]
###Markdown
Example: Defeating a CAPTCHA
###Code
raw_digits = [
"""11111
1...1
1...1
1...1
11111""",
"""..1..
..1..
..1..
..1..
..1..""",
"""11111
....1
11111
1....
11111""",
"""11111
....1
11111
....1
11111""",
"""1...1
1...1
11111
....1
....1""",
"""11111
1....
11111
....1
11111""",
"""11111
1....
11111
1...1
11111""",
"""11111
....1
....1
....1
....1""",
"""11111
1...1
11111
1...1
11111""",
"""11111
1...1
11111
....1
11111"""]
#Obtain the 25 character list, for each digit, that represents the digit.
def make_digit(raw_digit):
return [1 if c == '1' else 0
for row in raw_digit.split("\n")
for c in row.strip()]
inputs = [make_digit(digit) for digit in raw_digits]
# one-hot encode digits 0-9
targets = [[1 if i == j else 0 for i in range(10)] for j in range(10)]
random.seed(0) # to get repeatable results
input_size = 25 # each input is a vector of length 25
num_hidden = 5 # we'll have 5 neurons in the hidden layer
output_size = 10 # we need 10 outputs for each input
###Output
_____no_output_____
###Markdown
Randomly initialize weights
###Code
# each hidden neuron has one weight per input, plus a bias weight
hidden_layer = [[random.random() for _ in range(input_size + 1)]
for _ in range(num_hidden)]
# each output neuron has one weight per hidden neuron, plus a bias weight
output_layer = [[random.random() for _ in range(num_hidden + 1)]
for _ in range(output_size)]
# the network starts out with random weights
network = [hidden_layer, output_layer]
for _ in range(10000):
for input_vector, target_vector in zip(inputs, targets):
backpropagate(network, input_vector, target_vector)
def predict(input):
return feed_forward(network, input)[-1]
predict(inputs[7])
predict([0,1,1,1,0,
0,0,0,1,1,
0,0,1,1,0,
0,0,0,1,1,
0,1,1,1,0])
predict([0,1,1,1,0, # .@@@.
1,0,0,1,1, # @..@@
0,1,1,1,0, # .@@@.
1,0,0,1,1, # @..@@
0,1,1,1,0]) # .@@@.
import matplotlib
def patch(x, y, hatch, color):
"""
return a matplotlib 'patch' object with the specified
location, crosshatch pattern, and color
"""
return matplotlib.patches.Rectangle((x - 0.5, y - 0.5),
1, 1,
hatch=hatch,
fill=False,
color=color)
###Output
_____no_output_____
###Markdown
Displaying the Hidden Layer Display weights on 1st neuron
###Code
weights = network[0][0]
abs_weights = [abs(w) for w in weights]
grid = [abs_weights[row:(row+5)]
for row in range(0,25,5)]
ax = plt.gca()
ax.imshow(grid,
cmap=matplotlib.cm.binary,
interpolation='none')
# cross-hatch the negative weights
for i in range(5):
for j in range(5):
if weights[5*i + j] < 0:
ax.add_patch(patch(j, i, '/', "white"))
ax.add_patch(patch(j, i, '\\', "black"))
plt.show()
###Output
_____no_output_____
###Markdown
Display the entire hidden layer
###Code
plt.subplots_adjust(wspace=0.5)
fig, ax = plt.subplots(figsize=(18, 10))
for k in range(len(network[0])):
weights = network[0][k]
abs_weights = [abs(w) for w in weights]
grid = [abs_weights[row:(row+5)] for row in range(0,25,5)]
ax=plt.subplot(1, 5, k+1)
plt.gca().set_xticks([0,1,2,3,4])
plt.gca().set_yticks([0,1,2,3,4])
ax.imshow(grid,
cmap=matplotlib.cm.binary,
interpolation='none')
# cross-hatch the negative weights
for i in range(5):
for j in range(5):
if weights[5*i + j] < 0:
ax.add_patch(patch(j, i, '/', "white"))
ax.add_patch(patch(j, i, '\\', "black"))
plt.show()
###Output
_____no_output_____
###Markdown
Working with some more inputs
###Code
left_column_only = [1, 0, 0, 0, 0] * 5
print(feed_forward(network, left_column_only)[0][0])
center_middle_row=[0,0,0,0,0]*2+[0,1,1,1,0]+[0,0,0,0,0]*2
print(feed_forward(network, center_middle_row)[0][0])
right_column_only = [0, 0, 0, 0, 1] * 5
print(feed_forward(network, right_column_only)[0][0])
my_three = [0,1,1,1,0, # .@@@.
0,0,0,1,1, # ...@@
0,0,1,1,0, # ..@@.
0,0,0,1,1, # ...@@
0,1,1,1,0] # .@@@.
hidden, output = feed_forward(network, my_three)
###Output
_____no_output_____ |
notebooks/.ipynb_checkpoints/A2. Linear Regression - Data Exploration - Lending Club-checkpoint.ipynb | ###Markdown
Linear Regression Data Exploration: Lending Club=========*** How can I predict interest rates based on borrower and loan attributes?The [Lending Club](http://www.lendingclub.com) is a peer-to-peer lending site where members make loans to each other.The site makes anonymized data on loans and borrowers publicly available. We're going to use these data to explore how the interest rate charged on loans depends on various factors.We want to explore these data, try to gain some insights into what might be useful in creating a linear regression model, and to separate out "the noise".We follow these steps, something we will do in future for other data sets as well.1. Browse the data 2. Data cleanup 3. Visual exploration 4. Model derivation I. Browse Data The data have the following variables (with data type and explanation of meaning) * __Amount.Requested__ - _numeric_. The amount (in dollars) requested in the loan application. * __Amount.Funded.By.Investors__ - _numeric_. The amount (in dollars) loaned to the individual. * Interest.rate – character. The lending interest rate charged to the borrower. * Loan.length - character. The length of time (in months) of the loan. * Loan.Purpose – categorical variable. The purpose of the loan as stated by the applicant. * Debt.to.Income.Ratio – character The % of consumer’s gross income going toward paying debts. * State - character. The abbreviation for the U.S. state of residence of the loan applicant. * Home.ownership - character. Indicates whether the applicant owns, rents, or has a mortgage. * Monthly.income - categorical. The monthly income of the applicant (in dollars). * FICO.range – categorical (expressed as a string label e.g. “650-655”). A range indicating the applicants FICO score. * Open.CREDIT.Lines - numeric. The number of open lines of credit at the time of application. * Revolving.CREDIT.Balance - numeric. The total amount outstanding all lines of credit. * Inquiries.in.the.Last.6.Months - numeric. Number of credit inquiries in the previous 6 months. * Employment.Length - character. Length of time employed at current job. II. Data Cleanup We find the data are "messy" i.e aren't cleanly prepared for import - for instance numeric columns might have some strings in them. This is very common in raw data especially that obtained from web sites.Let's take a look. we're going to look at the first five rows of some specific columns that show the data dirtiness issues.
###Code
# first we ingest the data from the source on the web
# this contains a reduced version of the data set from Lending Club
import pandas as pd
loansData = pd.read_csv('https://spark-public.s3.amazonaws.com/dataanalysis/loansData.csv')
loansData['Interest.Rate'][0:5] # first five rows of Interest.Rate
loansData['Loan.Length'][0:5] # first five rows of Loan.Length
###Output
_____no_output_____
###Markdown
We see here that:* the interest rate information has "%" symbols in it.* loan length has " months" in itOther than that we can also see (exploration exercise):* there are a couple of values that are so large they must be typos* some values are missing "NA" values i.e. not available.* the FICO Range is really a numeric entity but is represented as a categorical variable in the data.
###Code
loansData['FICO.Range'][0:5] # first five rows of FICO.Range
###Output
_____no_output_____
###Markdown
FICO Range is represented as a categorical variable in the data.We need to change the categorical variable for FICO Range into something numeric so that we can use it in our calculations. As it stands, the values are merely labels, and while they convey meaning to humans, our software can't interpret them as the numbers they really represent.So as a first step, we convert them from categorical variables to strings. So the abstract entity 735-739 becomes a string "735-739".Then we parse the strings so that a range such as "735-739" gets split into two numbers (735,739).Finally we pick a single number to represent this range. We could choose a midpoint but since the ranges are narrow we can get away with choosing one of the endpoints as a representative. Here we arbitrarily pick the lower limit and with some imperious hand waving, assert that it is not going to make a major difference to the outcome.In a further flourish of imperiousness we could declare that "the proof is left as an exercise to the reader". But in reality there is really no such formal "proof" other than trying it out in different ways and convincing oneself. If we wanted to be mathematically conservative we could take the midpoint of the range as a representative and this would satisfy most pointy-haired mathematician bosses that "Data Science Dilbert" might encounter. To summarize - cleaning our data involves:* removing % signs from rates* removing the word ” months" from loan length.* managing outliers - remove such rows in this case* managing NA - remove such rows in this caseThere is one especially high outlier with monthly income > 100K$+. This is likely to be a typo and is removed as a data item. There is also one data item with all N/A - this is also removed. ExerciseActually perform each of the above steps on the dataset i.e.* import the data* remove the '%' suffix from each row* remove the ' months' suffix from each row* remove the outlier rows* remove rows with NASave your code in a reusable manner - these are steps you'll be doing repeatedly. Visual Exploration Now we are going to follow a standard set of steps in exploring data. We apply the following simple visualizations. This is something we will typically also do for other data sets we encounter in other explorations. HistogramA histogram shows us the shape of the distribution of values for a **single** variable.On the x-axis we have the variable under question, divided into buckets or bins. This is a key feature of a histogram.The bin size is adjustable and different bin sizes give different information. A large bin size gives us an idea of the coarser grained structure of the distribution while a smaller bin size will shine light on the finer details of the distribution. In either case we can compare distributions, or quickly identify some key hints that tell use how best to proceed.With the distribution of FICO scores we see the histogram below.
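Before that, a minimal sketch of the cleanup exercise above is shown in the next cell (the outlier rule and the choice of the lower FICO endpoint are assumptions, not the course's reference solution); the plots that follow use loanf.csv, a pre-cleaned copy of these data.
###Code
# Hypothetical cleanup sketch for the exercise above -- column names as in loansData
loansData['Interest.Rate'] = loansData['Interest.Rate'].str.rstrip('%').astype(float)
loansData['Loan.Length'] = loansData['Loan.Length'].str.replace(' months', '').astype(int)
loansData['FICO.Score'] = loansData['FICO.Range'].str.split('-').str[0].astype(int)
loansData = loansData[loansData['Monthly.Income'] < 100000]  # drop the implausibly large income (likely a typo)
loansData = loansData.dropna()                               # drop rows with NA values
###Output
_____no_output_____
###Markdown
Keeping the rate as a float and the FICO lower bound as an int leaves both columns numeric for the exploration below; the FICO histogram mentioned above is plotted next.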
###Code
import matplotlib.pyplot as plt
import pandas as pd
plt.figure()
loansmin = pd.read_csv('../datasets/loanf.csv')
fico = loansmin['FICO.Score']
p = fico.hist()
###Output
_____no_output_____
###Markdown
Why do we look at FICO score? Because we know from domain knowledge that this is the primary determinant of interest rate. The histogram shows us that the distribution is not a normal or gaussian distribution but that there are some other factors that might be affecting or distorting the shape of the distribution away from the bell curve. We want to dig a little deeper. Box PlotNext we take a box plot which allows us to quickly look at the distribution of interest rates based on each FICO score range.
###Code
import matplotlib.pyplot as plt
import pandas as pd
plt.figure()
loansmin = pd.read_csv('../datasets/loanf.csv')
p = loansmin.boxplot('Interest.Rate','FICO.Score')
q = p.set_xticklabels(['640','','','','660','','','','680','','','','700',
'720','','','','740','','','','760','','','','780','','','','800','','','','820','','','','840'])
q0 = p.set_xlabel('FICO Score')
q1 = p.set_ylabel('Interest Rate %')
q2 = p.set_title(' ')
###Output
_____no_output_____
###Markdown
First of all this tells us that there is a general downward trend in interest rate for higher FICO scores.But, given the same range of FICO scores we see a range of interest rates not a single value - so it appears there are other factors determining interest rate, given the same FICO score range. We want to investigate the impact of these other drivers and quantify this impact. What might these be? Let's use a little domain knowledge again. We know interest rate is based on risk to the borrower: the greater the risk, the greater the interest rate charged to compensate for the risk. Another factor that might affect risk is the size of the loan - the larger the amount the greater the risk of non-payment and also the greater the negative impact of actual default.We want to look at multiple factors and how they might affect the interest rate. A great way to look at multiple factors simultaneously is the scatterplot matrix. We are going to use this as the next step in visual exploration. Scatterplot MatrixBut first what is it? The scatterplot matrix is a grid of plots of multiple variables against each other. It shows the relationship of each variable to the others. The ones on the diagonal don't fit this pattern. Why not? What does it mean to find the relationship of something to itself, in this context. Not much, since we are trying to determine the impact of some variable on **another** variable. We're going to look at a scatterplot matrix of the five variables in our data.
###Code
## TRY THIS!
import pandas as pd
loansmin = pd.read_csv('../datasets/loanf.csv')
a = pd.scatter_matrix(loansmin,alpha=0.05,figsize=(10,10), diagonal='hist')
## Click on the line above
## Change 'hist' to 'kde' then hit shift-enter, with the cursor still in this box
## The plot will redraw - it takes a while. While it is recomputing you will see a
## message-box that says 'Kernel Busy' near the top right corner
## You can change the code and hit shift-enter to re-execute the code
## Try changing the (10,10) to (8,8) and (12,12)
## Try changing the alpha value from 0.05 to 0.5
## How does this change in alpha change your ability to interpret the data?
## Feel free to try other variations.
## If at any time you scramble the code and forget the syntax
## a copy of the original code is below. Copy and paste it in place.
## Remember to remove the hashmarks.
## a = pd.scatter_matrix(loansmin, alpha=0.05, figsize=(10,10), diagonal='hist')
###Output
_____no_output_____
###Markdown
In this diagram, the boxes on the diagonal contain histogram plots of the respective variable itself.So if the 3rd variable is Loan Amount then the third row and third column are the Loan Amount column and row. And the third element down the diagonal is the histogram of the Loan Amount. To see how Loan Amount (3rd) affects Interest Rate (1st) then we look for the intersection of the 3rd row and the 1st column. We also notice that we could have looked for the intersection of the 3rd column and 1st row. They have the same plot. The scatterplot matrix plot is visually symmetric about the diagonal.Where there is some significant, useful effect we will see a noticeable trend in the scatterplot at the intersection. Where there is none we will see no noticeable trend. What do the last two sentences mean in practice?Let's compare two plots: the first one at the intersection of 1st row and 2nd column, and the second at the intersection of 1st row 4th column.In the first, FICO score shows an approximate but unmistakeable linear trend.In the second, Monthly Income shows no impact as we move along the x-axis. All the dots are bunched up near one end but show no clear, linear trend like the first one. Similarly there is no obvious variation in the plot for Loan Length while there is a distinct but increasing trend trend also in the plot for Loan Amount.So what does this suggest? It suggests that we should use FICO and Loan Amount in our model as independent variables, while Monthly Income and Loan Length don't seem to be too useful as independent variables. ConclusionSo at the end of this data alchemy exercise we have distilled our variables into two beakers - one has what we believe is relevant - the data nuggets, and the other, the data dross.....the variables that have no visible impact on our dependent variable.We're going to refine our output even further into a model in the next step - the analysis.
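As a preview of that next step, the cell below fits such a model; it assumes the cleaned file's columns are named Interest.Rate, FICO.Score and Loan.Amount, which is an assumption about loanf.csv rather than something established above.
###Code
# Hypothetical preview of the modelling step: interest rate explained by FICO score and loan amount.
from sklearn.linear_model import LinearRegression

model_df = loansmin[['Interest.Rate', 'FICO.Score', 'Loan.Amount']].dropna()
X = model_df[['FICO.Score', 'Loan.Amount']]
y = model_df['Interest.Rate']
reg = LinearRegression().fit(X, y)
reg.intercept_, reg.coef_
###Output
_____no_output_____
###Markdown
The two coefficients put numbers on the visual trends from the scatterplot matrix: the rate should fall as FICO score rises and rise with the amount requested.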
###Code
from IPython.core.display import HTML
def css_styling():
styles = open("../styles/custom.css", "r").read()
return HTML(styles)
css_styling()
###Output
_____no_output_____ |
.ipynb_checkpoints/weather_data_2_download-checkpoint.ipynb | ###Markdown
Table of Contents 1 How to obtain weather data from MERRA-2 (Part 2): Download raw data1.1 About this Notebook1.1.1 Other notebooks1.1.2 License1.1.3 Table of contents2 Script Setup3 Download raw data3.1 Input3.1.1 Parameters choices3.1.2 Timeframe3.1.3 Geography coordinates3.2 Subsetting data3.3 Downloading data4 Setting up dataframe(s)4.1 Concatenating/combining individual files4.2 First look at the final data frame structure and format4.3 Saving dataframe How to obtain weather data from MERRA-2 (Part 2): Download raw data About this NotebookThis Jupyter Notebook is part of the [Open Power System Data Project](http://www.open-power-system-data.org) and is written in Python 3. This is **Part 2** of the notebook. It aims to download data from the MERRRA-2 weather dataset.--- Other notebooks**Part 1**: Introduction**Part 3**: Processing raw data and compiling the data package LicenseThis notebook is published under [The MIT License](https://opensource.org/licenses/mit-license.php) license:Copyright (c) 2016 [copyright holders]Permission is hereby granted, free of charge, to any person obtaining a copy of this software and associated documentation files (the "Software"), to deal in the Software without restriction, including without limitation the rights to use, copy, modify, merge, publish, distribute, sublicense, and/or sell copies of the Software, and to permit persons to whom the Software is furnished to do so, subject to the following conditions:The above copyright notice and this permission notice shall be included in all copies or substantial portions of the Software.THE SOFTWARE IS PROVIDED "AS IS", WITHOUT WARRANTY OF ANY KIND, EXPRESS OR IMPLIED, INCLUDING BUT NOT LIMITED TO THE WARRANTIES OF MERCHANTABILITY, FITNESS FOR A PARTICULAR PURPOSE AND NONINFRINGEMENT. IN NO EVENT SHALL THE AUTHORS OR COPYRIGHT HOLDERS BE LIABLE FOR ANY CLAIM, DAMAGES OR OTHER LIABILITY, WHETHER IN AN ACTION OF CONTRACT, TORT OR OTHERWISE, ARISING FROM, OUT OF OR IN CONNECTION WITH THE SOFTWARE OR THE USE OR OTHER DEALINGS IN THE SOFTWARE. Table of contents 1.1 About this Notebook1.1.1 Other notebooks1.1.2 License1.1.3 Table of contents2 Script Setup3 Download raw data3.1 Input3.1.1 Parameters choices3.1.2 Timeframe3.1.3 Geography coordinates3.2 Subsetting data3.3 Downloading data4 Setting up dataframe(s)4.1 Concatenating/combining individual files4.2 First look at the final data frame structure and format4.3 Saving dataframe *** Script Setup
###Code
# importing all necessary Python libraries for this Script
import pandas as pd
import xarray as xr
import numpy as np
import requests
import logging
import os
from datetime import datetime
from calendar import monthrange
from opendap_download.multi_processing_download import DownloadManager
import math
from functools import partial
import re
# Set up a log
logging.basicConfig(level=logging.DEBUG, handlers=[logging.StreamHandler()])
log = logging.getLogger('notebook')
# nb_root_logger = logging.getLogger()
# formatter = logging.Formatter('%(asctime)s - %(name)s - %(levelname)s'\
# '- %(message)s',datefmt='%d %b %Y %H:%M:%S')
# nb_root_logger.handlers[0].setFormatter(formatter)
###Output
_____no_output_____
###Markdown
--- Download raw data Input Parameters choices Definition of input parameters for creating the URL with OPeNDAP. Which parameters shall be included in the weather data package?**general parameters**- tavg1_2d_slv_Nx = M2T1NXSLV: - H850: Height at 850 hPa - H500: Height at 500 hPa - H250: Height at 250 hPa - DISPH: Displacement height - ftp://goldsmr4.sci.gsfc.nasa.gov/data/s4pa/MERRA2/M2T1NXSLV.5.12.4/ - via OPenDAP: http://goldsmr4.sci.gsfc.nasa.gov/opendap/MERRA2/M2T1NXSLV.5.12.4/contents.html **Wind Speed** - tavg1_2d_slv_Nx = M2T1NXSLV - U2M: Eastward wind at 2 m above displacement height - U10M: Eastward wind at 10 m above displacement height - U50M: Eastward wind at 50 m above surface - U850: Eastward wind at 850 hPa - U500: Eastward wind at 500 hPa - U250: Eastward wind at 250 hPa - V2M: Northward wind at 2 m above displacement height - V10M: Northward wind at 10 m above displacement height - V50M: Northward wind at 50 m above surface - V850: Northward wind at 850 hPa - V500: Northward wind at 500 hPa - V250: Northward wind at 250 hPa - ftp://goldsmr4.sci.gsfc.nasa.gov/data/s4pa/MERRA2/M2T1NXSLV.5.12.4/ - via OPenDAP: http://goldsmr4.sci.gsfc.nasa.gov/opendap/MERRA2/M2T1NXSLV.5.12.4/contents.html - tavg1_2d_flx_Nx = M2T1NXFLX: - Z0M: Roughness length, momentum - ftp://goldsmr4.sci.gsfc.nasa.gov/data/s4pa/MERRA2/M2T1NXFLX.5.12.4/ - via OPenDAP: http://goldsmr4.sci.gsfc.nasa.gov/opendap/MERRA2/M2T1NXFLX.5.12.4/contents.html Planned for later: **Temperature** (tavg1_2d_slv_Nx = M2T1NXSLV)- TS: Surface skin temperature- T2M: Temperature at 2 m above the displacement height- T10M: Temperature at 10 m above the displacement height- T850: Temperature at 850 hPa- T500: Temperature at 500 hPa- T250: Temperature at 250 hPa- ftp://goldsmr4.sci.gsfc.nasa.gov/data/s4pa/MERRA2/M2T1NXSLV.5.12.4/- via OPenDAP: http://goldsmr4.sci.gsfc.nasa.gov/opendap/MERRA2/M2T1NXSLV.5.12.4/contents.html Planned for later: **Solar Radiation** (tavg1_2d_rad_Nx = M2T1NXRAD)- SWGDN: Surface incident shortwave flux (incident = incoming radiation)- SWGDNCLR: Surface incident shortwave flux assuming clear sky- SWGNT: Surface net downward shortwave flux- SWGNTCLR: Surface net downward shortwave flux assuming clear sky- SWGNTCLN: Surface net downward shortwave flux assuming clean sky- SWGNTCLRCLN: Surface net downward shortwave flux assuming clear clean sky- SWTDN: top-of-atmosphere incoming shortwave flux- _possibly more_- ftp://goldsmr4.sci.gsfc.nasa.gov/data/s4pa/MERRA2/M2T1NXRAD.5.12.4/- via OPenDAP: http://goldsmr4.sci.gsfc.nasa.gov/opendap/MERRA2/M2T1NXRAD.5.12.4/contents.html Parameter selection These are the possible parameters:* wind* solar radiation* temperature If you want to select more than one parameter, separate them with commas. For example: wind, solar radiation, temperature
###Code
# Getting user input
possible_params = ['wind', 'solar radiation', 'temperature']
def test_user_input(para):
'''Tests if too few or too many parameters are provided
and if the parameters are legit.'''
if len(para) < 1 or len(para) > 3:
return False
for p in para:
if p not in possible_params:
return False
else:
pass
return True
# params = input('Please provide atleast one parameter: ')
# params = params.split(',')
# removes whitespace in front of the provided parameters.
# params = [p.lstrip() for p in params]
# if not test_user_input(params):
# raise Exception('Something is wrong with the provided parameters: '+str(params))
# params
# for testing
params = ['wind']
# "translation" of input to list of needed dataset parameters (see above)
# download general parameters always
# download other parameters/datasets only when needed (see list above)
###Output
_____no_output_____
###Markdown
Timeframe Definition of the desired timespan for which data are needed. (Optional: daily, monthly, yearly aggregation)
###Code
# TODO: User input
# User input of timespan
download_year = 2014 # Testcase Schlewswig Holstein, 2014, hourly data
download_month = '01'
download_day = '01'
###Output
_____no_output_____
###Markdown
Geography coordinates Definition of the desired coordinates (rectangular area) for which data are needed -> corner coordinates input
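The corner coordinates have to be translated into grid indices before they can be used in the OPeNDAP URL; the cell below is a small sketch of that translation, assuming the MERRA-2 grid starts at -90° latitude (0.5° steps) and -180° longitude (0.625° steps).
###Code
# Hypothetical helper: WGS84 corner coordinates -> MERRA-2 grid indices.
# Assumes the grid origin is at lat -90 (0.5 degree steps) and lon -180 (0.625 degree steps).
def coords_to_grid_indices(lat_1, lat_2, lon_1, lon_2):
    lat_idx = sorted(int(round((lat + 90) / 0.5)) for lat in (lat_1, lat_2))
    lon_idx = sorted(int(round((lon + 180) / 0.625)) for lon in (lon_1, lon_2))
    return lat_idx, lon_idx

# Schleswig-Holstein bounding box from the comments below
coords_to_grid_indices(55.036823, 53.366266, 11.349297, 7.887088)
###Output
_____no_output_____
###Markdown
Under this assumption, the hard-coded index ranges used later in the notebook ([358:1:360] for lat and [573:1:575] for lon) do not correspond to the Schleswig-Holstein box; they look like small test ranges near one edge of the grid.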
###Code
# User input of coordinates
# ------
# Example: Schleswig-Holstein (lat/lon in WGS84 decimal degrees)
# north-east point: 55.036823°N, 11.349297°E
# south-west point: 53.366266°N, 7.887088°E
# one point example - not in use
lat = 0
lon = 0
###Output
_____no_output_____
###Markdown
Subsetting data Combining parameter choices above/translation according to OPenDAP guidelines into URL-appendix
###Code
'''
"translation" of input to desired URL parameter
Creation of links to relevant datasets (see above) with chosen parameters
add URL parameter to dataset link
dataset links see above
Example scheme for the wind dataset tavg1_2d_slv_Nx (split into daily files):
Link to the dataset for 1980-01-01: http://goldsmr4.sci.gsfc.nasa.gov/opendap/MERRA2/M2T1NXSLV.5.12.4/1980/01/MERRA2_100.tavg1_2d_slv_Nx.19800101.nc4
-> downloads the complete dataset (all parameters, whole world, 24 hours)
-> appending .html = manual subsetting
e.g. time [0:1:23] = every time step (here: hour) between 0:00 and 23:00, [0:2:23] = only every 2nd hour, etc.
e.g. lat/lon are given as steps
- latitude (north/south) in 360 steps of 0.5°
- longitude (east/west) in 575 steps of 0.625°
'''
def translate_year_to_file_number(year):
"""
The file names basically consist of a number and a meta data string.
The number changes over the year. 1980
until 1991 it is 100, 1992 until 2000 it is 200, 2001 until 2010 it is
300 and from 2011 until now it is 400.
"""
file_number = ''
if year >= 1980 and year < 1992:
file_number = '100'
elif year >= 1992 and year < 2001:
file_number = '200'
elif year >= 2001 and year < 2011:
file_number = '300'
elif year >= 2011:
file_number = '400'
else:
raise Exception('The specified year is out of range.')
return file_number
# # Build the parameter
# url_params = ['U2M','U10M','U50M','V2M','V10M','V50M', 'DISPH']
# # time
# url_params = map(lambda x: x + '[0:1:23]' , url_params)
# # lat
# url_params = map(lambda x: x + '[358:1:360]', url_params)
# # lon
# url_params = map(lambda x: x + '[573:1:575]', url_params)
# url_params = ','.join(url_params)
def generate_url_params(parameter, time_para, lat_para, lon_para):
'''Creates a string containing all the parameters in query form'''
parameter = map(lambda x: x + time_para, parameter)
parameter = map(lambda x: x + lat_para, parameter)
parameter = map(lambda x: x + lon_para, parameter)
return ','.join(parameter)
def generate_download_links(download_years, base_url, dataset_name, url_params):
'''
Generates the links for the download.
download_years: The years you want to download as array.
dataset_name: The name of the data set. For example tavg1_2d_slv_Nx
'''
urls = []
for y in download_years: # only for testing
# build the file_number
y_str = str(y)
        file_num = translate_year_to_file_number(y)
for m in range(1,13):
# build the month string: for the month 1 - 9 it starts with a leading 0. zfill solves that problem
m_str = str(m).zfill(2)
# monthrange returns the first weekday and the number of days in a month. Also works for leap years.
_, nr_of_days = monthrange(y, m)
for d in range(1,nr_of_days+1):
d_str = str(d).zfill(2)
file_name = 'MERRA2_' + file_num + '.'+dataset_name+'.' + y_str + m_str + d_str + '.nc4'
query = base_url + y_str + '/'+ m_str + '/' + file_name + '.nc4?' + url_params
urls.append(query)
return urls
parameter = generate_url_params(['U2M','U10M','U50M',
'V2M','V10M','V50M','DISPH'],
'[0:1:23]', '[358:1:360]', '[573:1:575]')
BASE_URL = 'http://goldsmr4.sci.gsfc.nasa.gov:80/opendap/MERRA2/M2T1NXSLV.5.12.4/'
generated_URL = generate_download_links([download_year], BASE_URL,
'tavg1_2d_slv_Nx',
parameter)
log.debug('Queries generated: ' + str(len(generated_URL)))
log.debug(generated_URL[0])
###Output
_____no_output_____
###Markdown
Download testing
###Code
from opendap_download.multi_processing_download import DownloadManager
dlm = DownloadManager()
url = generated_URL[0]
log.debug(url)
filename = dlm.get_filename(url)
log.debug(filename)
###Output
_____no_output_____
###Markdown
Downloading data
###Code
from opendap_download.multi_processing_download import DownloadManager
# download data (one file per day and dataset) with links to local directory
# username = "TestUser"
# password = "TestPassword"
# download_manager.set_username_and_password(username, password)
download_manager = DownloadManager()
download_manager.read_credentials_from_yaml('credentials.yaml')
download_manager.download_path = 'download'
download_manager.download_urls = generated_URL
%time download_manager.start_download(4)
###Output
_____no_output_____
###Markdown
Get roughness from different file
###Code
roughness_para = generate_url_params(['Z0M'], '[0:1:23]',
'[358:1:360]', '[573:1:575]')
ROUGHNESS_BASE_URL = 'http://goldsmr4.sci.gsfc.nasa.gov/opendap/MERRA2/M2T1NXFLX.5.12.4/'
roughness_links = generate_download_links([download_year], ROUGHNESS_BASE_URL,
'tavg1_2d_flx_Nx', roughness_para)
download_manager.download_path = 'roughness_download'
download_manager.download_urls = roughness_links
%time download_manager.start_download(4)
###Output
_____no_output_____
###Markdown
--- Get lat and lon dimensions
###Code
lat_lon_dimension_para = 'lat[358:1:360],lon[573:1:575]'
# Creating the download url.
dimension_url = 'http://goldsmr4.sci.gsfc.nasa.gov:80/opendap/MERRA2/M2T1NXSLV.5.12.4/2014/01/MERRA2_400.tavg1_2d_slv_Nx.20140101.nc4.nc4?'
dimension_url = dimension_url + lat_lon_dimension_para
log.debug(dimension_url)
download_manager.download_path = 'dimension_scale'
download_manager.download_urls = [dimension_url]
%time download_manager.start_download()
ds_dim = xr.open_dataset(os.path.join('dimension_scale', DownloadManager.get_filename(dimension_url)))
df_dim = ds_dim.to_dataframe()
# df_dim.reset_index(inplace=True)
# df_dim['lat'].tolist()
lat_array = ds_dim['lat'].data.tolist()
lon_array = ds_dim['lon'].data.tolist()
print(type(lat_array))
print(lat_array)
print(lon_array)
###Output
_____no_output_____
###Markdown
--- Setting up wind dataframe
###Code
def extract_date(data_set):
"""
Extracts the date from the file before merging the datasets
"""
# find a match between . and .nc4 that does not have . .
exp = r'(?<=\.)[^\.]*(?=\.nc4)'
try:
f_name = data_set.attrs['HDF5_GLOBAL.Filename']
res = re.search(exp, f_name).group(0)
y, m, d = res[0:4], res[4:6], res[6:8]
date_str = ('%s-%s-%s' % (y, m, d))
data_set = data_set.assign(date=date_str)
return data_set
except KeyError:
# The last dataset is the one all the other sets will be merged into.
# Therefore, no date can be extracted.
return data_set
file_path = os.path.join('download', '*.nc4')
ds_wind = xr.open_mfdataset(file_path,
concat_dim='date',
preprocess=extract_date)
df_wind = ds_wind.to_dataframe()
df_wind.reset_index(inplace=True)
# TODO: Do not hardcode the value for start_date
start_date = datetime.strptime('2014-01-01', '%Y-%m-%d')
def calculate_datetime(d_frame):
cur_date = datetime.strptime(d_frame['date'], '%Y-%m-%d')
hour = int(d_frame['time'])
delta = abs(cur_date - start_date).days
date_time_value = (delta * 24) + (hour)
return date_time_value
df_wind['date_time'] = df_wind.apply(calculate_datetime, axis=1)
# Siren windspeed
def calculate_windspeed(d_frame, idx_u, idx_v):
"""
Calculates the windspeed. The returned unit is m/s
"""
um = int(d_frame[idx_u])
vm = int(d_frame[idx_v])
speed = math.sqrt((um ** 2) + (vm ** 2))
return round(speed, 2)
calc_windspeed_2m = partial(calculate_windspeed, idx_u='U2M', idx_v='V2M')
calc_windspeed_10m = partial(calculate_windspeed, idx_u='U10M', idx_v='V10M')
calc_windspeed_50m = partial(calculate_windspeed, idx_u='U50M', idx_v='V50M')
df_wind['v_2m'] = df_wind.apply(calc_windspeed_2m, axis=1)
df_wind['v_10m'] = df_wind.apply(calc_windspeed_10m, axis=1)
df_wind['v_50m'] = df_wind.apply(calc_windspeed_50m, axis=1)
df_wind
###Output
_____no_output_____
###Markdown
Setting up Roughness dataframe
###Code
file_path = os.path.join('roughness_download', '*.nc4')
ds_rough = xr.open_mfdataset(file_path, concat_dim='date',
preprocess=extract_date)
df_rough = ds_rough.to_dataframe()
df_rough.reset_index(inplace=True)
df_rough
###Output
_____no_output_____
###Markdown
Concatenating/combining individual filesCombine individual (daily) dataset files to one single dataframe
###Code
df = pd.merge(df_wind, df_rough, on=['date', 'lat', 'lon', 'time'])
###Output
_____no_output_____
###Markdown
--- Structure the dataframe, add and remove columns
###Code
# Calculate height for v_2m and v_10m (2 + DISPH or 10 + DISPH)
df['h_v1'] = df.apply((lambda x:int(x['DISPH']) + 2), axis=1)
df['h_v2'] = df.apply((lambda x:int(x['DISPH']) + 10), axis=1)
df['v_100m'] = np.nan
df.drop('DISPH', axis=1, inplace=True)
df.drop(['time', 'date'], axis=1, inplace=True)
df.drop(['U2M', 'U10M', 'U50M', 'V2M', 'V10M', 'V50M'], axis=1, inplace=True)
###Output
_____no_output_____
###Markdown
--- Renaming the columns
###Code
# TODO: RENAME
# Changing lat lon from 0/1 etc to actual values using the values
# extracted earlier.
df['lat'] = df['lat'].apply(lambda x: lat_array[int(x)])
df['lon'] = df['lon'].apply(lambda x: lon_array[int(x)])
rename_map = {'date_time': 'date/time',
'v_2m': 'v1',
'v_10m': 'v2',
'Z0M': 'z0'
}
df.rename(columns=rename_map, inplace=True)
# Change order of the columns
cols = ['date/time', 'lat', 'lon',
'v1', 'v2', 'v_50m', 'v_100m',
'h_v1', 'h_v2', 'z0']
df = df[cols]
###Output
_____no_output_____
###Markdown
--- First look at the final data frame structure and format
###Code
df.info()
###Output
_____no_output_____
###Markdown
Saving dataframeSave the final dataframe locally
###Code
df.to_csv('weather_script_result.csv', index=False)
###Output
_____no_output_____ |
Day_017_Introduction_of_Feature_Engineering.ipynb | ###Markdown
Example: (Kaggle) house price prediction, simplified version https://www.kaggle.com/c/house-prices-advanced-regression-techniques***- Below is a simplified example of house price prediction- It uses a minimal amount of feature engineering plus a linear regression model to make predictions, and finally outputs a prediction file that can be submitted to Kaggle
###Code
# Load the required packages
import pandas as pd
import numpy as np
from sklearn.preprocessing import LabelEncoder, MinMaxScaler
# Read the training and test data
data_path = 'data/data2/'
df_train = pd.read_csv(data_path + 'house_train.csv.gz')
df_test = pd.read_csv(data_path + 'house_test.csv.gz')
print(df_train.shape)
# Training needs train_X, train_Y / the prediction output needs ids (to identify each prediction) and test_X
# Here we first pull out train_Y and ids, then merge the data belonging to train_X and test_X into df so that feature engineering can be done first
train_Y = np.log1p(df_train['SalePrice'])
ids = df_test['Id']
df_train = df_train.drop(['Id', 'SalePrice'] , axis=1)
df_test = df_test.drop(['Id'] , axis=1)
df = pd.concat([df_train,df_test])
df.head()
%%capture
# Feature engineering, simplified version: fill all missing values with -1, apply LabelEncoder to every categorical column, then apply MinMaxScaler together with the numeric columns
# The details of this block will be explained in later lessons
LEncoder = LabelEncoder()
MMEncoder = MinMaxScaler()
for c in df.columns:
df[c] = df[c].fillna(-1)
if df[c].dtype == 'object':
df[c] = LEncoder.fit_transform(list(df[c].values))
df[c] = MMEncoder.fit_transform(df[c].values.reshape(-1, 1))
df.head()
# Split the transformed data df back into train_X and test_X
train_num = train_Y.shape[0]
train_X = df[:train_num]
test_X = df[train_num:]
# Use a linear regression model
# Train the model on train_X, train_Y, then make the prediction pred for test_X
from sklearn.linear_model import LinearRegression
estimator = LinearRegression()
estimator.fit(train_X, train_Y)
pred = estimator.predict(test_X)
# Merge the prediction pred with the IDs (ids) kept earlier and write the result to a file
# You can download and open house_baseline.csv to inspect the result and understand the output format of the predictions
# The csv files produced by this example and by the homework can both be uploaded as answers for this Kaggle competition; try uploading one to get familiar with the Kaggle interface
pred = np.expm1(pred)
sub = pd.DataFrame({'Id': ids, 'SalePrice': pred})
sub.to_csv('data/data2/house_baseline.csv', index=False)
###Output
_____no_output_____ |
Day_004_HW.ipynb | ###Markdown
Homework: apply One Hot encoding to the data fragment sub_train below, and observe how the number of columns (using shape) and the column names (using head) change before and after the conversion
###Code
sub_train = pd.DataFrame(app_train['WEEKDAY_APPR_PROCESS_START'])
print(sub_train.shape)
sub_train.head()
"""
Your Code Here
"""
print('Unique values of sub_train: %s' % sub_train['WEEKDAY_APPR_PROCESS_START'].unique())
t_sub_train = pd.get_dummies(sub_train)
t_sub_train
###Output
Unique values of sub_train: ['WEDNESDAY' 'MONDAY' 'THURSDAY' 'SUNDAY' 'SATURDAY' 'FRIDAY' 'TUESDAY']
###Markdown
Practice time. There are many ways to manipulate data; in the rest of this marathon we will introduce the most commonly used operations. Participants may first imagine for themselves: when seeing a dataset for the first time, what do we usually want to know? Ex: how to get the number of rows and columns, what columns exist, how many columns there are, how to extract part of the data, and so on. Once we are curious about the data, how do we achieve these goals with code? You can refer to this [basic tutorial](https://bookdata.readthedocs.io/en/latest/base/01_pandas.htmlDataFrame-%E5%85%A5%E9%97%A8) or google it yourself. [Homework goal]- Get familiar with more Python data operations [Homework focus]- List the size of the data (In[4], Hint: shape)- List all columns (In[5], there are several ways)- Extract part of the data (In[6], Hint: loc or iloc)
###Code
!pip uninstall -y kaggle
!pip install --upgrade pip
!pip install kaggle==1.5.6
!kaggle -v
!unzip application_train.csv.zip
!ls ./
import os
import numpy as np
import pandas as pd
# Set data_path
dir_data = './'
f_app = os.path.join(dir_data, 'application_train.csv')
print('Path of read in data: %s' % (f_app))
app_train = pd.read_csv(f_app)
###Output
Path of read in data: ./application_train.csv
###Markdown
If you have no ideas yet, start by trying to answer the questions mentioned in the example above Number of rows and columns of the data
###Code
print(app_train.shape)
app_train.info
app_train.describe()
###Output
_____no_output_____
###Markdown
List all columns
###Code
_, cols = app_train.axes
for i in range(len(cols)):
print('col {} : {}'.format(i, cols[i]))
###Output
col 0 : SK_ID_CURR
col 1 : TARGET
col 2 : NAME_CONTRACT_TYPE
col 3 : CODE_GENDER
col 4 : FLAG_OWN_CAR
col 5 : FLAG_OWN_REALTY
col 6 : CNT_CHILDREN
col 7 : AMT_INCOME_TOTAL
col 8 : AMT_CREDIT
col 9 : AMT_ANNUITY
col 10 : AMT_GOODS_PRICE
col 11 : NAME_TYPE_SUITE
col 12 : NAME_INCOME_TYPE
col 13 : NAME_EDUCATION_TYPE
col 14 : NAME_FAMILY_STATUS
col 15 : NAME_HOUSING_TYPE
col 16 : REGION_POPULATION_RELATIVE
col 17 : DAYS_BIRTH
col 18 : DAYS_EMPLOYED
col 19 : DAYS_REGISTRATION
col 20 : DAYS_ID_PUBLISH
col 21 : OWN_CAR_AGE
col 22 : FLAG_MOBIL
col 23 : FLAG_EMP_PHONE
col 24 : FLAG_WORK_PHONE
col 25 : FLAG_CONT_MOBILE
col 26 : FLAG_PHONE
col 27 : FLAG_EMAIL
col 28 : OCCUPATION_TYPE
col 29 : CNT_FAM_MEMBERS
col 30 : REGION_RATING_CLIENT
col 31 : REGION_RATING_CLIENT_W_CITY
col 32 : WEEKDAY_APPR_PROCESS_START
col 33 : HOUR_APPR_PROCESS_START
col 34 : REG_REGION_NOT_LIVE_REGION
col 35 : REG_REGION_NOT_WORK_REGION
col 36 : LIVE_REGION_NOT_WORK_REGION
col 37 : REG_CITY_NOT_LIVE_CITY
col 38 : REG_CITY_NOT_WORK_CITY
col 39 : LIVE_CITY_NOT_WORK_CITY
col 40 : ORGANIZATION_TYPE
col 41 : EXT_SOURCE_1
col 42 : EXT_SOURCE_2
col 43 : EXT_SOURCE_3
col 44 : APARTMENTS_AVG
col 45 : BASEMENTAREA_AVG
col 46 : YEARS_BEGINEXPLUATATION_AVG
col 47 : YEARS_BUILD_AVG
col 48 : COMMONAREA_AVG
col 49 : ELEVATORS_AVG
col 50 : ENTRANCES_AVG
col 51 : FLOORSMAX_AVG
col 52 : FLOORSMIN_AVG
col 53 : LANDAREA_AVG
col 54 : LIVINGAPARTMENTS_AVG
col 55 : LIVINGAREA_AVG
col 56 : NONLIVINGAPARTMENTS_AVG
col 57 : NONLIVINGAREA_AVG
col 58 : APARTMENTS_MODE
col 59 : BASEMENTAREA_MODE
col 60 : YEARS_BEGINEXPLUATATION_MODE
col 61 : YEARS_BUILD_MODE
col 62 : COMMONAREA_MODE
col 63 : ELEVATORS_MODE
col 64 : ENTRANCES_MODE
col 65 : FLOORSMAX_MODE
col 66 : FLOORSMIN_MODE
col 67 : LANDAREA_MODE
col 68 : LIVINGAPARTMENTS_MODE
col 69 : LIVINGAREA_MODE
col 70 : NONLIVINGAPARTMENTS_MODE
col 71 : NONLIVINGAREA_MODE
col 72 : APARTMENTS_MEDI
col 73 : BASEMENTAREA_MEDI
col 74 : YEARS_BEGINEXPLUATATION_MEDI
col 75 : YEARS_BUILD_MEDI
col 76 : COMMONAREA_MEDI
col 77 : ELEVATORS_MEDI
col 78 : ENTRANCES_MEDI
col 79 : FLOORSMAX_MEDI
col 80 : FLOORSMIN_MEDI
col 81 : LANDAREA_MEDI
col 82 : LIVINGAPARTMENTS_MEDI
col 83 : LIVINGAREA_MEDI
col 84 : NONLIVINGAPARTMENTS_MEDI
col 85 : NONLIVINGAREA_MEDI
col 86 : FONDKAPREMONT_MODE
col 87 : HOUSETYPE_MODE
col 88 : TOTALAREA_MODE
col 89 : WALLSMATERIAL_MODE
col 90 : EMERGENCYSTATE_MODE
col 91 : OBS_30_CNT_SOCIAL_CIRCLE
col 92 : DEF_30_CNT_SOCIAL_CIRCLE
col 93 : OBS_60_CNT_SOCIAL_CIRCLE
col 94 : DEF_60_CNT_SOCIAL_CIRCLE
col 95 : DAYS_LAST_PHONE_CHANGE
col 96 : FLAG_DOCUMENT_2
col 97 : FLAG_DOCUMENT_3
col 98 : FLAG_DOCUMENT_4
col 99 : FLAG_DOCUMENT_5
col 100 : FLAG_DOCUMENT_6
col 101 : FLAG_DOCUMENT_7
col 102 : FLAG_DOCUMENT_8
col 103 : FLAG_DOCUMENT_9
col 104 : FLAG_DOCUMENT_10
col 105 : FLAG_DOCUMENT_11
col 106 : FLAG_DOCUMENT_12
col 107 : FLAG_DOCUMENT_13
col 108 : FLAG_DOCUMENT_14
col 109 : FLAG_DOCUMENT_15
col 110 : FLAG_DOCUMENT_16
col 111 : FLAG_DOCUMENT_17
col 112 : FLAG_DOCUMENT_18
col 113 : FLAG_DOCUMENT_19
col 114 : FLAG_DOCUMENT_20
col 115 : FLAG_DOCUMENT_21
col 116 : AMT_REQ_CREDIT_BUREAU_HOUR
col 117 : AMT_REQ_CREDIT_BUREAU_DAY
col 118 : AMT_REQ_CREDIT_BUREAU_WEEK
col 119 : AMT_REQ_CREDIT_BUREAU_MON
col 120 : AMT_REQ_CREDIT_BUREAU_QRT
col 121 : AMT_REQ_CREDIT_BUREAU_YEAR
###Markdown
Extract part of the data
###Code
# investigate TARGET, AMT_INCOME_TOTAL, AMT_CREDIT, DAYS_EMPLOYED
# partial = app_train[['TARGET', 'AMT_INCOME_TOTAL', 'AMT_CREDIT', 'DAYS_EMPLOYED']]
partial = app_train.iloc[:,[1,7,8,18]]
partial
partial.describe()
###Output
_____no_output_____
###Markdown
There are countless other data operations; what matters most depends on the situations you encounter in practice and the questions you want to ask. We will gradually introduce more examples during the marathon
###Code
head_10 = app_train.head(10) #first 10 rows
head_10
###Output
_____no_output_____
###Markdown
Homework: apply One Hot encoding to the data fragment sub_train below, and observe how the number of columns (using shape) and the column names (using head) change before and after the conversion
###Code
sub_train = pd.DataFrame(app_train['WEEKDAY_APPR_PROCESS_START'])
print(sub_train.shape)
sub_train.head()
trans_data = pd.get_dummies(data=sub_train)
print('shape: {}'.format(trans_data.shape))
trans_data.head()
###Output
shape: (307511, 7)
###Markdown
Practice time. There are many ways to manipulate data; in the rest of this marathon we will introduce the most commonly used operations. Participants may first imagine for themselves: when seeing a dataset for the first time, what do we usually want to know? Ex: how to get the number of rows and columns, what columns exist, how many columns there are, how to extract part of the data, and so on. Once we are curious about the data, how do we achieve these goals with code? You can refer to this [basic tutorial](https://bookdata.readthedocs.io/en/latest/base/01_pandas.htmlDataFrame-%E5%85%A5%E9%97%A8) or google it yourself. [Homework goal]- Get familiar with more Python data operations [Homework focus]- List the size of the data (In[4], Hint: shape)- List all columns (In[5], there are several ways)- Extract part of the data (In[6], Hint: loc or iloc)
###Code
import os
import numpy as np
import pandas as pd
# Set data_path
dir_data = './data/'
f_app = os.path.join(dir_data, 'application_train.csv')
print('Path of read in data: %s' % (f_app))
app_train = pd.read_csv(f_app)
###Output
_____no_output_____
###Markdown
If you have no ideas yet, start by trying to answer the questions mentioned in the example above Number of rows and columns of the data
###Code
print(app_train.shape)
###Output
_____no_output_____
###Markdown
List all columns
###Code
app_train.columns
###Output
_____no_output_____
###Markdown
Extract part of the data
###Code
app_train.iloc[:10, 0:5]
###Output
_____no_output_____
###Markdown
Practice time. There are many ways to manipulate data; in the rest of this marathon we will introduce the most commonly used operations. Participants may first imagine for themselves: when seeing a dataset for the first time, what do we usually want to know? Ex: how to get the number of rows and columns, what columns exist, how many columns there are, how to extract part of the data, and so on. Once we are curious about the data, how do we achieve these goals with code? You can refer to this [basic tutorial](https://bookdata.readthedocs.io/en/latest/base/01_pandas.htmlDataFrame-%E5%85%A5%E9%97%A8) or google it yourself. [Homework goal]- Get familiar with more Python data operations [Homework focus]- List the size of the data (In[4], Hint: shape)- List all columns (In[5], there are several ways)- Extract part of the data (In[6], Hint: loc or iloc)
###Code
import os
import numpy as np
import pandas as pd
# Set data_path
dir_data = './data/'
f_app = os.path.join('HomeCredit_columns_description.csv')
# print('Path of read in data: %s' % (f_app))
app_train = pd.read_csv(f_app, encoding = 'ISO-8859-1')
###Output
_____no_output_____
###Markdown
If you have no ideas yet, start by trying to answer the questions mentioned in the example above Number of rows and columns of the data
###Code
app_train.info()
###Output
<class 'pandas.core.frame.DataFrame'>
RangeIndex: 219 entries, 0 to 218
Data columns (total 5 columns):
# Column Non-Null Count Dtype
--- ------ -------------- -----
0 Unnamed: 0 219 non-null int64
1 Table 219 non-null object
2 Row 219 non-null object
3 Description 219 non-null object
4 Special 86 non-null object
dtypes: int64(1), object(4)
memory usage: 8.7+ KB
###Markdown
List all columns
###Code
for x in app_train.columns:
print(x)
###Output
Unnamed: 0
Table
Row
Description
Special
###Markdown
Extract part of the data
###Code
app_train.head()
###Output
_____no_output_____
###Markdown
There are countless other data operations; what matters most depends on the situations you encounter in practice and the questions you want to ask. We will gradually introduce more examples during the marathon
###Code
###Output
_____no_output_____ |
sopron_2018_notebooks/pySUMMA_Demo_Example_Fig9_Using_TestCase_in_Local.ipynb | ###Markdown
Modeling the Impact of Lateral Flow Parameterizations on Basin Wide Runoff in the Reynolds Mountain East catchment using pySUMMA 1. Introduction One part of the Clark et al. (2015) study explored the impact of the lateral flux of liquid water on basin-wide runoff using a SUMMA model for the Reynolds Mountain East catchment. This study looked at the sensitivity of the simulations to the different model representations of the lateral flux of liquid water, which determines the availability of soil water. In this Jupyter Notebook, the pySUMMA library is used to reproduce this analysis. First, the lateral flux from the soil profile is described. Next, the Methods section describes how pySUMMA can be used to create three different lateral flow representations of the Reynolds Mountain East catchment model: 1d Richards', lumped topmodel, and distributed topmodel. The Results section shows how to use pySUMMA and the Pandas library to reproduce Figure 9 from Clark et al. (2015). Collectively, this Jupyter Notebook serves as an example of how hydrologic modeling can be conducted directly within a Jupyter Notebook by leveraging the pySUMMA library. | Method | 1dRichards' | Lumped Topmodel | Distributed Topmodel | |---------------------------------------------|-------------|-------------------|------------------------| | groundwater parameterization | noXplict | qTopmodl | qTopmodl | | hydraulic conductivity profile | constant | pow_prof | pow_prof | | lower boundary condition for soil hydrology | drainage | zeroFlux | zeroFlux | | thermal conductivity representation for soil | mixConstit | funcSoilWet | funcSoilWet | 2. Background The lateral flux from the soil profile available in SUMMA
###Code
#import libraries to display equations within the notebook
from IPython.display import display, Math, Latex
###Output
_____no_output_____
###Markdown
Lateral flux from the soil profile The soil columns can be hydrologically connected, such that the lateral flux from upslope soil columns is the inflow to downslope soil columns, or hydrologically-disconnected (using one or many soil columns), in which case the lateral flux of water from soil columns is assumed to flow directly into the river network. The continuity equation for sub-surface storage (i.e., below the water table) can be written for a given model element as [Wigmosta et al., 1994]\begin{equation*}\phi_{dr} \frac{dz_{wt}}{dt} = \frac{Q_{out}-Q_{in}}{A} - q_{rchg}\end{equation*}$\phi_{dr} = (\theta_{sat}^{soil} - \theta_{fc}^{soil}) $ : "drainable" porosity, $\theta_{fc}^{soil}$ : the field capacity of soil, $z_{wt}$ $(m)$ : the depth to the water table, $Q_{out}$ and $Q_{in}$ $(m^{3}/s)$: the lateral outflow and inflow, $q_{rchg}$ $(m/s)$ : the vertical recharge rate, $A$ $(m^2)$ : the element area Storage-based implementation to represent lateral flow between soil columns The "drainable" water storage and the maximum drainable water storage can be given as\begin{equation*}W_{dr}^{soil} = \int_{z_{crit}}^{z_{soil}}\ [\theta_{liq}^{soil} (z) - \theta_{fc}^{soil} ] \mathrm{d}z, \ W_{dr,max}^{soil} = \phi_{dr}z_{soil}\end{equation*}$\theta_{liq}^{soil} (z)$ : the volumetric liquid water content at soil depth z, $z_{crit}$ : the lowest point in the soil profile where $\theta_{liq}^{soil}$ < $\theta_{fc}^{soil}$ The total lateral outflow \begin{equation*}Q_{out} = x_{len}\tan(\beta) \frac{K_{sat}^{0} W_{dr,max}^{soil}}{\phi_{dr}n_{sf}}\left[\frac{W_{dr}^{soil}}{W_{dr,max}^{soil}}\right]^{n_{sf}}\end{equation*}$\beta$ : the gradient in the land surface, used to approximate the water table gradient The total lateral flux \begin{equation*}q_{base}^{soil} = \frac{Q_{out}-Q_{in}}{A}\end{equation*}The total lateral flux $q_{base}^{soil}$ can then be apportioned to individual soil layers, obtained after the spatial discretization described in Clark et al. [2015b], to provide the lateral flow sink term \begin{equation*}(S_{lf})_{j} = (w_{tv})_{j} q_{base}^{soil}\end{equation*}$(w_{tv})_{j}$ : the ratio of the transmissivity of the $j$-th layer to the total transmissivity The above descriptions are taken from the lateral flux from the soil profile section (3.2.3.5) within the manual Structure for Unifying Multiple Modeling Alternatives (SUMMA), Version 1.0: Technical Description (April, 2015). 3. Methods 1) Study Area The Reynolds Mountain East catchment is located in southwestern Idaho as shown in the figure below.
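To make the units concrete, the next cell evaluates the storage-based outflow formula once with made-up parameter values (every number below is a placeholder, not a Reynolds Mountain East parameter).
###Code
# Hypothetical numbers, only to illustrate the units in the storage-based outflow formula above.
import math

x_len  = 100.0               # hillslope width (m)                  -- placeholder
beta   = math.radians(10.0)  # land-surface slope                   -- placeholder
k_sat0 = 1.0e-5              # surface saturated conductivity (m/s) -- placeholder
phi_dr = 0.1                 # drainable porosity (-)               -- placeholder
z_soil = 2.0                 # soil depth (m)                       -- placeholder
n_sf   = 3.0                 # shape exponent (-)                   -- placeholder
area   = 1.0e4               # element area (m^2)                   -- placeholder
q_in   = 0.0                 # lateral inflow (m^3/s)               -- placeholder

w_max = phi_dr * z_soil      # maximum drainable storage (m)
w_dr  = 0.5 * w_max          # current drainable storage (m)        -- placeholder

q_out = x_len * math.tan(beta) * (k_sat0 * w_max / (phi_dr * n_sf)) * (w_dr / w_max) ** n_sf
q_base = (q_out - q_in) / area   # total lateral flux (m/s)
q_out, q_base
###Output
_____no_output_____
###Markdown
Note how $Q_{out}$ scales with the hillslope width and slope, while the exponent $n_{sf}$ controls how quickly the outflow shuts down as drainable storage falls; the interactive map in the next cell shows the catchment itself.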
###Code
from ipyleaflet import Map, GeoJSON
import json
m = Map(center=[43.06745, -116.75489], zoom=15)
with open('reynolds_geojson_latlon.geojson') as f:
data = json.load(f)
g = GeoJSON(data=data)
m.add_layer(g)
m
###Output
_____no_output_____
###Markdown
2) Create pySUMMA Simulation Object of 1d Richards method and Run SUMMA Model
###Code
from pysumma.Simulation import Simulation
from pysumma.Plotting import Plotting
# create a pySUMMA simulation object using the SUMMA 'file manager' input file
S_1dRichards = Simulation('/glade/u/home/ydchoi/summaTestCases_2.x/settings/wrrPaperTestCases/figure09/summa_fileManager_1dRichards.txt')
# set SUMMA executable file
S_1dRichards.executable = "/glade/u/home/ydchoi/summa/bin/summa.exe"
# check the simulation start and finish times
S_1dRichards.decision_obj.simulStart.value, S_1dRichards.decision_obj.simulFinsh.value
# check option and selected method of (11) choice of groundwater parameterization in Decision file
S_1dRichards.decision_obj.groundwatr.options, S_1dRichards.decision_obj.groundwatr.value
# check option and selected method of (12) choice of hydraulic conductivity profile in Decision file
S_1dRichards.decision_obj.hc_profile.options, S_1dRichards.decision_obj.hc_profile.value
# check option and selected method of (16) type of lower boundary condition for soil hydrology in Decision file
S_1dRichards.decision_obj.bcLowrSoiH.options, S_1dRichards.decision_obj.bcLowrSoiH.value
# check option and selected method of (27) choice of thermal conductivity representation for soil in Decision file
S_1dRichards.decision_obj.thCondSoil.options, S_1dRichards.decision_obj.thCondSoil.value
# check Basin variable meta data in file manager file
S_1dRichards.meta_basinvar.filename
# check Basin Parameter info data in file manager file
S_1dRichards.basin_par.filename
# check Forcing list data in file manager file
S_1dRichards.forcing_list.filename
# check Initial condition data in file manager file
S_1dRichards.initial_cond.filename
###Output
_____no_output_____
###Markdown
If you have output file, you don't need to run SUMMA. Move next
###Code
# run the model giving the output the suffix "1dRichards_local" and get "results_1dRichards" object
results_1dRichards, output_R = S_1dRichards.execute(run_suffix="1dRichards_local", run_option = 'local')
R = Plotting(output_R)
results_1dRichards = R.open_netcdf()
###Output
_____no_output_____
###Markdown
4) Create pySUMMA Simulation Object of Lumped Topmodel method and Run SUMMA Model
###Code
# create a pySUMMA simulation object using the SUMMA 'file manager' input file
S_lumpedTopmodel = Simulation('/glade/u/home/ydchoi/summaTestCases_2.x/settings/wrrPaperTestCases/figure09/summa_fileManager_lumpedTopmodel.txt')
# set SUMMA executable file
S_lumpedTopmodel.executable = "/glade/u/home/ydchoi/summa/bin/summa.exe"
# check the simulation start and finish times
S_lumpedTopmodel.decision_obj.simulStart.value, S_lumpedTopmodel.decision_obj.simulFinsh.value
# check option and selected method of (11) choice of groundwater parameterization in Decision file
S_lumpedTopmodel.decision_obj.groundwatr.options, S_lumpedTopmodel.decision_obj.groundwatr.value
# check option and selected method of (12) choice of hydraulic conductivity profile in Decision file
S_lumpedTopmodel.decision_obj.hc_profile.options, S_lumpedTopmodel.decision_obj.hc_profile.value
# check option and selected method of (16) type of lower boundary condition for soil hydrology in Decision file
S_lumpedTopmodel.decision_obj.bcLowrSoiH.options, S_lumpedTopmodel.decision_obj.bcLowrSoiH.value
# check option and selected method of (27) choice of thermal conductivity representation for soil in Decision file
S_lumpedTopmodel.decision_obj.thCondSoil.options, S_lumpedTopmodel.decision_obj.thCondSoil.value
# check Basin variable meta data in file manager file
S_lumpedTopmodel.meta_basinvar.filename
# check Basin Parameter info data in file manager file
S_lumpedTopmodel.basin_par.filename
# check Forcing list data in file manager file
S_lumpedTopmodel.forcing_list.filename
# check Initial condition data in file manager file
S_lumpedTopmodel.initial_cond.filename
###Output
_____no_output_____
###Markdown
If you have output file, you don't need to run SUMMA. Move next
###Code
# run the model giving the output the suffix "lumpedTopmodel_local" and get "results_lumpedTopmodel" object
results_lumpedTopmodel, output_LT = S_lumpedTopmodel.execute(run_suffix="lumpedTopmodel_local", run_option = 'local')
L = Plotting(output_LT)
results_lumpedTopmodel = L.open_netcdf()
###Output
_____no_output_____
###Markdown
5) Create pySUMMA Simulation Object of Distributed Topmodel method and Run SUMMA Model
###Code
# create a pySUMMA simulation object using the SUMMA 'file manager' input file
S_distributedTopmodel = Simulation('/glade/u/home/ydchoi/summaTestCases_2.x/settings/wrrPaperTestCases/figure09/summa_fileManager_distributedTopmodel.txt')
# set SUMMA executable file
S_distributedTopmodel.executable = "/glade/u/home/ydchoi/summa/bin/summa.exe"
# check the simulation start and finish times
S_distributedTopmodel.decision_obj.simulStart.value, S_distributedTopmodel.decision_obj.simulFinsh.value
# check option and selected method of (11) choice of groundwater parameterization in Decision file
S_distributedTopmodel.decision_obj.groundwatr.options, S_distributedTopmodel.decision_obj.groundwatr.value
# check option and selected method of (12) choice of hydraulic conductivity profile in Decision file
S_distributedTopmodel.decision_obj.hc_profile.options, S_distributedTopmodel.decision_obj.hc_profile.value
# check option and selected method of (16) type of lower boundary condition for soil hydrology in Decision file
S_distributedTopmodel.decision_obj.bcLowrSoiH.options, S_distributedTopmodel.decision_obj.bcLowrSoiH.value
# check option and selected method of (27) choice of thermal conductivity representation for soil in Decision file
S_distributedTopmodel.decision_obj.thCondSoil.options, S_distributedTopmodel.decision_obj.thCondSoil.value
# check Basin variable meta data in file manager file
S_distributedTopmodel.meta_basinvar.filename
# check Basin Parameter info data in file manager file
S_distributedTopmodel.basin_par.filename
# check Forcing list data in file manager file
S_distributedTopmodel.forcing_list.filename
# check Initial condition data in file manager file
S_distributedTopmodel.initial_cond.filename
###Output
_____no_output_____
###Markdown
If you have output file, you don't need to run SUMMA. Move next
###Code
# run the model giving the output the suffix "distributedTopmodel_local" and get "results_distributedTopmodel" object
results_distributedTopmodel, output_DT = S_distributedTopmodel.execute(run_suffix="distributedTopmodel_local", run_option = 'local')
D = Plotting(output_DT)
results_distributedTopmodel = D.open_netcdf()
###Output
_____no_output_____
###Markdown
4. Results Recreate the Figure 9 plot from Clark et al., 2015: The Basin-Wide Runoff for the model representation of the lateral flux of liquid water
###Code
from pysumma.Plotting import Plotting
from jupyterthemes import jtplot
import matplotlib.pyplot as plt
import pandas as pd
###Output
_____no_output_____
###Markdown
4.1) Create function to calculate daily runoff from SUMMA output for the period 1 oct 2002 to 1 oct 2008
###Code
def calc_total_runoff(runoff_output_df):
# average Instance Runoff variable is runoff
runoff = runoff_output_df['averageInstantRunoff']
dates = runoff.coords['time'].data
    # create data value (Y-axis) attribute from the output netcdf; averageInstantRunoff is in m/s, and * 86400000 converts it to mm/day
data_values = runoff.data*86400000
# create two dimensional tabular data structure
total_runoff_df = pd.DataFrame(data_values, index=dates)
    # round the timestamps to the nearest day (ex. 2006-10-01T00:59:59.99 -> 2006-10-01)
total_runoff_df.index = total_runoff_df.index.round("D")
# set the time period to display plot
total_runoff_df = total_runoff_df.loc["2002-10-01":"2008-10-01"]
    # resample the data to daily mean values
total_runoff_by_daily = total_runoff_df.resample("D").mean()
return total_runoff_by_daily
###Output
_____no_output_____
###Markdown
4.2) Get daily runoff
###Code
# get daily runoff output using1d Richards method(1d Richards method appied 1 hru)
daily_1dRichards = calc_total_runoff(results_1dRichards)
# get daily runoff output using lumped Topmodel method (lumped Topmodel method appied 1 hru)
daily_lumpedTopmodel = calc_total_runoff(results_lumpedTopmodel)
# get daily runoff output using lumped Topmodel method (lumped Topmodel method appied 6 hru)
daily_distributedTopmodel = calc_total_runoff(results_distributedTopmodel)
###Output
_____no_output_____
###Markdown
4.3) Combine the different lateral flux parameterizations on simulations of runoff into a single Pandas Dataframe
###Code
# Combine the different lateral flux parameterizations on simulations of runoff
Runoff_Combine = pd.concat([daily_1dRichards, daily_lumpedTopmodel, daily_distributedTopmodel], axis=1)
# add label
Runoff_Combine.columns = ["Baseflow = 1D Richards'", 'Baseflow = Topmodel(lumped)', 'Baseflow = Topmodel(distributed)']
Runoff_Combine.head()
###Output
_____no_output_____
###Markdown
4.4) Add obervation data in streamflow station in Reynolds Mountain East to the plot
###Code
# create pySUMMA Plotting Object
Val_Streamflow = Plotting('/glade/u/home/ydchoi/summaTestCases_2.x/testCases_data/validationData/ReynoldsCreek_valData.nc')
# read Runoff data(Q) from validation netcdf file
obs_streamflow = Val_Streamflow.ds['Q']
# create dates(X-axis) attribute from validation netcdf file
dates = obs_streamflow.coords['time'].data
# Change unit from cm/hr to mm/day
data_values = obs_streamflow.data*24*10
# create two dimensional tabular data structure
df = pd.DataFrame(data_values, index=dates)
# set the time period to display plot
df_filt = df.loc["2002-10-01":"2008-10-01"]
# select label
df_filt.columns = ['Observations']
# resample data by the average daily from hourly
obs_streamflow_daily = df_filt.resample("D").mean()
# set x index accoording to the change of time step
obs_date = obs_streamflow_daily.index
###Output
_____no_output_____
###Markdown
4.5) Plotting output of the Parameterization of the Lateral Flux of Liquid Water and observation data
###Code
graph_runoff = pd.concat([Runoff_Combine, obs_streamflow_daily], 1)
graph_runoff.plot()
jtplot.figsize(x=20, y=10)
###Output
_____no_output_____ |
DAY 101 ~ 200/DAY200_[leetCode] Reformat Date (Python).ipynb | ###Markdown
Sunday, September 6, 2020 leetCode - Reformat Date (Python) Problem: https://leetcode.com/problems/reformat-date/ Blog: https://somjang.tistory.com/entry/leetCode-1507-Reformat-Date-Python First attempt
###Code
class Solution:
def reformatDate(self, date: str) -> str:
month_dict = {"Jan":"01", "Feb":"02", "Mar":"03", "Apr":"04", "May":"05", "Jun":"06", "Jul":"07", "Aug":"08", "Sep":"09", "Oct":"10", "Nov":"11", "Dec":"12"}
split_date = date.split(" ")
day = split_date[0][:-2]
if len(day) == 1:
day = "0" + day
answer = "{}-{}-{}".format(split_date[2], month_dict[split_date[1]], day)
return answer
###Output
_____no_output_____ |
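###Markdown
A quick check of the solution with two dates in the problem's "Day Month Year" format; tracing the code above, the expected outputs are "2052-10-20" and "1933-06-06".
###Code
print(Solution().reformatDate("20th Oct 2052"))
print(Solution().reformatDate("6th Jun 1933"))
###Output
_____no_output_____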
Model_Attempts/Fourteenth_Try_Model.ipynb | ###Markdown
King County Dataset Linear Regression Model 14 Adjustments for this model: Start with getting rid of 'id', Then deal with the NaN's in 'view', 'yr_renovated', 'waterfront', and 'sqft_basement' Change "?" in 'sqft_basement' Take care of outlier in bedrooms Deal with the date feature Bin: 'view', 'grade', 'sqft_basement', 'yr_renovated', 'waterfront', 'condition' Log Transform: 'sqft_above', 'sqft_living','sqft_lot', 'sqft_living15', 'sqft_lot15' are skewed right. Max/Min: Standardization: 'sqft_above', 'sqft_living','sqft_lot','sqft_living15', 'sqft_lot15'
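The log-transform and standardization steps from this list are not reached in the cells shown here, so a small sketch of one possible implementation is given first; the helper name and the use of scikit-learn's StandardScaler are assumptions, not this notebook's final choice.
###Code
# Hypothetical helper for the "Log Transform" and "Standardization" steps listed above.
import numpy as np
from sklearn.preprocessing import StandardScaler

def log_and_standardize(df, cols=('sqft_above', 'sqft_living', 'sqft_lot',
                                  'sqft_living15', 'sqft_lot15')):
    cols = list(cols)
    out = df.copy()
    out[cols] = np.log1p(out[cols])                        # tame the right skew
    out[cols] = StandardScaler().fit_transform(out[cols])  # zero mean, unit variance
    return out
###Output
_____no_output_____
###Markdown
log1p is used rather than log so that a zero square-footage value would not produce -inf.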
###Code
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
%matplotlib inline
data = pd.read_csv("kc_house_data.csv")
data.head()
data.describe()
# Change "?" in 'sqft_basement' to '0';
data.sqft_basement = data.sqft_basement.replace(to_replace = '?', value = '0')
# Account for missing data in 'waterfront', 'view', 'yr_renovated', and 'sqft_basement';
data.waterfront.fillna(value=data.waterfront.median(), inplace = True)
data.view.fillna(value=data.view.median(), inplace = True)
data.yr_renovated.fillna(value=data.yr_renovated.median(), inplace = True)
data.sqft_basement.fillna(value=data.sqft_basement.median(), inplace = True)
# Change outlier '33' to '3' in 'bedrooms';
data.at[15856,'bedrooms'] = 3
#Old version. Still not working!
####import datetime as dt
####data['date'] = pd.to_datetime(data.date)
# Convert the 'date' feature to datetime;
import datetime as dt
#Run this code first and then change it!
data["date"] = pd.to_datetime(data["date"], format = "%m/%d/%Y")
# I want day first, but it won't work this way.
#data["date"] = pd.to_datetime(data["date"], format = "%d/%m/%Y")
plt.hist(data.date)
# Change 'sqft_basement' from an object to a float:
data['sqft_basement'] = data['sqft_basement'].astype(float)
data = data.drop(["id"], axis=1)
data.bathrooms.hist()
data.bathrooms.describe()
data.loc[data["bathrooms"] == 8]
new_figure = plt.figure(figsize=(8,8))
ax1 = plt.subplot(2, 2, 1)
data.sqft_above.hist(ax=ax1)
ax1.set_title("sqft_above")
ax2 = plt.subplot(2, 2, 2)
data.sqft_living.hist(ax=ax2)
ax2.set_title('sqft_living')
ax3 = plt.subplot(2, 2, 3)
data.sqft_living15.hist(ax=ax3)
ax3.set_title("sqft_living15")
ax4 = plt.subplot(2, 2, 4)
data.sqft_lot15.hist(ax=ax4)
ax4.set_title('sqft_lot15')
# Drop homes with 7 or more bathrooms
data = data[data.bathrooms != 8]
data = data[data.bathrooms != 7.75]
data = data[data.bathrooms != 7.5]
data = data[data.bathrooms < 7]
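# Note: the final "< 7" filter already excludes 8, 7.75 and 7.5 bathrooms,
# so the three preceding drops are redundant but harmless.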
data.bathrooms.std()*4 + data.bathrooms.mean()
len(data.loc[data["bathrooms"] > 6])
data.bedrooms.hist()
data.sqft_living.hist()
data.sqft_living.mean()
data.sqft_living.std()
data.sqft_living.std()*3
data.sqft_living.mean() + data.sqft_living.std()*3
len(data.loc[data["sqft_living"] > 4810.99])
data = data[data.sqft_living < 4810.99]
data.sqft_lot.hist()
data.sqft_lot.mean()
# Three standard deviations is about a half acre
data.sqft_lot.std()*3
data.sqft_lot.mean()+data.sqft_lot.std()*4
# Number of homes with a lot larger than 178,066 sqft (about mean + 4 std, roughly 4 acres)
len(data.loc[data["sqft_lot"] > 178066])
data = data[data.sqft_lot < 178066]
data.sqft_lot.hist()
data.sqft_above.hist()
data.sqft_above.describe()
data.sqft_above.std()*3
data.sqft_above.mean()+data.sqft_above.std()*4
len(data.loc[data["sqft_above"] > 4938])
data = data[data.sqft_above < 4938]
data.sqft_above.hist()
data.yr_built.unique()
# Left skewed.
data.yr_built.hist()
data.yr_built.mean()-data.yr_built.std()*3
# Left skewed? Do I need to normalize this?
data.yr_renovated.describe()
data.yr_renovated.hist()
data.yr_renovated.describe()
data.sqft_living15.hist()
data.sqft_living15.mean()+data.sqft_living15.std()*3
len(data.loc[data["sqft_living15"] > 3944.71])
# Let's get rid of the outliers
data = data[data.sqft_living15 < 3944.71]
data.sqft_living15.hist()
data.sqft_lot15.std()*3
data.sqft_lot15.mean()+data.sqft_lot15.std()*3
len(data.loc[data["sqft_lot15"] >60726.64])
# Let's get rid of the outliers
data = data[data.sqft_lot15 < 60726.64]
data.sqft_lot15.hist()
new_figure = plt.figure(figsize=(8,8))
ax1 = plt.subplot(2, 2, 1)
data.sqft_above.hist(ax=ax1)
ax1.set_title("sqft_above")
ax2 = plt.subplot(2, 2, 2)
data.sqft_living.hist(ax=ax2)
ax2.set_title('sqft_living')
ax3 = plt.subplot(2, 2, 3)
data.sqft_living15.hist(ax=ax3)
ax3.set_title("sqft_living15")
ax4 = plt.subplot(2, 2, 4)
data.sqft_lot15.hist(ax=ax4)
ax4.set_title('sqft_lot15')
data.describe()
import seaborn as sns
sns.regplot(x="price", y="yr_renovated", data=data)
#import statsmodels.api as sm
#import numpy as np
#import matplotlib.pyplot as plt
#Y = pd.DataFrame(data, columns = ['price'])
#X = data.drop(["price"], axis=1)
#results = sm.OLS(Y, X)
#results
#plt.scatter(X,Y)
#X_plot = np.linspace(0,1,100)
#plt.plot(X_plot, X_plot*results.params[0] + results.params[1])
#plt.show()
# 20 x 20 grid of 400 plots! Takes a while
#sns.pairplot(data)
#import seaborn as sns
#for col in data.columns:
# sns.regplot(x="price", y=col, data=data)
#import seaborn as sns
#for col in data.columns:
# sns.lmplot(x=data['price'], y=col, data=data, fit_reg=True)
#import seaborn as sns
#sns.regplot(x="price", y="bedrooms", data=data)
sns.lmplot( x='bedrooms', y='price', data=data, fit_reg=True)
sns.lmplot( x='bedrooms', y='price', data=data, fit_reg=True)
sns.lmplot( x='sqft_living', y='price', data=data, fit_reg=True)
sns.lmplot( x='grade', y='price', data=data, fit_reg=True)
sns.lmplot( x='condition', y='price', data=data, fit_reg=True)
sns.lmplot( x='yr_built', y='price', data=data, fit_reg=True)
sns.lmplot( x='zipcode', y='price', data=data, fit_reg=True)
#for xcol in data.columns:
# sns.lmplot( x=xcol, y='price', data=data, fit_reg=True)
# Create bins for 'yr_renovated' based on the values observed. 6 bin edges will result in 5 bins
bins_A = [0, 1900, 1990, 2000, 2008, 2015]
bins_yr_renovated = pd.cut(data['yr_renovated'], bins_A)
#bins_yr_renovated = bins_yr_renovated.as_unordered()
yr_renovated_dummy = pd.get_dummies(bins_yr_renovated, prefix="yr_ren")
data = data.drop(["yr_renovated"], axis=1)
data = pd.concat([data, yr_renovated_dummy], axis=1)
# Create bins for 'sqft_basement' based on the values observed. 3 values will result in 2 bins
bins_B = [0, 100, 5000]
bins_sqft_basement = pd.cut(data['sqft_basement'], bins_B)
sqft_basement_dummy = pd.get_dummies(bins_sqft_basement, prefix="sqft_base", drop_first=True)
data = data.drop(["sqft_basement"], axis=1)
data = pd.concat([data, sqft_basement_dummy], axis=1)
# Create bins for 'view' based on the values observed. 3 values will result in 2 bins
bins_C = [0, 2, 4]
bins_view = pd.cut(data['view'], bins_C)
view_dummy = pd.get_dummies(bins_view, prefix="new_view", drop_first=True)
data = data.drop(["view"], axis=1)
data = pd.concat([data, view_dummy], axis=1)
# Create bins for 'grade' based on the values observed. 4 values will result in 3 bins
bins_D = [0, 5, 7, 13]
bins_grade = pd.cut(data['grade'], bins_D)
grade_dummy = pd.get_dummies(bins_grade, prefix="new_grade", drop_first=True)
data = data.drop(["grade"], axis=1)
data = pd.concat([data, grade_dummy], axis=1)
# Create bins for 'waterfront' based on the values observed. 3 values will result in 2 bins
bins_E = [0, 0.5, 1]
bins_waterfront = pd.cut(data['waterfront'], bins_E)
waterfront_dummy = pd.get_dummies(bins_waterfront, prefix="new_waterfront", drop_first=True)
data = data.drop(["waterfront"], axis=1)
data = pd.concat([data, waterfront_dummy], axis=1)
# Create bins for 'condition' based on the values observed. 4 values will result in 3 bins
bins_G = [0, 3, 4, 5]
bins_condition = pd.cut(data['condition'], bins_G)
condition_dummy = pd.get_dummies(bins_condition, prefix="new_condition", drop_first=True)
data = data.drop(["condition"], axis=1)
data = pd.concat([data, condition_dummy], axis=1)
###Output
_____no_output_____
###Markdown
Log Transformation: These features have right-skewed histograms: 'sqft_above', 'sqft_lot', 'sqft_living', 'sqft_living15', 'sqft_lot15'
###Code
# Perform log transformation
logabove = np.log(data["sqft_above"])
loglot = np.log(data["sqft_lot"])
logliving = np.log(data["sqft_living"])
loglivingnear = np.log(data["sqft_living15"])
loglotnear = np.log(data["sqft_lot15"])
# Copy the Standardizations into the dataset
data["sqft_above"] = (logabove-np.mean(logabove))/np.sqrt(np.var(logabove))
data["sqft_lot"] = (loglot-np.mean(loglot))/np.sqrt(np.var(loglot))
data["sqft_living"] = (logliving-np.mean(logliving))/np.sqrt(np.var(logliving))
data["sqft_living15"] = (loglivingnear-np.mean(loglivingnear))/np.sqrt(np.var(loglivingnear))
data["sqft_lot15"] = (loglotnear-np.mean(loglotnear))/(np.sqrt(np.var(loglotnear)))
y = pd.DataFrame(data, columns = ['price'])
X = data.drop(["price", "zipcode", 'date', 'floors'], axis=1)
import statsmodels.api as sm
model = sm.OLS(y,X).fit()
model.summary()
# Perform a train-test split
from sklearn.model_selection import train_test_split
X_train, X_test, y_train, y_test = train_test_split(X, y)
# A brief preview of our train test split
print(len(X_train), len(X_test), len(y_train), len(y_test))
# Apply your model to the train set
from sklearn.linear_model import LinearRegression
linreg = LinearRegression()
linreg.fit(X_train, y_train)
LinearRegression(copy_X=True, fit_intercept=True, n_jobs=1, normalize=False)
# Calculate predictions on training and test sets
y_hat_train = linreg.predict(X_train)
y_hat_test = linreg.predict(X_test)
# Calculate training and test residuals
train_residuals = y_hat_train - y_train
test_residuals = y_hat_test - y_test
#Calculate the Mean Squared Error (MSE)
from sklearn.metrics import mean_squared_error
train_mse = mean_squared_error(y_train, y_hat_train)
test_mse = mean_squared_error(y_test, y_hat_test)
print('Train Mean Squared Error:', train_mse)
print('Test Mean Squared Error:', test_mse)
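# A test MSE far above the train MSE would suggest overfitting; values of a similar
# magnitude suggest the model generalises reasonably well to unseen data.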
#Evaluate the effect of train-test split
import random
random.seed(8)
train_err = []
test_err = []
t_sizes = list(range(5,100,5))
for t_size in t_sizes:
temp_train_err = []
temp_test_err = []
for i in range(100):
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=t_size/100)
linreg.fit(X_train, y_train)
y_hat_train = linreg.predict(X_train)
y_hat_test = linreg.predict(X_test)
temp_train_err.append(mean_squared_error(y_train, y_hat_train))
temp_test_err.append(mean_squared_error(y_test, y_hat_test))
train_err.append(np.mean(temp_train_err))
test_err.append(np.mean(temp_test_err))
plt.scatter(t_sizes, train_err, label='Training Error')
plt.scatter(t_sizes, test_err, label='Testing Error')
plt.legend()
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import cross_val_score
cv_5_results = np.mean(cross_val_score(linreg, X, y, cv=5, scoring='neg_mean_squared_error'))
cv_5_results
###Output
_____no_output_____
###Markdown
Results R-squared: 0.888! The p-values look better, as does the train-test split.
###Code
from sklearn.metrics import mean_squared_error
from sklearn.model_selection import cross_val_score
cv_5_results = cross_val_score(linreg, X, y, cv=5, scoring='neg_mean_squared_error')
cv_5_results
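# The scores above are negated MSEs (scikit-learn's convention for 'neg_mean_squared_error').
# Converting them to RMSE puts the error back into price units; a small sketch:
cv_5_rmse = np.sqrt(-cv_5_results)
cv_5_rmse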
bins_yr_renovated.value_counts().plot(kind='bar')
plt.xlabel('Bins of Years Remodeled')
plt.ylabel('Number of Homes')
plt.title("King County Homes Renovated", fontsize=18)
plt.show()
bins_yr_renovated.value_counts()
bins_yr_renovated.describe()
X.columns
# Use feature ranking to select the 5 most important features
from sklearn.feature_selection import RFE
from sklearn.linear_model import LinearRegression
linreg = LinearRegression()
selector = RFE(linreg, n_features_to_select = 5)
selector = selector.fit(X, y.values.ravel()) # convert y to 1d np array to prevent DataConversionWarning
selector.support_
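# selector.support_ is a boolean mask over X.columns; True marks the five retained features.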
# Top 5 columns:
# 'lat', 'long', 'renovated recently (2000-2008)', 'view of 2 or 3', 'new_waterfront'
#Fit the linear regression model again using the 5 selected columns
selected_columns = X.columns[selector.support_ ]
linreg.fit(X[selected_columns],y)
# Predict the y_hat
yhat = linreg.predict(X[selected_columns])
yhat
# Compare and contrast two models with the R-squared and adjusted R-squared
SS_Residual = np.sum((y-yhat)**2)
SS_Total = np.sum((y-np.mean(y))**2)
r_squared = 1 - (float(SS_Residual))/SS_Total
adjusted_r_squared = 1 - (1-r_squared)*(len(y)-1)/(len(y)-X[selected_columns].shape[1]-1)
print(r_squared)
print(adjusted_r_squared)
# Not much difference!
###Output
_____no_output_____ |
docs/tutorials/tfx/template.ipynb | ###Markdown
Copyright © 2020 The TensorFlow Authors. Licensed under the Apache License, Version 2.0 (the \"License\");you may not use this file except in compliance with the License.You may obtain a copy of the License at [https://www.apache.org/licenses/LICENSE-2.0](https://www.apache.org/licenses/LICENSE-2.0)Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. Create a TFX pipeline using templates IntroductionThis document will provide instructions to create a TensorFlow Extended (TFX) pipelineusing *templates* which are provided with TFX Python package.Many of the instructions are Linux shell commands, which will run on an AI Platform Notebooks instance. Corresponding Jupyter Notebook code cells which invoke those commands using `!` are provided.You will build a pipeline using [Taxi Trips dataset](https://data.cityofchicago.org/Transportation/Taxi-Trips/wrvz-psew)released by the City of Chicago. We strongly encourage you to try buildingyour own pipeline using your dataset by utilizing this pipeline as a baseline. Step 1. Set up your environment.AI Platform Pipelines will prepare a development environment to build a pipeline, and a Kubeflow Pipeline cluster to run the newly built pipeline.**NOTE:** To select a particular TensorFlow version, or select a GPU instance, create a TensorFlow pre-installed instance in AI Platform Notebooks.**NOTE:** There might be some errors during package installation. For example: >"ERROR: some-package 0.some_version.1 has requirement other-package!=2.0.,<3,>=1.15, but you'll have other-package 2.0.0 which is incompatible." Please ignore these errors at this moment. Install `tfx`, `kfp`, and `skaffold`, and add installation path to the `PATH` environment variable.
###Code
# Install tfx and kfp Python packages.
import sys
!{sys.executable} -m pip install --user --upgrade -q tfx==0.26.0
!{sys.executable} -m pip install --user --upgrade -q kfp==1.0.0
# Download skaffold and set it executable.
!curl -Lo skaffold https://storage.googleapis.com/skaffold/releases/latest/skaffold-linux-amd64 && chmod +x skaffold && mv skaffold /home/jupyter/.local/bin/
# Set `PATH` to include user python binary directory and a directory containing `skaffold`.
PATH=%env PATH
%env PATH={PATH}:/home/jupyter/.local/bin
###Output
_____no_output_____
###Markdown
Let's check the versions of TFX.
###Code
!python3 -c "from tfx import version ; print('TFX version: {}'.format(version.__version__))"
###Output
_____no_output_____
###Markdown
In AI Platform Pipelines, TFX is running in a hosted Kubernetes environment using [Kubeflow Pipelines](https://www.kubeflow.org/docs/pipelines/overview/pipelines-overview/).Let's set some environment variables to use Kubeflow Pipelines.First, get your GCP project ID.
###Code
# Read GCP project id from env.
shell_output=!gcloud config list --format 'value(core.project)' 2>/dev/null
GOOGLE_CLOUD_PROJECT=shell_output[0]
%env GOOGLE_CLOUD_PROJECT={GOOGLE_CLOUD_PROJECT}
print("GCP project ID:" + GOOGLE_CLOUD_PROJECT)
###Output
_____no_output_____
###Markdown
We also need to access your KFP cluster. You can access it in your Google Cloud Console under "AI Platform > Pipeline" menu. The "endpoint" of the KFP cluster can be found from the URL of the Pipelines dashboard, or you can get it from the URL of the Getting Started page where you launched this notebook. Let's create an `ENDPOINT` environment variable and set it to the KFP cluster endpoint. **ENDPOINT should contain only the hostname part of the URL.** For example, if the URL of the KFP dashboard is `https://1e9deb537390ca22-dot-asia-east1.pipelines.googleusercontent.com//start`, ENDPOINT value becomes `1e9deb537390ca22-dot-asia-east1.pipelines.googleusercontent.com`.>**NOTE: You MUST set your ENDPOINT value below.**
###Code
# This refers to the KFP cluster endpoint
ENDPOINT='' # Enter your ENDPOINT here.
if not ENDPOINT:
from absl import logging
logging.error('Set your ENDPOINT in this cell.')
###Output
_____no_output_____
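###Markdown
If you have the full dashboard URL handy, the hostname part can also be pulled out programmatically. The cell below is only a convenience sketch using the Python standard library's `urllib.parse`; the example URL is the one quoted above, and you would substitute your own before copying the result into `ENDPOINT`.
###Code
# Sketch: extract the hostname portion of a KFP dashboard URL.
from urllib.parse import urlparse

example_url = 'https://1e9deb537390ca22-dot-asia-east1.pipelines.googleusercontent.com//start'
print(urlparse(example_url).netloc)
# e.g. ENDPOINT = urlparse(your_dashboard_url).netloc
###Output
_____no_output_____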
###Markdown
Set the image name as `tfx-pipeline` under the current GCP project.
###Code
# Docker image name for the pipeline image.
CUSTOM_TFX_IMAGE='gcr.io/' + GOOGLE_CLOUD_PROJECT + '/tfx-pipeline'
###Output
_____no_output_____
###Markdown
And, it's done. We are ready to create a pipeline. Step 2. Copy the predefined template to your project directory.In this step, we will create a working pipeline project directory and files by copying additional files from a predefined template.You may give your pipeline a different name by changing the `PIPELINE_NAME` below. This will also become the name of the project directory where your files will be put.
###Code
PIPELINE_NAME="my_pipeline"
import os
PROJECT_DIR=os.path.join(os.path.expanduser("~"),"imported",PIPELINE_NAME)
###Output
_____no_output_____
###Markdown
TFX includes the `taxi` template with the TFX Python package. If you are planning to solve a point-wise prediction problem, including classification and regression, this template could be used as a starting point. The `tfx template copy` CLI command copies predefined template files into your project directory.
###Code
!tfx template copy \
--pipeline-name={PIPELINE_NAME} \
--destination-path={PROJECT_DIR} \
--model=taxi
###Output
_____no_output_____
###Markdown
Change the working directory context in this notebook to the project directory.
###Code
%cd {PROJECT_DIR}
###Output
_____no_output_____
###Markdown
>NOTE: Don't forget to change directory in `File Browser` on the left by clicking into the project directory once it is created. Step 3. Browse your copied source filesThe TFX template provides basic scaffold files to build a pipeline, including Python source code, sample data, and Jupyter Notebooks to analyse the output of the pipeline. The `taxi` template uses the same *Chicago Taxi* dataset and ML model as the [Airflow Tutorial](https://www.tensorflow.org/tfx/tutorials/tfx/airflow_workshop).Here is brief introduction to each of the Python files.- `pipeline` - This directory contains the definition of the pipeline - `configs.py` — defines common constants for pipeline runners - `pipeline.py` — defines TFX components and a pipeline- `models` - This directory contains ML model definitions. - `features.py`, `features_test.py` — defines features for the model - `preprocessing.py`, `preprocessing_test.py` — defines preprocessing jobs using `tf::Transform` - `estimator` - This directory contains an Estimator based model. - `constants.py` — defines constants of the model - `model.py`, `model_test.py` — defines DNN model using TF estimator - `keras` - This directory contains a Keras based model. - `constants.py` — defines constants of the model - `model.py`, `model_test.py` — defines DNN model using Keras- `local_runner.py`, `kubeflow_runner.py` — define runners for each orchestration engine You might notice that there are some files with `_test.py` in their name. These are unit tests of the pipeline and it is recommended to add more unit tests as you implement your own pipelines.You can run unit tests by supplying the module name of test files with `-m` flag. You can usually get a module name by deleting `.py` extension and replacing `/` with `.`. For example:
###Code
!{sys.executable} -m models.features_test
!{sys.executable} -m models.keras.model_test
###Output
_____no_output_____
###Markdown
Step 4. Run your first TFX pipelineComponents in the TFX pipeline will generate outputs for each run as [ML Metadata Artifacts](https://www.tensorflow.org/tfx/guide/mlmd), and they need to be stored somewhere. You can use any storage which the KFP cluster can access, and for this example we will use Google Cloud Storage (GCS). A default GCS bucket should have been created automatically. Its name will be `-kubeflowpipelines-default`. Let's upload our sample data to GCS bucket so that we can use it in our pipeline later.
###Code
!gsutil cp data/data.csv gs://{GOOGLE_CLOUD_PROJECT}-kubeflowpipelines-default/tfx-template/data/taxi/data.csv
###Output
_____no_output_____
###Markdown
Let's create a TFX pipeline using the `tfx pipeline create` command.>Note: When creating a pipeline for KFP, we need a container image which will be used to run our pipeline. And `skaffold` will build the image for us. Because skaffold pulls base images from the docker hub, it will take 5~10 minutes when we build the image for the first time, but it will take much less time from the second build.
###Code
!tfx pipeline create \
--pipeline-path=kubeflow_runner.py \
--endpoint={ENDPOINT} \
--build-target-image={CUSTOM_TFX_IMAGE}
###Output
_____no_output_____
###Markdown
While creating a pipeline, `Dockerfile` and `build.yaml` will be generated to build a Docker image. Don't forget to add these files to the source control system (for example, git) along with other source files.A pipeline definition file for [argo](https://argoproj.github.io/argo/) will be generated, too. The name of this file is `${PIPELINE_NAME}.tar.gz`. For example, it will be `my_pipeline.tar.gz` if the name of your pipeline is `my_pipeline`. It is recommended NOT to include this pipeline definition file into source control, because it will be generated from other Python files and will be updated whenever you update the pipeline. For your convenience, this file is already listed in `.gitignore` which is generated automatically.NOTE: `kubeflow` will be automatically selected as an orchestration engine if `airflow` is not installed and `--engine` is not specified.Now start an execution run with the newly created pipeline using the `tfx run create` command.
###Code
!tfx run create --pipeline-name={PIPELINE_NAME} --endpoint={ENDPOINT}
###Output
_____no_output_____
###Markdown
Or, you can also run the pipeline in the KFP Dashboard. The new execution run will be listed under Experiments in the KFP Dashboard. Clicking into the experiment will allow you to monitor progress and visualize the artifacts created during the execution run. However, we recommend visiting the KFP Dashboard. You can access the KFP Dashboard from the Cloud AI Platform Pipelines menu in Google Cloud Console. Once you visit the dashboard, you will be able to find the pipeline, and access a wealth of information about the pipeline.For example, you can find your runs under the *Experiments* menu, and when you open your execution run under Experiments you can find all your artifacts from the pipeline under *Artifacts* menu.>Note: If your pipeline run fails, you can see detailed logs for each TFX component in the Experiments tab in the KFP Dashboard. One of the major sources of failure is permission related problems. Please make sure your KFP cluster has permissions to access Google Cloud APIs. This can be configured [when you create a KFP cluster in GCP](https://cloud.google.com/ai-platform/pipelines/docs/setting-up), or see [Troubleshooting document in GCP](https://cloud.google.com/ai-platform/pipelines/docs/troubleshooting). Step 5. Add components for data validation.In this step, you will add components for data validation including `StatisticsGen`, `SchemaGen`, and `ExampleValidator`. If you are interested in data validation, please see [Get started with Tensorflow Data Validation](https://www.tensorflow.org/tfx/data_validation/get_started).>**Double-click to change directory to `pipeline` and double-click again to open `pipeline.py`**. Find and uncomment the 3 lines which add `StatisticsGen`, `SchemaGen`, and `ExampleValidator` to the pipeline. (Tip: search for comments containing `TODO(step 5):`). Make sure to save `pipeline.py` after you edit it.You now need to update the existing pipeline with modified pipeline definition. Use the `tfx pipeline update` command to update your pipeline, followed by the `tfx run create` command to create a new execution run of your updated pipeline.
###Code
# Update the pipeline
!tfx pipeline update \
--pipeline-path=kubeflow_runner.py \
--endpoint={ENDPOINT}
# You can run the pipeline the same way.
!tfx run create --pipeline-name {PIPELINE_NAME} --endpoint={ENDPOINT}
###Output
_____no_output_____
###Markdown
Check pipeline outputsVisit the KFP dashboard to find pipeline outputs in the page for your pipeline run. Click the *Experiments* tab on the left, and *All runs* in the Experiments page. You should be able to find the latest run under the name of your pipeline. Step 6. Add components for training.In this step, you will add components for training and model validation including `Transform`, `Trainer`, `ResolverNode`, `Evaluator`, and `Pusher`.>**Double-click to open `pipeline.py`**. Find and uncomment the 5 lines which add `Transform`, `Trainer`, `ResolverNode`, `Evaluator` and `Pusher` to the pipeline. (Tip: search for `TODO(step 6):`)As you did before, you now need to update the existing pipeline with the modified pipeline definition. The instructions are the same as Step 5. Update the pipeline using `tfx pipeline update`, and create an execution run using `tfx run create`.
###Code
!tfx pipeline update \
--pipeline-path=kubeflow_runner.py \
--endpoint={ENDPOINT}
!tfx run create --pipeline-name {PIPELINE_NAME} --endpoint={ENDPOINT}
###Output
_____no_output_____
###Markdown
When this execution run finishes successfully, you have now created and run your first TFX pipeline in AI Platform Pipelines!**NOTE:** You might have noticed that every time we create a pipeline run, every component runs again and again even though the input and the parameters were not changed.It is waste of time and resources, and you can skip those executions with pipeline caching. You can enable caching by specifying `enable_cache=True` for the `Pipeline` object in `pipeline.py`. Step 7. (*Optional*) Try BigQueryExampleGen[BigQuery](https://cloud.google.com/bigquery) is a serverless, highly scalable, and cost-effective cloud data warehouse. BigQuery can be used as a source for training examples in TFX. In this step, we will add `BigQueryExampleGen` to the pipeline.>**Double-click to open `pipeline.py`**. Comment out `CsvExampleGen` and uncomment the line which creates an instance of `BigQueryExampleGen`. You also need to uncomment the `query` argument of the `create_pipeline` function.We need to specify which GCP project to use for BigQuery, and this is done by setting `--project` in `beam_pipeline_args` when creating a pipeline.>**Double-click to open `configs.py`**. Uncomment the definition of `GOOGLE_CLOUD_REGION`, `BIG_QUERY_WITH_DIRECT_RUNNER_BEAM_PIPELINE_ARGS` and `BIG_QUERY_QUERY`. You should replace the region value in this file with the correct values for your GCP project.>**Note: You MUST set your GCP region in the `configs.py` file before proceeding.**>**Change directory one level up.** Click the name of the directory above the file list. The name of the directory is the name of the pipeline which is `my_pipeline` if you didn't change.>**Double-click to open `kubeflow_runner.py`**. Uncomment two arguments, `query` and `beam_pipeline_args`, for the `create_pipeline` function.Now the pipeline is ready to use BigQuery as an example source. Update the pipeline as before and create a new execution run as we did in step 5 and 6.
###Code
!tfx pipeline update \
--pipeline-path=kubeflow_runner.py \
--endpoint={ENDPOINT}
!tfx run create --pipeline-name {PIPELINE_NAME} --endpoint={ENDPOINT}
###Output
_____no_output_____
###Markdown
Step 8. (*Optional*) Try Dataflow with KFPSeveral [TFX Components uses Apache Beam](https://www.tensorflow.org/tfx/guide/beam) to implement data-parallel pipelines, and it means that you can distribute data processing workloads using [Google Cloud Dataflow](https://cloud.google.com/dataflow/). In this step, we will set the Kubeflow orchestrator to use dataflow as the data processing back-end for Apache Beam.>**Double-click `pipeline` to change directory, and double-click to open `configs.py`**. Uncomment the definition of `GOOGLE_CLOUD_REGION`, and `DATAFLOW_BEAM_PIPELINE_ARGS`.>**Change directory one level up.** Click the name of the directory above the file list. The name of the directory is the name of the pipeline which is `my_pipeline` if you didn't change.>**Double-click to open `kubeflow_runner.py`**. Uncomment `beam_pipeline_args`. (Also make sure to comment out current `beam_pipeline_args` that you added in Step 7.)Now the pipeline is ready to use Dataflow. Update the pipeline and create an execution run as we did in step 5 and 6.
###Code
!tfx pipeline update \
--pipeline-path=kubeflow_runner.py \
--endpoint={ENDPOINT}
!tfx run create --pipeline-name {PIPELINE_NAME} --endpoint={ENDPOINT}
###Output
_____no_output_____
###Markdown
You can find your Dataflow jobs in [Dataflow in Cloud Console](http://console.cloud.google.com/dataflow).>**Double-click to open `pipeline.py`**. Reset the value of `enable_cache` to `True`. Step 9. (*Optional*) Try Cloud AI Platform Training and Prediction with KFPTFX interoperates with several managed GCP services, such as [Cloud AI Platform for Training and Prediction](https://cloud.google.com/ai-platform/). You can set your `Trainer` component to use Cloud AI Platform Training, a managed service for training ML models. Moreover, when your model is built and ready to be served, you can *push* your model to Cloud AI Platform Prediction for serving. In this step, we will set our `Trainer` and `Pusher` component to use Cloud AI Platform services.>Before editing files, you might first have to enable *AI Platform Training & Prediction API*.>**Double-click `pipeline` to change directory, and double-click to open `configs.py`**. Uncomment the definition of `GOOGLE_CLOUD_REGION`, `GCP_AI_PLATFORM_TRAINING_ARGS` and `GCP_AI_PLATFORM_SERVING_ARGS`. We will use our custom built container image to train a model in Cloud AI Platform Training, so we should set `masterConfig.imageUri` in `GCP_AI_PLATFORM_TRAINING_ARGS` to the same value as `CUSTOM_TFX_IMAGE` above.>**Change directory one level up, and double-click to open `kubeflow_dag_runner.py`**. Uncomment `ai_platform_training_args` and `ai_platform_serving_args`.Update the pipeline and create an execution run as we did in step 5 and 6.
###Code
!tfx pipeline update \
--pipeline-path=kubeflow_runner.py \
--endpoint={ENDPOINT}
!tfx run create --pipeline-name {PIPELINE_NAME} --endpoint={ENDPOINT}
###Output
_____no_output_____
###Markdown
Copyright 2020 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
Create a TFX pipeline using templates Note: We recommend running this tutorial on Google Cloud AI Platform Notebook. [Launch this notebook on AI Platform Notebook](https://console.cloud.google.com/mlengine/notebooks/deploy-notebook?q=download_url%3Dhttps%253A%252F%252Fraw.githubusercontent.com%252Ftensorflow%252Ftfx%252Fmaster%252Fdocs%252Ftutorials%252Ftfx%252Ftemplate.ipynb).View on TensorFlow.orgRun in Google ColabView source on GitHubDownload notebook IntroductionThis document will provide instructions to create a TensorFlow Extended (TFX) pipelineusing *templates* which are provided with TFX Python package.Many of the instructions are Linux shell commands, which will run on an AI Platform Notebooks instance. Corresponding Jupyter Notebook code cells which invoke those commands using `!` are provided.You will build a pipeline using [Taxi Trips dataset](https://data.cityofchicago.org/Transportation/Taxi-Trips/wrvz-psew)released by the City of Chicago. We strongly encourage you to try buildingyour own pipeline using your dataset by utilizing this pipeline as a baseline. Step 1. Set up your environment.AI Platform Pipelines will prepare a development environment to build a pipeline, and a Kubeflow Pipeline cluster to run the newly built pipeline.**NOTE:** To select a particular TensorFlow version, or select a GPU instance, create a TensorFlow pre-installed instance in AI Platform Notebooks. Install `tfx` python package with `kfp` extra requirement.
###Code
import sys
# Use the latest version of pip.
!pip install --upgrade pip
# Install tfx and kfp Python packages.
!pip install --upgrade "tfx[kfp]<2"
###Output
_____no_output_____
###Markdown
Let's check the versions of TFX.
###Code
!python3 -c "from tfx import version ; print('TFX version: {}'.format(version.__version__))"
###Output
_____no_output_____
###Markdown
In AI Platform Pipelines, TFX is running in a hosted Kubernetes environment using [Kubeflow Pipelines](https://www.kubeflow.org/docs/pipelines/overview/pipelines-overview/).Let's set some environment variables to use Kubeflow Pipelines.First, get your GCP project ID.
###Code
# Read GCP project id from env.
shell_output=!gcloud config list --format 'value(core.project)' 2>/dev/null
GOOGLE_CLOUD_PROJECT=shell_output[0]
%env GOOGLE_CLOUD_PROJECT={GOOGLE_CLOUD_PROJECT}
print("GCP project ID:" + GOOGLE_CLOUD_PROJECT)
###Output
_____no_output_____
###Markdown
We also need to access your KFP cluster. You can access it in your Google Cloud Console under "AI Platform > Pipeline" menu. The "endpoint" of the KFP cluster can be found from the URL of the Pipelines dashboard, or you can get it from the URL of the Getting Started page where you launched this notebook. Let's create an `ENDPOINT` environment variable and set it to the KFP cluster endpoint. **ENDPOINT should contain only the hostname part of the URL.** For example, if the URL of the KFP dashboard is `https://1e9deb537390ca22-dot-asia-east1.pipelines.googleusercontent.com//start`, ENDPOINT value becomes `1e9deb537390ca22-dot-asia-east1.pipelines.googleusercontent.com`.>**NOTE: You MUST set your ENDPOINT value below.**
###Code
# This refers to the KFP cluster endpoint
ENDPOINT='' # Enter your ENDPOINT here.
if not ENDPOINT:
from absl import logging
logging.error('Set your ENDPOINT in this cell.')
###Output
_____no_output_____
###Markdown
Set the image name as `tfx-pipeline` under the current GCP project.
###Code
# Docker image name for the pipeline image.
CUSTOM_TFX_IMAGE='gcr.io/' + GOOGLE_CLOUD_PROJECT + '/tfx-pipeline'
###Output
_____no_output_____
###Markdown
And, it's done. We are ready to create a pipeline. Step 2. Copy the predefined template to your project directory.In this step, we will create a working pipeline project directory and files by copying additional files from a predefined template.You may give your pipeline a different name by changing the `PIPELINE_NAME` below. This will also become the name of the project directory where your files will be put.
###Code
PIPELINE_NAME="my_pipeline"
import os
PROJECT_DIR=os.path.join(os.path.expanduser("~"),"imported",PIPELINE_NAME)
###Output
_____no_output_____
###Markdown
TFX includes the `taxi` template with the TFX Python package. If you are planning to solve a point-wise prediction problem, including classification and regression, this template could be used as a starting point. The `tfx template copy` CLI command copies predefined template files into your project directory.
###Code
!tfx template copy \
--pipeline-name={PIPELINE_NAME} \
--destination-path={PROJECT_DIR} \
--model=taxi
###Output
_____no_output_____
###Markdown
Change the working directory context in this notebook to the project directory.
###Code
%cd {PROJECT_DIR}
###Output
_____no_output_____
###Markdown
>NOTE: Don't forget to change directory in `File Browser` on the left by clicking into the project directory once it is created. Step 3. Browse your copied source filesThe TFX template provides basic scaffold files to build a pipeline, including Python source code, sample data, and Jupyter Notebooks to analyse the output of the pipeline. The `taxi` template uses the same *Chicago Taxi* dataset and ML model as the [Airflow Tutorial](https://www.tensorflow.org/tfx/tutorials/tfx/airflow_workshop).Here is brief introduction to each of the Python files.- `pipeline` - This directory contains the definition of the pipeline - `configs.py` — defines common constants for pipeline runners - `pipeline.py` — defines TFX components and a pipeline- `models` - This directory contains ML model definitions. - `features.py`, `features_test.py` — defines features for the model - `preprocessing.py`, `preprocessing_test.py` — defines preprocessing jobs using `tf::Transform` - `estimator` - This directory contains an Estimator based model. - `constants.py` — defines constants of the model - `model.py`, `model_test.py` — defines DNN model using TF estimator - `keras` - This directory contains a Keras based model. - `constants.py` — defines constants of the model - `model.py`, `model_test.py` — defines DNN model using Keras- `local_runner.py`, `kubeflow_runner.py` — define runners for each orchestration engine You might notice that there are some files with `_test.py` in their name. These are unit tests of the pipeline and it is recommended to add more unit tests as you implement your own pipelines.You can run unit tests by supplying the module name of test files with `-m` flag. You can usually get a module name by deleting `.py` extension and replacing `/` with `.`. For example:
###Code
!{sys.executable} -m models.features_test
!{sys.executable} -m models.keras.model_test
###Output
_____no_output_____
###Markdown
Step 4. Run your first TFX pipelineComponents in the TFX pipeline will generate outputs for each run as [ML Metadata Artifacts](https://www.tensorflow.org/tfx/guide/mlmd), and they need to be stored somewhere. You can use any storage which the KFP cluster can access, and for this example we will use Google Cloud Storage (GCS). A default GCS bucket should have been created automatically. Its name will be `-kubeflowpipelines-default`. Let's upload our sample data to GCS bucket so that we can use it in our pipeline later.
###Code
!gsutil cp data/data.csv gs://{GOOGLE_CLOUD_PROJECT}-kubeflowpipelines-default/tfx-template/data/taxi/data.csv
###Output
_____no_output_____
###Markdown
Let's create a TFX pipeline using the `tfx pipeline create` command.>Note: When creating a pipeline for KFP, we need a container image which will be used to run our pipeline. And `skaffold` will build the image for us. Because skaffold pulls base images from the docker hub, it will take 5~10 minutes when we build the image for the first time, but it will take much less time from the second build.
###Code
!tfx pipeline create --pipeline-path=kubeflow_runner.py --endpoint={ENDPOINT} \
--build-image
###Output
_____no_output_____
###Markdown
While creating a pipeline, `Dockerfile` will be generated to build a Docker image. Don't forget to add it to the source control system (for example, git) along with other source files.NOTE: `kubeflow` will be automatically selected as an orchestration engine if `airflow` is not installed and `--engine` is not specified.Now start an execution run with the newly created pipeline using the `tfx run create` command.
###Code
!tfx run create --pipeline-name={PIPELINE_NAME} --endpoint={ENDPOINT}
###Output
_____no_output_____
###Markdown
Or, you can also run the pipeline in the KFP Dashboard. The new execution run will be listed under Experiments in the KFP Dashboard. Clicking into the experiment will allow you to monitor progress and visualize the artifacts created during the execution run. However, we recommend visiting the KFP Dashboard. You can access the KFP Dashboard from the Cloud AI Platform Pipelines menu in Google Cloud Console. Once you visit the dashboard, you will be able to find the pipeline, and access a wealth of information about the pipeline.For example, you can find your runs under the *Experiments* menu, and when you open your execution run under Experiments you can find all your artifacts from the pipeline under *Artifacts* menu.>Note: If your pipeline run fails, you can see detailed logs for each TFX component in the Experiments tab in the KFP Dashboard. One of the major sources of failure is permission related problems. Please make sure your KFP cluster has permissions to access Google Cloud APIs. This can be configured [when you create a KFP cluster in GCP](https://cloud.google.com/ai-platform/pipelines/docs/setting-up), or see [Troubleshooting document in GCP](https://cloud.google.com/ai-platform/pipelines/docs/troubleshooting). Step 5. Add components for data validation.In this step, you will add components for data validation including `StatisticsGen`, `SchemaGen`, and `ExampleValidator`. If you are interested in data validation, please see [Get started with Tensorflow Data Validation](https://www.tensorflow.org/tfx/data_validation/get_started).>**Double-click to change directory to `pipeline` and double-click again to open `pipeline.py`**. Find and uncomment the 3 lines which add `StatisticsGen`, `SchemaGen`, and `ExampleValidator` to the pipeline. (Tip: search for comments containing `TODO(step 5):`). Make sure to save `pipeline.py` after you edit it.You now need to update the existing pipeline with modified pipeline definition. Use the `tfx pipeline update` command to update your pipeline, followed by the `tfx run create` command to create a new execution run of your updated pipeline.
###Code
# Update the pipeline
!tfx pipeline update \
--pipeline-path=kubeflow_runner.py \
--endpoint={ENDPOINT}
# You can run the pipeline the same way.
!tfx run create --pipeline-name {PIPELINE_NAME} --endpoint={ENDPOINT}
###Output
_____no_output_____
###Markdown
Check pipeline outputsVisit the KFP dashboard to find pipeline outputs in the page for your pipeline run. Click the *Experiments* tab on the left, and *All runs* in the Experiments page. You should be able to find the latest run under the name of your pipeline. Step 6. Add components for training.In this step, you will add components for training and model validation including `Transform`, `Trainer`, `Resolver`, `Evaluator`, and `Pusher`.>**Double-click to open `pipeline.py`**. Find and uncomment the 5 lines which add `Transform`, `Trainer`, `Resolver`, `Evaluator` and `Pusher` to the pipeline. (Tip: search for `TODO(step 6):`)As you did before, you now need to update the existing pipeline with the modified pipeline definition. The instructions are the same as Step 5. Update the pipeline using `tfx pipeline update`, and create an execution run using `tfx run create`.
###Code
!tfx pipeline update \
--pipeline-path=kubeflow_runner.py \
--endpoint={ENDPOINT}
!tfx run create --pipeline-name {PIPELINE_NAME} --endpoint={ENDPOINT}
###Output
_____no_output_____
###Markdown
When this execution run finishes successfully, you have now created and run your first TFX pipeline in AI Platform Pipelines!**NOTE:** If we changed anything in the model code, we have to rebuild thecontainer image, too. We can trigger rebuild using `--build-image` flag in the`pipeline update` command.**NOTE:** You might have noticed that every time we create a pipeline run, every component runs again and again even though the input and the parameters were not changed.It is waste of time and resources, and you can skip those executions with pipeline caching. You can enable caching by specifying `enable_cache=True` for the `Pipeline` object in `pipeline.py`. Step 7. (*Optional*) Try BigQueryExampleGen[BigQuery](https://cloud.google.com/bigquery) is a serverless, highly scalable, and cost-effective cloud data warehouse. BigQuery can be used as a source for training examples in TFX. In this step, we will add `BigQueryExampleGen` to the pipeline.>**Double-click to open `pipeline.py`**. Comment out `CsvExampleGen` and uncomment the line which creates an instance of `BigQueryExampleGen`. You also need to uncomment the `query` argument of the `create_pipeline` function.We need to specify which GCP project to use for BigQuery, and this is done by setting `--project` in `beam_pipeline_args` when creating a pipeline.>**Double-click to open `configs.py`**. Uncomment the definition of `GOOGLE_CLOUD_REGION`, `BIG_QUERY_WITH_DIRECT_RUNNER_BEAM_PIPELINE_ARGS` and `BIG_QUERY_QUERY`. You should replace the region value in this file with the correct values for your GCP project.>**Note: You MUST set your GCP region in the `configs.py` file before proceeding.**>**Change directory one level up.** Click the name of the directory above the file list. The name of the directory is the name of the pipeline which is `my_pipeline` if you didn't change.>**Double-click to open `kubeflow_runner.py`**. Uncomment two arguments, `query` and `beam_pipeline_args`, for the `create_pipeline` function.Now the pipeline is ready to use BigQuery as an example source. Update the pipeline as before and create a new execution run as we did in step 5 and 6.
###Code
!tfx pipeline update \
--pipeline-path=kubeflow_runner.py \
--endpoint={ENDPOINT}
!tfx run create --pipeline-name {PIPELINE_NAME} --endpoint={ENDPOINT}
###Output
_____no_output_____
###Markdown
Step 8. (*Optional*) Try Dataflow with KFPSeveral [TFX Components uses Apache Beam](https://www.tensorflow.org/tfx/guide/beam) to implement data-parallel pipelines, and it means that you can distribute data processing workloads using [Google Cloud Dataflow](https://cloud.google.com/dataflow/). In this step, we will set the Kubeflow orchestrator to use dataflow as the data processing back-end for Apache Beam.>**Double-click `pipeline` to change directory, and double-click to open `configs.py`**. Uncomment the definition of `GOOGLE_CLOUD_REGION`, and `DATAFLOW_BEAM_PIPELINE_ARGS`.>**Change directory one level up.** Click the name of the directory above the file list. The name of the directory is the name of the pipeline which is `my_pipeline` if you didn't change.>**Double-click to open `kubeflow_runner.py`**. Uncomment `beam_pipeline_args`. (Also make sure to comment out current `beam_pipeline_args` that you added in Step 7.)Now the pipeline is ready to use Dataflow. Update the pipeline and create an execution run as we did in step 5 and 6.
###Code
!tfx pipeline update \
--pipeline-path=kubeflow_runner.py \
--endpoint={ENDPOINT}
!tfx run create --pipeline-name {PIPELINE_NAME} --endpoint={ENDPOINT}
###Output
_____no_output_____
###Markdown
You can find your Dataflow jobs in [Dataflow in Cloud Console](http://console.cloud.google.com/dataflow). Step 9. (*Optional*) Try Cloud AI Platform Training and Prediction with KFPTFX interoperates with several managed GCP services, such as [Cloud AI Platform for Training and Prediction](https://cloud.google.com/ai-platform/). You can set your `Trainer` component to use Cloud AI Platform Training, a managed service for training ML models. Moreover, when your model is built and ready to be served, you can *push* your model to Cloud AI Platform Prediction for serving. In this step, we will set our `Trainer` and `Pusher` component to use Cloud AI Platform services.>Before editing files, you might first have to enable *AI Platform Training & Prediction API*.>**Double-click `pipeline` to change directory, and double-click to open `configs.py`**. Uncomment the definition of `GOOGLE_CLOUD_REGION`, `GCP_AI_PLATFORM_TRAINING_ARGS` and `GCP_AI_PLATFORM_SERVING_ARGS`. We will use our custom built container image to train a model in Cloud AI Platform Training, so we should set `masterConfig.imageUri` in `GCP_AI_PLATFORM_TRAINING_ARGS` to the same value as `CUSTOM_TFX_IMAGE` above.>**Change directory one level up, and double-click to open `kubeflow_runner.py`**. Uncomment `ai_platform_training_args` and `ai_platform_serving_args`.Update the pipeline and create an execution run as we did in step 5 and 6.
###Code
!tfx pipeline update \
--pipeline-path=kubeflow_runner.py \
--endpoint={ENDPOINT}
!tfx run create --pipeline-name {PIPELINE_NAME} --endpoint={ENDPOINT}
###Output
_____no_output_____
###Markdown
Copyright 2020 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
Create a TFX pipeline using templates Note: We recommend running this tutorial on Google Cloud AI Platform Notebook. [Launch this notebook on AI Platform Notebook](https://console.cloud.google.com/mlengine/notebooks/deploy-notebook?q=download_url%3Dhttps%253A%252F%252Fraw.githubusercontent.com%252Ftensorflow%252Ftfx%252Fmaster%252Fdocs%252Ftutorials%252Ftfx%252Ftemplate.ipynb).View on TensorFlow.orgRun in Google ColabView source on GitHubDownload notebook IntroductionThis document will provide instructions to create a TensorFlow Extended (TFX) pipelineusing *templates* which are provided with TFX Python package.Many of the instructions are Linux shell commands, which will run on an AI Platform Notebooks instance. Corresponding Jupyter Notebook code cells which invoke those commands using `!` are provided.You will build a pipeline using [Taxi Trips dataset](https://data.cityofchicago.org/Transportation/Taxi-Trips/wrvz-psew)released by the City of Chicago. We strongly encourage you to try buildingyour own pipeline using your dataset by utilizing this pipeline as a baseline. Step 1. Set up your environment.AI Platform Pipelines will prepare a development environment to build a pipeline, and a Kubeflow Pipeline cluster to run the newly built pipeline.**NOTE:** To select a particular TensorFlow version, or select a GPU instance, create a TensorFlow pre-installed instance in AI Platform Notebooks.**NOTE:** There might be some errors during package installation. For example: >"ERROR: some-package 0.some_version.1 has requirement other-package!=2.0.,<3,>=1.15, but you'll have other-package 2.0.0 which is incompatible." Please ignore these errors at this moment. Install `tfx`, `kfp`, and `skaffold`, and add installation path to the `PATH` environment variable.
###Code
# Install tfx and kfp Python packages.
import sys
!{sys.executable} -m pip install --user --upgrade -q tfx==0.26.0
!{sys.executable} -m pip install --user --upgrade -q kfp==1.0.0
# Download skaffold and set it executable.
!curl -Lo skaffold https://storage.googleapis.com/skaffold/releases/latest/skaffold-linux-amd64 && chmod +x skaffold && mv skaffold /home/jupyter/.local/bin/
# Set `PATH` to include user python binary directory and a directory containing `skaffold`.
PATH=%env PATH
%env PATH={PATH}:/home/jupyter/.local/bin
###Output
_____no_output_____
###Markdown
Let's check the versions of TFX.
###Code
!python3 -c "from tfx import version ; print('TFX version: {}'.format(version.__version__))"
###Output
_____no_output_____
###Markdown
In AI Platform Pipelines, TFX is running in a hosted Kubernetes environment using [Kubeflow Pipelines](https://www.kubeflow.org/docs/pipelines/overview/pipelines-overview/).Let's set some environment variables to use Kubeflow Pipelines.First, get your GCP project ID.
###Code
# Read GCP project id from env.
shell_output=!gcloud config list --format 'value(core.project)' 2>/dev/null
GOOGLE_CLOUD_PROJECT=shell_output[0]
%env GOOGLE_CLOUD_PROJECT={GOOGLE_CLOUD_PROJECT}
print("GCP project ID:" + GOOGLE_CLOUD_PROJECT)
###Output
_____no_output_____
###Markdown
We also need to access your KFP cluster. You can access it in your Google Cloud Console under "AI Platform > Pipeline" menu. The "endpoint" of the KFP cluster can be found from the URL of the Pipelines dashboard, or you can get it from the URL of the Getting Started page where you launched this notebook. Let's create an `ENDPOINT` environment variable and set it to the KFP cluster endpoint. **ENDPOINT should contain only the hostname part of the URL.** For example, if the URL of the KFP dashboard is `https://1e9deb537390ca22-dot-asia-east1.pipelines.googleusercontent.com//start`, ENDPOINT value becomes `1e9deb537390ca22-dot-asia-east1.pipelines.googleusercontent.com`.>**NOTE: You MUST set your ENDPOINT value below.**
###Code
# This refers to the KFP cluster endpoint
ENDPOINT='' # Enter your ENDPOINT here.
if not ENDPOINT:
from absl import logging
logging.error('Set your ENDPOINT in this cell.')
###Output
_____no_output_____
###Markdown
Set the image name as `tfx-pipeline` under the current GCP project.
###Code
# Docker image name for the pipeline image.
CUSTOM_TFX_IMAGE='gcr.io/' + GOOGLE_CLOUD_PROJECT + '/tfx-pipeline'
###Output
_____no_output_____
###Markdown
And, it's done. We are ready to create a pipeline. Step 2. Copy the predefined template to your project directory.In this step, we will create a working pipeline project directory and files by copying additional files from a predefined template.You may give your pipeline a different name by changing the `PIPELINE_NAME` below. This will also become the name of the project directory where your files will be put.
###Code
PIPELINE_NAME="my_pipeline"
import os
PROJECT_DIR=os.path.join(os.path.expanduser("~"),"deployed_notebook",PIPELINE_NAME)
###Output
_____no_output_____
###Markdown
TFX includes the `taxi` template with the TFX Python package. If you are planning to solve a point-wise prediction problem, including classification and regression, this template could be used as a starting point. The `tfx template copy` CLI command copies predefined template files into your project directory.
###Code
!tfx template copy \
--pipeline-name={PIPELINE_NAME} \
--destination-path={PROJECT_DIR} \
--model=taxi
###Output
_____no_output_____
###Markdown
Change the working directory context in this notebook to the project directory.
###Code
%cd {PROJECT_DIR}
###Output
_____no_output_____
###Markdown
>NOTE: Don't forget to change directory in `File Browser` on the left by clicking into the project directory once it is created. Step 3. Browse your copied source filesThe TFX template provides basic scaffold files to build a pipeline, including Python source code, sample data, and Jupyter Notebooks to analyse the output of the pipeline. The `taxi` template uses the same *Chicago Taxi* dataset and ML model as the [Airflow Tutorial](https://www.tensorflow.org/tfx/tutorials/tfx/airflow_workshop).Here is brief introduction to each of the Python files.- `pipeline` - This directory contains the definition of the pipeline - `configs.py` — defines common constants for pipeline runners - `pipeline.py` — defines TFX components and a pipeline- `models` - This directory contains ML model definitions. - `features.py`, `features_test.py` — defines features for the model - `preprocessing.py`, `preprocessing_test.py` — defines preprocessing jobs using `tf::Transform` - `estimator` - This directory contains an Estimator based model. - `constants.py` — defines constants of the model - `model.py`, `model_test.py` — defines DNN model using TF estimator - `keras` - This directory contains a Keras based model. - `constants.py` — defines constants of the model - `model.py`, `model_test.py` — defines DNN model using Keras- `local_runner.py`, `kubeflow_runner.py` — define runners for each orchestration engine You might notice that there are some files with `_test.py` in their name. These are unit tests of the pipeline and it is recommended to add more unit tests as you implement your own pipelines.You can run unit tests by supplying the module name of test files with `-m` flag. You can usually get a module name by deleting `.py` extension and replacing `/` with `.`. For example:
###Code
!{sys.executable} -m models.features_test
!{sys.executable} -m models.keras.model_test
###Output
_____no_output_____
###Markdown
Step 4. Run your first TFX pipelineComponents in the TFX pipeline will generate outputs for each run as [ML Metadata Artifacts](https://www.tensorflow.org/tfx/guide/mlmd), and they need to be stored somewhere. You can use any storage which the KFP cluster can access, and for this example we will use Google Cloud Storage (GCS). A default GCS bucket should have been created automatically. Its name will be `-kubeflowpipelines-default`. Let's upload our sample data to GCS bucket so that we can use it in our pipeline later.
###Code
!gsutil cp data/data.csv gs://{GOOGLE_CLOUD_PROJECT}-kubeflowpipelines-default/tfx-template/data/taxi/data.csv
###Output
_____no_output_____
###Markdown
Let's create a TFX pipeline using the `tfx pipeline create` command.>Note: When creating a pipeline for KFP, we need a container image which will be used to run our pipeline. And `skaffold` will build the image for us. Because skaffold pulls base images from the docker hub, it will take 5~10 minutes when we build the image for the first time, but it will take much less time from the second build.
###Code
!tfx pipeline create \
--pipeline-path=kubeflow_runner.py \
--endpoint={ENDPOINT} \
--build-target-image={CUSTOM_TFX_IMAGE}
###Output
_____no_output_____
###Markdown
While creating a pipeline, `Dockerfile` and `build.yaml` will be generated to build a Docker image. Don't forget to add these files to the source control system (for example, git) along with other source files.A pipeline definition file for [argo](https://argoproj.github.io/argo/) will be generated, too. The name of this file is `${PIPELINE_NAME}.tar.gz`. For example, it will be `my_pipeline.tar.gz` if the name of your pipeline is `my_pipeline`. It is recommended NOT to include this pipeline definition file into source control, because it will be generated from other Python files and will be updated whenever you update the pipeline. For your convenience, this file is already listed in `.gitignore` which is generated automatically.NOTE: `kubeflow` will be automatically selected as an orchestration engine if `airflow` is not installed and `--engine` is not specified.Now start an execution run with the newly created pipeline using the `tfx run create` command.
###Code
!tfx run create --pipeline-name={PIPELINE_NAME} --endpoint={ENDPOINT}
###Output
_____no_output_____
###Markdown
Or, you can also run the pipeline in the KFP Dashboard. The new execution run will be listed under Experiments in the KFP Dashboard. Clicking into the experiment will allow you to monitor progress and visualize the artifacts created during the execution run. However, we recommend visiting the KFP Dashboard. You can access the KFP Dashboard from the Cloud AI Platform Pipelines menu in Google Cloud Console. Once you visit the dashboard, you will be able to find the pipeline, and access a wealth of information about the pipeline.For example, you can find your runs under the *Experiments* menu, and when you open your execution run under Experiments you can find all your artifacts from the pipeline under *Artifacts* menu.>Note: If your pipeline run fails, you can see detailed logs for each TFX component in the Experiments tab in the KFP Dashboard. One of the major sources of failure is permission related problems. Please make sure your KFP cluster has permissions to access Google Cloud APIs. This can be configured [when you create a KFP cluster in GCP](https://cloud.google.com/ai-platform/pipelines/docs/setting-up), or see [Troubleshooting document in GCP](https://cloud.google.com/ai-platform/pipelines/docs/troubleshooting). Step 5. Add components for data validation.In this step, you will add components for data validation including `StatisticsGen`, `SchemaGen`, and `ExampleValidator`. If you are interested in data validation, please see [Get started with Tensorflow Data Validation](https://www.tensorflow.org/tfx/data_validation/get_started).>**Double-click to change directory to `pipeline` and double-click again to open `pipeline.py`**. Find and uncomment the 3 lines which add `StatisticsGen`, `SchemaGen`, and `ExampleValidator` to the pipeline. (Tip: search for comments containing `TODO(step 5):`). Make sure to save `pipeline.py` after you edit it.You now need to update the existing pipeline with modified pipeline definition. Use the `tfx pipeline update` command to update your pipeline, followed by the `tfx run create` command to create a new execution run of your updated pipeline.
###Code
# Update the pipeline
!tfx pipeline update \
--pipeline-path=kubeflow_runner.py \
--endpoint={ENDPOINT}
# You can run the pipeline the same way.
!tfx run create --pipeline-name {PIPELINE_NAME} --endpoint={ENDPOINT}
###Output
_____no_output_____
###Markdown
Check pipeline outputs. Visit the KFP dashboard to find pipeline outputs in the page for your pipeline run. Click the *Experiments* tab on the left, and *All runs* in the Experiments page. You should be able to find the latest run under the name of your pipeline. Step 6. Add components for training. In this step, you will add components for training and model validation including `Transform`, `Trainer`, `ResolverNode`, `Evaluator`, and `Pusher`.>**Double-click to open `pipeline.py`**. Find and uncomment the 5 lines which add `Transform`, `Trainer`, `ResolverNode`, `Evaluator` and `Pusher` to the pipeline. (Tip: search for `TODO(step 6):`) As you did before, you now need to update the existing pipeline with the modified pipeline definition. The instructions are the same as Step 5. Update the pipeline using `tfx pipeline update`, and create an execution run using `tfx run create`.
###Code
!tfx pipeline update \
--pipeline-path=kubeflow_runner.py \
--endpoint={ENDPOINT}
!tfx run create --pipeline-name {PIPELINE_NAME} --endpoint={ENDPOINT}
###Output
_____no_output_____
###Markdown
When this execution run finishes successfully, you have now created and run your first TFX pipeline in AI Platform Pipelines!**NOTE:** You might have noticed that every time we create a pipeline run, every component runs again even though its inputs and parameters have not changed. It is a waste of time and resources, and you can skip those executions with pipeline caching. You can enable caching by specifying `enable_cache=True` for the `Pipeline` object in `pipeline.py` (a minimal sketch is shown after the next code cell). Step 7. (*Optional*) Try BigQueryExampleGen. [BigQuery](https://cloud.google.com/bigquery) is a serverless, highly scalable, and cost-effective cloud data warehouse. BigQuery can be used as a source for training examples in TFX. In this step, we will add `BigQueryExampleGen` to the pipeline.>**Double-click to open `pipeline.py`**. Comment out `CsvExampleGen` and uncomment the line which creates an instance of `BigQueryExampleGen`. You also need to uncomment the `query` argument of the `create_pipeline` function. We need to specify which GCP project to use for BigQuery, and this is done by setting `--project` in `beam_pipeline_args` when creating a pipeline.>**Double-click to open `configs.py`**. Uncomment the definition of `GOOGLE_CLOUD_REGION`, `BIG_QUERY_WITH_DIRECT_RUNNER_BEAM_PIPELINE_ARGS` and `BIG_QUERY_QUERY`. You should replace the region value in this file with the correct value for your GCP project.>**Note: You MUST set your GCP region in the `configs.py` file before proceeding.**>**Change directory one level up.** Click the name of the directory above the file list. The name of the directory is the name of the pipeline, which is `my_pipeline` if you didn't change it.>**Double-click to open `kubeflow_runner.py`**. Uncomment two arguments, `query` and `beam_pipeline_args`, for the `create_pipeline` function. Now the pipeline is ready to use BigQuery as an example source. Update the pipeline as before and create a new execution run as we did in steps 5 and 6.
###Code
!tfx pipeline update \
--pipeline-path=kubeflow_runner.py \
--endpoint={ENDPOINT}
!tfx run create --pipeline-name {PIPELINE_NAME} --endpoint={ENDPOINT}
###Output
_____no_output_____
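###Markdown
As mentioned in the caching note above, re-executing unchanged components can be skipped by enabling caching on the `Pipeline` object in `pipeline.py`. The following is only a minimal sketch with a hypothetical (empty) component list, not the template's actual `create_pipeline` function:
###Code
# Minimal sketch of enabling caching when constructing a TFX pipeline.
# The component list and arguments below are placeholders for illustration;
# the template's pipeline.py assembles the real components.
from tfx.orchestration import pipeline

def create_pipeline_sketch(pipeline_name, pipeline_root):
    return pipeline.Pipeline(
        pipeline_name=pipeline_name,
        pipeline_root=pipeline_root,
        components=[],        # hypothetical: the template passes the real TFX components here
        enable_cache=True,    # reuse cached results when inputs and parameters are unchanged
    )
###Output
_____no_output_____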
###Markdown
Step 8. (*Optional*) Try Dataflow with KFP. Several [TFX components use Apache Beam](https://www.tensorflow.org/tfx/guide/beam) to implement data-parallel pipelines, which means that you can distribute data processing workloads using [Google Cloud Dataflow](https://cloud.google.com/dataflow/). In this step, we will set the Kubeflow orchestrator to use Dataflow as the data processing back-end for Apache Beam.>**Double-click `pipeline` to change directory, and double-click to open `configs.py`**. Uncomment the definition of `GOOGLE_CLOUD_REGION`, and `DATAFLOW_BEAM_PIPELINE_ARGS`.>**Change directory one level up.** Click the name of the directory above the file list. The name of the directory is the name of the pipeline, which is `my_pipeline` if you didn't change it.>**Double-click to open `kubeflow_runner.py`**. Uncomment `beam_pipeline_args`. (Also make sure to comment out the current `beam_pipeline_args` that you added in Step 7.) Now the pipeline is ready to use Dataflow. Update the pipeline and create an execution run as we did in steps 5 and 6.
###Code
!tfx pipeline update \
--pipeline-path=kubeflow_runner.py \
--endpoint={ENDPOINT}
!tfx run create --pipeline-name {PIPELINE_NAME} --endpoint={ENDPOINT}
###Output
_____no_output_____
###Markdown
You can find your Dataflow jobs in [Dataflow in Cloud Console](http://console.cloud.google.com/dataflow). Step 9. (*Optional*) Try Cloud AI Platform Training and Prediction with KFP. TFX interoperates with several managed GCP services, such as [Cloud AI Platform for Training and Prediction](https://cloud.google.com/ai-platform/). You can set your `Trainer` component to use Cloud AI Platform Training, a managed service for training ML models. Moreover, when your model is built and ready to be served, you can *push* your model to Cloud AI Platform Prediction for serving. In this step, we will set our `Trainer` and `Pusher` components to use Cloud AI Platform services.>Before editing files, you might first have to enable *AI Platform Training & Prediction API*.>**Double-click `pipeline` to change directory, and double-click to open `configs.py`**. Uncomment the definition of `GOOGLE_CLOUD_REGION`, `GCP_AI_PLATFORM_TRAINING_ARGS` and `GCP_AI_PLATFORM_SERVING_ARGS`. We will use our custom-built container image to train a model in Cloud AI Platform Training, so we should set `masterConfig.imageUri` in `GCP_AI_PLATFORM_TRAINING_ARGS` to the same value as `CUSTOM_TFX_IMAGE` above.>**Change directory one level up, and double-click to open `kubeflow_runner.py`**. Uncomment `ai_platform_training_args` and `ai_platform_serving_args`. Update the pipeline and create an execution run as we did in steps 5 and 6.
###Code
!tfx pipeline update \
--pipeline-path=kubeflow_runner.py \
--endpoint={ENDPOINT}
!tfx run create --pipeline-name {PIPELINE_NAME} --endpoint={ENDPOINT}
###Output
_____no_output_____
###Markdown
Copyright © 2020 The TensorFlow Authors. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at [https://www.apache.org/licenses/LICENSE-2.0](https://www.apache.org/licenses/LICENSE-2.0). Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. Create a TFX pipeline using templates Introduction. This document will provide instructions to create a TensorFlow Extended (TFX) pipeline using *templates* which are provided with the TFX Python package. Many of the instructions are Linux shell commands, which will run on an AI Platform Notebooks instance. Corresponding Jupyter Notebook code cells which invoke those commands using `!` are provided. You will build a pipeline using the [Taxi Trips dataset](https://data.cityofchicago.org/Transportation/Taxi-Trips/wrvz-psew) released by the City of Chicago. We strongly encourage you to try building your own pipeline using your dataset by utilizing this pipeline as a baseline. Step 1. Set up your environment. AI Platform Pipelines will prepare a development environment to build a pipeline, and a Kubeflow Pipeline cluster to run the newly built pipeline.**NOTE:** To select a particular TensorFlow version, or select a GPU instance, create a TensorFlow pre-installed instance in AI Platform Notebooks.**NOTE:** There might be some errors during package installation. For example: >"ERROR: some-package 0.some_version.1 has requirement other-package!=2.0.,<3,>=1.15, but you'll have other-package 2.0.0 which is incompatible." Please ignore these errors at this moment. Install `tfx`, `kfp`, and `skaffold`, and add the installation path to the `PATH` environment variable.
###Code
# Install tfx and kfp Python packages.
import sys
!{sys.executable} -m pip install --user --upgrade -q tfx==0.21.4
!{sys.executable} -m pip install --user --upgrade -q kfp==0.4.0
# Download skaffold and set it executable.
!curl -Lo skaffold https://storage.googleapis.com/skaffold/releases/latest/skaffold-linux-amd64 && chmod +x skaffold && mv skaffold /home/jupyter/.local/bin/
# Set `PATH` to include user python binary directory and a directory containing `skaffold`.
PATH=%env PATH
%env PATH={PATH}:/home/jupyter/.local/bin
###Output
_____no_output_____
###Markdown
Let's check the versions of TFX.
###Code
!python3 -c "import tfx; print('TFX version: {}'.format(tfx.__version__))"
###Output
_____no_output_____
###Markdown
In AI Platform Pipelines, TFX is running in a hosted Kubernetes environment using [Kubeflow Pipelines](https://www.kubeflow.org/docs/pipelines/overview/pipelines-overview/).Let's set some environment variables to use Kubeflow Pipelines.First, get your GCP project ID.
###Code
# Read GCP project id from env.
shell_output=!gcloud config list --format 'value(core.project)' 2>/dev/null
GCP_PROJECT_ID=shell_output[0]
print("GCP project ID:" + GCP_PROJECT_ID)
###Output
_____no_output_____
###Markdown
We also need to access your KFP cluster. You can access it in your Google Cloud Console under "AI Platform > Pipeline" menu. The "endpoint" of the KFP cluster can be found from the URL of the Pipelines dashboard, or you can get it from the URL of the Getting Started page where you launched this notebook. Let's create an `ENDPOINT` environment variable and set it to the KFP cluster endpoint. **ENDPOINT should contain only the hostname part of the URL.** For example, if the URL of the KFP dashboard is `https://1e9deb537390ca22-dot-asia-east1.pipelines.googleusercontent.com//start`, ENDPOINT value becomes `1e9deb537390ca22-dot-asia-east1.pipelines.googleusercontent.com`.>**NOTE: You MUST set your ENDPOINT value below.**
###Code
# This refers to the KFP cluster endpoint
ENDPOINT='' # Enter your ENDPOINT here.
if not ENDPOINT:
from absl import logging
logging.error('Set your ENDPOINT in this cell.')
###Output
_____no_output_____
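###Markdown
If it helps, the hostname can also be pulled out of the dashboard URL programmatically; a small sketch using the example URL from the text above:
###Code
# Keep only the hostname part of the KFP dashboard URL, as described above.
from urllib.parse import urlparse

example_dashboard_url = 'https://1e9deb537390ca22-dot-asia-east1.pipelines.googleusercontent.com//start'
print(urlparse(example_dashboard_url).netloc)
# Expected: 1e9deb537390ca22-dot-asia-east1.pipelines.googleusercontent.com
###Output
_____no_output_____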
###Markdown
Set the image name as `tfx-pipeline` under the current GCP project.
###Code
# Docker image name for the pipeline image
CUSTOM_TFX_IMAGE='gcr.io/' + GCP_PROJECT_ID + '/tfx-pipeline'
###Output
_____no_output_____
###Markdown
And, it's done. We are ready to create a pipeline. Step 2. Copy the predefined template to your project directory.In this step, we will create a working pipeline project directory and files by copying additional files from a predefined template.You may give your pipeline a different name by changing the `PIPELINE_NAME` below. This will also become the name of the project directory where your files will be put.
###Code
PIPELINE_NAME="my_pipeline"
import os
PROJECT_DIR=os.path.join(os.path.expanduser("~"),"AIHub",PIPELINE_NAME)
###Output
_____no_output_____
###Markdown
TFX includes the `taxi` template with the TFX python package. If you are planning to solve a point-wise prediction problem, including classification and regression, this template could be used as a starting point. The `tfx template copy` CLI command copies predefined template files into your project directory.
###Code
!tfx template copy \
--pipeline-name={PIPELINE_NAME} \
--destination-path={PROJECT_DIR} \
--model=taxi
###Output
_____no_output_____
###Markdown
Change the working directory context in this notebook to the project directory.
###Code
%cd {PROJECT_DIR}
###Output
_____no_output_____
###Markdown
>NOTE: Don't forget to change directory in `File Browser` on the left by clicking into the project directory once it is created. Step 3. Browse your copied source filesThe TFX template provides basic scaffold files to build a pipeline, including Python source code, sample data, and Jupyter Notebooks to analyse the output of the pipeline. The `taxi` template uses the same *Chicago Taxi* dataset and ML model as the [Airflow Tutorial](https://www.tensorflow.org/tfx/tutorials/tfx/airflow_workshop).Here is brief introduction to each of the Python files.- `pipeline` - This directory contains the definition of the pipeline - `configs.py` — defines common constants for pipeline runners - `pipeline.py` — defines TFX components and a pipeline- `models` - This directory contains ML model definitions. - `features.py`, `features_test.py` — defines features for the model - `preprocessing.py`, `preprocessing_test.py` — defines preprocessing jobs using `tf::Transform` - `estimator` - This directory contains an Estimator based model. - `constants.py` — defines constants of the model - `model.py`, `model_test.py` — defines DNN model using TF estimator - `keras` - This directory contains a Keras based model. - `constants.py` — defines constants of the model - `model.py`, `model_test.py` — defines DNN model using Keras- `beam_dag_runner.py`, `kubeflow_dag_runner.py` — define runners for each orchestration engineList the files in the project directory:
###Code
!ls
###Output
_____no_output_____
###Markdown
You might notice that there are some files with `_test.py` in their name. These are unit tests of the pipeline and it is recommended to add more unit tests as you implement your own pipelines.You can run unit tests by supplying the module name of test files with `-m` flag. You can usually get a module name by deleting `.py` extension and replacing `/` with `.`. For example:
###Code
!{sys.executable} -m models.features_test
!{sys.executable} -m models.keras.model_test
###Output
_____no_output_____
###Markdown
Step 4. Run your first TFX pipeline. Components in the TFX pipeline will generate outputs for each run as [ML Metadata Artifacts](https://www.tensorflow.org/tfx/guide/mlmd), and they need to be stored somewhere. You can use any storage which the KFP cluster can access, and for this example we will use Google Cloud Storage (GCS). A default GCS bucket should have been created automatically. It has a name starting with **`hostedkfp-default-`**. To run this pipeline you **MUST** edit `pipeline/configs.py` to set your GCS bucket name. You can list your current GCS buckets in this GCP project using the `gsutil` command.
###Code
# You can see your buckets using `gsutil`. The following command will show bucket names without prefix and postfix.
!gsutil ls | cut -d / -f 3
###Output
_____no_output_____
###Markdown
You can also list your buckets with `gsutil ls`.>**Double-click to change directory to `pipeline` and double-click again to open `configs.py`**. Set `GCS_BUCKET_NAME` to the name of the GCS bucket without the `gs://` or `/`. For example, if `gsutil ls` displayed `gs://my-bucket`, you should set `my-bucket`, i.e. `GCS_BUCKET_NAME = 'my-bucket'`.>**NOTE: You MUST set your GCS bucket name in the `configs.py` file before proceeding.**
###Code
# Let's make sure you have set YOUR bucket name
# DO NOT edit following code. You should set your bucket name in the `pipeline/configs.py` file.
from absl import logging
try:
from pipeline import configs
import imp; imp.reload(configs)
if configs.GCS_BUCKET_NAME == 'YOUR_GCS_BUCKET_NAME':
logging.error('Set your GCS_BUCKET_NAME in the `pipeline/configs.py` file.')
except ImportError:
logging.error('Please make sure that `pipeline/configs.py` file exists.')
###Output
_____no_output_____
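###Markdown
As a convenience, you could also strip the `gs://` prefix programmatically; this is a hedged sketch (the helper below is not part of the template):
###Code
# Illustration only: turn a `gsutil ls` style URI into the bare bucket name
# expected by GCS_BUCKET_NAME in pipeline/configs.py.
def bucket_name_from_uri(uri):
    return uri.replace("gs://", "").strip("/")

print(bucket_name_from_uri("gs://my-bucket/"))  # -> my-bucket
###Output
_____no_output_____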
###Markdown
Let's create a TFX pipeline using the `tfx pipeline create` command.>Note: When creating a pipeline for KFP, we need a container image which will be used to run our pipeline. And `skaffold` will build the image for us. Because skaffold pulls base images from the docker hub, it will take 5~10 minutes when we build the image for the first time, but it will take much less time from the second build.
###Code
!tfx pipeline create \
--pipeline-path=kubeflow_dag_runner.py \
--endpoint={ENDPOINT} \
--build-target-image={CUSTOM_TFX_IMAGE}
###Output
_____no_output_____
###Markdown
While creating a pipeline, `Dockerfile` and `build.yaml` will be generated to build a Docker image. Don't forget to add these files to the source control system (for example, git) along with other source files.A pipeline definition file for [argo](https://argoproj.github.io/argo/) will be generated, too. The name of this file is `${PIPELINE_NAME}.tar.gz`. For example, it will be `my_pipeline.tar.gz` if the name of your pipeline is `my_pipeline`. It is recommended NOT to include this pipeline definition file into source control, because it will be generated from other Python files and will be updated whenever you update the pipeline. For your convenience, this file is already listed in `.gitignore` which is generated automatically.NOTE: `kubeflow` will be automatically selected as an orchestration engine if `airflow` is not installed and `--engine` is not specified.Now start an execution run with the newly created pipeline using the `tfx run create` command.
###Code
!tfx run create --pipeline-name={PIPELINE_NAME} --endpoint={ENDPOINT}
###Output
_____no_output_____
###Markdown
Or, you can also run the pipeline in the KFP Dashboard. The new execution run will be listed under Experiments in the KFP Dashboard. Clicking into the experiment will allow you to monitor progress and visualize the artifacts created during the execution run. However, we recommend visiting the KFP Dashboard. You can access the KFP Dashboard from the Cloud AI Platform Pipelines menu in Google Cloud Console. Once you visit the dashboard, you will be able to find the pipeline, and access a wealth of information about the pipeline.For example, you can find your runs under the *Experiments* menu, and when you open your execution run under Experiments you can find all your artifacts from the pipeline under *Artifacts* menu.>Note: If your pipeline run fails, you can see detailed logs for each TFX component in the Experiments tab in the KFP Dashboard. One of the major sources of failure is permission related problems. Please make sure your KFP cluster has permissions to access Google Cloud APIs. This can be configured [when you create a KFP cluster in GCP](https://cloud.google.com/ai-platform/pipelines/docs/setting-up), or see [Troubleshooting document in GCP](https://cloud.google.com/ai-platform/pipelines/docs/troubleshooting). Step 5. Add components for data validation.In this step, you will add components for data validation including `StatisticsGen`, `SchemaGen`, and `ExampleValidator`. If you are interested in data validation, please see [Get started with Tensorflow Data Validation](https://www.tensorflow.org/tfx/data_validation/get_started).>**Double-click to open `pipeline.py`**. Find and uncomment the 3 lines which add `StatisticsGen`, `SchemaGen`, and `ExampleValidator` to the pipeline. (Tip: search for comments containing `TODO(step 5):`). Make sure to save `pipeline.py` after you edit it.You now need to update the existing pipeline with modified pipeline definition. Use the `tfx pipeline update` command to update your pipeline, followed by the `tfx run create` command to create a new execution run of your updated pipeline.
###Code
# Update the pipeline
!tfx pipeline update \
--pipeline-path=kubeflow_dag_runner.py \
--endpoint={ENDPOINT}
# You can run the pipeline the same way.
!tfx run create --pipeline-name {PIPELINE_NAME} --endpoint={ENDPOINT}
###Output
_____no_output_____
###Markdown
Check pipeline outputs. Visit the KFP dashboard to find pipeline outputs in the page for your pipeline run. Click the *Experiments* tab on the left, and *All runs* in the Experiments page. You should be able to find the latest run under the name of your pipeline. Step 6. Add components for training. In this step, you will add components for training and model validation including `Transform`, `Trainer`, `ResolverNode`, `Evaluator`, and `Pusher`.>**Double-click to open `pipeline.py`**. Find and uncomment the 5 lines which add `Transform`, `Trainer`, `ResolverNode`, `Evaluator` and `Pusher` to the pipeline. (Tip: search for `TODO(step 6):`) As you did before, you now need to update the existing pipeline with the modified pipeline definition. The instructions are the same as Step 5. Update the pipeline using `tfx pipeline update`, and create an execution run using `tfx run create`.
###Code
!tfx pipeline update \
--pipeline-path=kubeflow_dag_runner.py \
--endpoint={ENDPOINT}
!tfx run create --pipeline-name {PIPELINE_NAME} --endpoint={ENDPOINT}
###Output
_____no_output_____
###Markdown
When this execution run finishes successfully, you have now created and run your first TFX pipeline in AI Platform Pipelines! Step 7. (*Optional*) Try BigQueryExampleGen[BigQuery](https://cloud.google.com/bigquery) is a serverless, highly scalable, and cost-effective cloud data warehouse. BigQuery can be used as a source for training examples in TFX. In this step, we will add `BigQueryExampleGen` to the pipeline.>**Double-click to open `pipeline.py`**. Comment out `CsvExampleGen` and uncomment the line which creates an instance of `BigQueryExampleGen`. You also need to uncomment the `query` argument of the `create_pipeline` function.We need to specify which GCP project to use for BigQuery, and this is done by setting `--project` in `beam_pipeline_args` when creating a pipeline.>**Double-click to open `configs.py`**. Uncomment the definition of `GCP_PROJECT_ID`, `GCP_REGION`, `BIG_QUERY_WITH_DIRECT_RUNNER_BEAM_PIPELINE_ARGS` and `BIG_QUERY_QUERY`. You should replace the project id and the region value in this file with the correct values for your GCP project.>**Note: You MUST set your GCP project ID and region in the `configs.py` file before proceeding.**>**Change directory one level up.** Click the name of the directory above the file list. The name of the directory is the name of the pipeline which is `my_pipeline` if you didn't change.>**Double-click to open `kubeflow_dag_runner.py`**. Uncomment two arguments, `query` and `beam_pipeline_args`, for the `create_pipeline` function.Now the pipeline is ready to use BigQuery as an example source. Update the pipeline as before and create a new execution run as we did in step 5 and 6.
###Code
!tfx pipeline update \
--pipeline-path=kubeflow_dag_runner.py \
--endpoint={ENDPOINT}
!tfx run create --pipeline-name {PIPELINE_NAME} --endpoint={ENDPOINT}
###Output
_____no_output_____
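###Markdown
For orientation, the BigQuery-related Beam arguments you uncommented in `configs.py` boil down to passing `--project`; the cell below is only an illustrative sketch with a hypothetical value, not the template's exact definition:
###Code
# Hedged sketch of the BigQuery-related Beam pipeline arguments.
# The constant name matches the one referenced above; the value is illustrative.
GCP_PROJECT_ID = 'my-gcp-project'  # hypothetical project id
BIG_QUERY_WITH_DIRECT_RUNNER_BEAM_PIPELINE_ARGS = [
    '--project=' + GCP_PROJECT_ID,
]
print(BIG_QUERY_WITH_DIRECT_RUNNER_BEAM_PIPELINE_ARGS)
###Output
_____no_output_____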
###Markdown
Step 8. (*Optional*) Try Dataflow with KFPSeveral [TFX Components uses Apache Beam](https://www.tensorflow.org/tfx/guide/beam) to implement data-parallel pipelines, and it means that you can distribute data processing workloads using [Google Cloud Dataflow](https://cloud.google.com/dataflow/). In this step, we will set the Kubeflow orchestrator to use dataflow as the data processing back-end for Apache Beam.>**Double-click `pipeline` to change directory, and double-click to open `configs.py`**. Uncomment the definition of `GCP_PROJECT_ID`, `GCP_REGION`, and `DATAFLOW_BEAM_PIPELINE_ARGS`.>**Double-click to open `pipeline.py`**. Change the value of `enable_cache` to `False`.>**Change directory one level up.** Click the name of the directory above the file list. The name of the directory is the name of the pipeline which is `my_pipeline` if you didn't change.>**Double-click to open `kubeflow_dag_runner.py`**. Uncomment `beam_pipeline_args`. (Also make sure to comment out current `beam_pipeline_args` that you added in Step 7.)Note that we deliberately disabled caching. Because we have already run the pipeline successfully, we will get cached execution result for all components if cache is enabled.Now the pipeline is ready to use Dataflow. Update the pipeline and create an execution run as we did in step 5 and 6.
###Code
!tfx pipeline update \
--pipeline-path=kubeflow_dag_runner.py \
--endpoint={ENDPOINT}
!tfx run create --pipeline-name {PIPELINE_NAME} --endpoint={ENDPOINT}
###Output
_____no_output_____
###Markdown
You can find your Dataflow jobs in [Dataflow in Cloud Console](http://console.cloud.google.com/dataflow).Please reset `enable_cache` to `True` to benefit from caching execution results.>**Double-click to open `pipeline.py`**. Reset the value of `enable_cache` to `True`. Step 9. (*Optional*) Try Cloud AI Platform Training and Prediction with KFPTFX interoperates with several managed GCP services, such as [Cloud AI Platform for Training and Prediction](https://cloud.google.com/ai-platform/). You can set your `Trainer` component to use Cloud AI Platform Training, a managed service for training ML models. Moreover, when your model is built and ready to be served, you can *push* your model to Cloud AI Platform Prediction for serving. In this step, we will set our `Trainer` and `Pusher` component to use Cloud AI Platform services.>Before editing files, you might first have to enable *AI Platform Training & Prediction API*.>**Double-click `pipeline` to change directory, and double-click to open `configs.py`**. Uncomment the definition of `GCP_PROJECT_ID`, `GCP_REGION`, `GCP_AI_PLATFORM_TRAINING_ARGS` and `GCP_AI_PLATFORM_SERVING_ARGS`. We will use our custom built container image to train a model in Cloud AI Platform Training, so we should set `masterConfig.imageUri` in `GCP_AI_PLATFORM_TRAINING_ARGS` to the same value as `CUSTOM_TFX_IMAGE` above.>**Change directory one level up, and double-click to open `kubeflow_dag_runner.py`**. Uncomment `ai_platform_training_args` and `ai_platform_serving_args`.Update the pipeline and create an execution run as we did in step 5 and 6.
###Code
!tfx pipeline update \
--pipeline-path=kubeflow_dag_runner.py \
--endpoint={ENDPOINT}
!tfx run create --pipeline-name {PIPELINE_NAME} --endpoint={ENDPOINT}
###Output
_____no_output_____
###Markdown
Copyright 2020 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
Create a TFX pipeline using templates Note: We recommend running this tutorial on Google Cloud Vertex AI Workbench. [Launch this notebook on Vertex AI Workbench](https://console.cloud.google.com/vertex-ai/workbench/deploy-notebook?q=download_url%3Dhttps%253A%252F%252Fraw.githubusercontent.com%252Ftensorflow%252Ftfx%252Fmaster%252Fdocs%252Ftutorials%252Ftfx%252Ftemplate.ipynb).View on TensorFlow.orgRun in Google ColabView source on GitHubDownload notebook IntroductionThis document will provide instructions to create a TensorFlow Extended (TFX) pipelineusing *templates* which are provided with TFX Python package.Many of the instructions are Linux shell commands, which will run on an AI Platform Notebooks instance. Corresponding Jupyter Notebook code cells which invoke those commands using `!` are provided.You will build a pipeline using [Taxi Trips dataset](https://data.cityofchicago.org/Transportation/Taxi-Trips/wrvz-psew)released by the City of Chicago. We strongly encourage you to try buildingyour own pipeline using your dataset by utilizing this pipeline as a baseline. Step 1. Set up your environment.AI Platform Pipelines will prepare a development environment to build a pipeline, and a Kubeflow Pipeline cluster to run the newly built pipeline.**NOTE:** To select a particular TensorFlow version, or select a GPU instance, create a TensorFlow pre-installed instance in AI Platform Notebooks. Install `tfx` python package with `kfp` extra requirement.
###Code
import sys
# Use the latest version of pip.
!pip install --upgrade pip
# Install tfx and kfp Python packages.
!pip install --upgrade "tfx[kfp]<2"
###Output
_____no_output_____
###Markdown
Let's check the versions of TFX.
###Code
!python3 -c "from tfx import version ; print('TFX version: {}'.format(version.__version__))"
###Output
_____no_output_____
###Markdown
In AI Platform Pipelines, TFX is running in a hosted Kubernetes environment using [Kubeflow Pipelines](https://www.kubeflow.org/docs/pipelines/overview/pipelines-overview/).Let's set some environment variables to use Kubeflow Pipelines.First, get your GCP project ID.
###Code
# Read GCP project id from env.
shell_output=!gcloud config list --format 'value(core.project)' 2>/dev/null
GOOGLE_CLOUD_PROJECT=shell_output[0]
%env GOOGLE_CLOUD_PROJECT={GOOGLE_CLOUD_PROJECT}
print("GCP project ID:" + GOOGLE_CLOUD_PROJECT)
###Output
_____no_output_____
###Markdown
We also need to access your KFP cluster. You can access it in your Google Cloud Console under "AI Platform > Pipeline" menu. The "endpoint" of the KFP cluster can be found from the URL of the Pipelines dashboard, or you can get it from the URL of the Getting Started page where you launched this notebook. Let's create an `ENDPOINT` environment variable and set it to the KFP cluster endpoint. **ENDPOINT should contain only the hostname part of the URL.** For example, if the URL of the KFP dashboard is `https://1e9deb537390ca22-dot-asia-east1.pipelines.googleusercontent.com//start`, ENDPOINT value becomes `1e9deb537390ca22-dot-asia-east1.pipelines.googleusercontent.com`.>**NOTE: You MUST set your ENDPOINT value below.**
###Code
# This refers to the KFP cluster endpoint
ENDPOINT='' # Enter your ENDPOINT here.
if not ENDPOINT:
from absl import logging
logging.error('Set your ENDPOINT in this cell.')
###Output
_____no_output_____
###Markdown
Set the image name as `tfx-pipeline` under the current GCP project.
###Code
# Docker image name for the pipeline image.
CUSTOM_TFX_IMAGE='gcr.io/' + GOOGLE_CLOUD_PROJECT + '/tfx-pipeline'
###Output
_____no_output_____
###Markdown
And, it's done. We are ready to create a pipeline. Step 2. Copy the predefined template to your project directory.In this step, we will create a working pipeline project directory and files by copying additional files from a predefined template.You may give your pipeline a different name by changing the `PIPELINE_NAME` below. This will also become the name of the project directory where your files will be put.
###Code
PIPELINE_NAME="my_pipeline"
import os
PROJECT_DIR=os.path.join(os.path.expanduser("~"),"imported",PIPELINE_NAME)
###Output
_____no_output_____
###Markdown
TFX includes the `taxi` template with the TFX python package. If you are planning to solve a point-wise prediction problem, including classification and regression, this template could be used as a starting point. The `tfx template copy` CLI command copies predefined template files into your project directory.
###Code
!tfx template copy \
--pipeline-name={PIPELINE_NAME} \
--destination-path={PROJECT_DIR} \
--model=taxi
###Output
_____no_output_____
###Markdown
Change the working directory context in this notebook to the project directory.
###Code
%cd {PROJECT_DIR}
###Output
_____no_output_____
###Markdown
>NOTE: Don't forget to change directory in `File Browser` on the left by clicking into the project directory once it is created. Step 3. Browse your copied source filesThe TFX template provides basic scaffold files to build a pipeline, including Python source code, sample data, and Jupyter Notebooks to analyse the output of the pipeline. The `taxi` template uses the same *Chicago Taxi* dataset and ML model as the [Airflow Tutorial](https://www.tensorflow.org/tfx/tutorials/tfx/airflow_workshop).Here is brief introduction to each of the Python files.- `pipeline` - This directory contains the definition of the pipeline - `configs.py` — defines common constants for pipeline runners - `pipeline.py` — defines TFX components and a pipeline- `models` - This directory contains ML model definitions. - `features.py`, `features_test.py` — defines features for the model - `preprocessing.py`, `preprocessing_test.py` — defines preprocessing jobs using `tf::Transform` - `estimator` - This directory contains an Estimator based model. - `constants.py` — defines constants of the model - `model.py`, `model_test.py` — defines DNN model using TF estimator - `keras` - This directory contains a Keras based model. - `constants.py` — defines constants of the model - `model.py`, `model_test.py` — defines DNN model using Keras- `local_runner.py`, `kubeflow_runner.py` — define runners for each orchestration engine You might notice that there are some files with `_test.py` in their name. These are unit tests of the pipeline and it is recommended to add more unit tests as you implement your own pipelines.You can run unit tests by supplying the module name of test files with `-m` flag. You can usually get a module name by deleting `.py` extension and replacing `/` with `.`. For example:
###Code
!{sys.executable} -m models.features_test
!{sys.executable} -m models.keras.model_test
###Output
_____no_output_____
###Markdown
Step 4. Run your first TFX pipeline. Components in the TFX pipeline will generate outputs for each run as [ML Metadata Artifacts](https://www.tensorflow.org/tfx/guide/mlmd), and they need to be stored somewhere. You can use any storage which the KFP cluster can access, and for this example we will use Google Cloud Storage (GCS). A default GCS bucket should have been created automatically. Its name will be `<your-project-id>-kubeflowpipelines-default`. Let's upload our sample data to the GCS bucket so that we can use it in our pipeline later.
###Code
!gsutil cp data/data.csv gs://{GOOGLE_CLOUD_PROJECT}-kubeflowpipelines-default/tfx-template/data/taxi/data.csv
###Output
_____no_output_____
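###Markdown
The default bucket follows the `<project-id>-kubeflowpipelines-default` pattern used in the command above; a small sketch that just prints the resulting data path (the pattern is assumed from that command):
###Code
# Construct the GCS data path used above from the project id defined earlier in
# this notebook (the bucket-name pattern is assumed from the gsutil command).
DATA_PATH = ('gs://{}-kubeflowpipelines-default/tfx-template/data/taxi/data.csv'
             .format(GOOGLE_CLOUD_PROJECT))
print(DATA_PATH)
###Output
_____no_output_____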
###Markdown
Let's create a TFX pipeline using the `tfx pipeline create` command.>Note: When creating a pipeline for KFP, we need a container image which will be used to run our pipeline. And `skaffold` will build the image for us. Because skaffold pulls base images from the docker hub, it will take 5~10 minutes when we build the image for the first time, but it will take much less time from the second build.
###Code
!tfx pipeline create --pipeline-path=kubeflow_runner.py --endpoint={ENDPOINT} \
--build-image
###Output
_____no_output_____
###Markdown
While creating a pipeline, `Dockerfile` will be generated to build a Docker image. Don't forget to add it to the source control system (for example, git) along with other source files.NOTE: `kubeflow` will be automatically selected as an orchestration engine if `airflow` is not installed and `--engine` is not specified.Now start an execution run with the newly created pipeline using the `tfx run create` command.
###Code
!tfx run create --pipeline-name={PIPELINE_NAME} --endpoint={ENDPOINT}
###Output
_____no_output_____
###Markdown
Or, you can also run the pipeline in the KFP Dashboard. The new execution run will be listed under Experiments in the KFP Dashboard. Clicking into the experiment will allow you to monitor progress and visualize the artifacts created during the execution run. However, we recommend visiting the KFP Dashboard. You can access the KFP Dashboard from the Cloud AI Platform Pipelines menu in Google Cloud Console. Once you visit the dashboard, you will be able to find the pipeline, and access a wealth of information about the pipeline.For example, you can find your runs under the *Experiments* menu, and when you open your execution run under Experiments you can find all your artifacts from the pipeline under *Artifacts* menu.>Note: If your pipeline run fails, you can see detailed logs for each TFX component in the Experiments tab in the KFP Dashboard. One of the major sources of failure is permission related problems. Please make sure your KFP cluster has permissions to access Google Cloud APIs. This can be configured [when you create a KFP cluster in GCP](https://cloud.google.com/ai-platform/pipelines/docs/setting-up), or see [Troubleshooting document in GCP](https://cloud.google.com/ai-platform/pipelines/docs/troubleshooting). Step 5. Add components for data validation.In this step, you will add components for data validation including `StatisticsGen`, `SchemaGen`, and `ExampleValidator`. If you are interested in data validation, please see [Get started with Tensorflow Data Validation](https://www.tensorflow.org/tfx/data_validation/get_started).>**Double-click to change directory to `pipeline` and double-click again to open `pipeline.py`**. Find and uncomment the 3 lines which add `StatisticsGen`, `SchemaGen`, and `ExampleValidator` to the pipeline. (Tip: search for comments containing `TODO(step 5):`). Make sure to save `pipeline.py` after you edit it.You now need to update the existing pipeline with modified pipeline definition. Use the `tfx pipeline update` command to update your pipeline, followed by the `tfx run create` command to create a new execution run of your updated pipeline.
###Code
# Update the pipeline
!tfx pipeline update \
--pipeline-path=kubeflow_runner.py \
--endpoint={ENDPOINT}
# You can run the pipeline the same way.
!tfx run create --pipeline-name {PIPELINE_NAME} --endpoint={ENDPOINT}
###Output
_____no_output_____
###Markdown
Check pipeline outputsVisit the KFP dashboard to find pipeline outputs in the page for your pipeline run. Click the *Experiments* tab on the left, and *All runs* in the Experiments page. You should be able to find the latest run under the name of your pipeline. Step 6. Add components for training.In this step, you will add components for training and model validation including `Transform`, `Trainer`, `Resolver`, `Evaluator`, and `Pusher`.>**Double-click to open `pipeline.py`**. Find and uncomment the 5 lines which add `Transform`, `Trainer`, `Resolver`, `Evaluator` and `Pusher` to the pipeline. (Tip: search for `TODO(step 6):`)As you did before, you now need to update the existing pipeline with the modified pipeline definition. The instructions are the same as Step 5. Update the pipeline using `tfx pipeline update`, and create an execution run using `tfx run create`.
###Code
!tfx pipeline update \
--pipeline-path=kubeflow_runner.py \
--endpoint={ENDPOINT}
!tfx run create --pipeline-name {PIPELINE_NAME} --endpoint={ENDPOINT}
###Output
_____no_output_____
###Markdown
When this execution run finishes successfully, you have now created and run your first TFX pipeline in AI Platform Pipelines!**NOTE:** If we changed anything in the model code, we have to rebuild thecontainer image, too. We can trigger rebuild using `--build-image` flag in the`pipeline update` command.**NOTE:** You might have noticed that every time we create a pipeline run, every component runs again and again even though the input and the parameters were not changed.It is waste of time and resources, and you can skip those executions with pipeline caching. You can enable caching by specifying `enable_cache=True` for the `Pipeline` object in `pipeline.py`. Step 7. (*Optional*) Try BigQueryExampleGen[BigQuery](https://cloud.google.com/bigquery) is a serverless, highly scalable, and cost-effective cloud data warehouse. BigQuery can be used as a source for training examples in TFX. In this step, we will add `BigQueryExampleGen` to the pipeline.>**Double-click to open `pipeline.py`**. Comment out `CsvExampleGen` and uncomment the line which creates an instance of `BigQueryExampleGen`. You also need to uncomment the `query` argument of the `create_pipeline` function.We need to specify which GCP project to use for BigQuery, and this is done by setting `--project` in `beam_pipeline_args` when creating a pipeline.>**Double-click to open `configs.py`**. Uncomment the definition of `GOOGLE_CLOUD_REGION`, `BIG_QUERY_WITH_DIRECT_RUNNER_BEAM_PIPELINE_ARGS` and `BIG_QUERY_QUERY`. You should replace the region value in this file with the correct values for your GCP project.>**Note: You MUST set your GCP region in the `configs.py` file before proceeding.**>**Change directory one level up.** Click the name of the directory above the file list. The name of the directory is the name of the pipeline which is `my_pipeline` if you didn't change.>**Double-click to open `kubeflow_runner.py`**. Uncomment two arguments, `query` and `beam_pipeline_args`, for the `create_pipeline` function.Now the pipeline is ready to use BigQuery as an example source. Update the pipeline as before and create a new execution run as we did in step 5 and 6.
###Code
!tfx pipeline update \
--pipeline-path=kubeflow_runner.py \
--endpoint={ENDPOINT}
!tfx run create --pipeline-name {PIPELINE_NAME} --endpoint={ENDPOINT}
###Output
_____no_output_____
###Markdown
Step 8. (*Optional*) Try Dataflow with KFPSeveral [TFX Components uses Apache Beam](https://www.tensorflow.org/tfx/guide/beam) to implement data-parallel pipelines, and it means that you can distribute data processing workloads using [Google Cloud Dataflow](https://cloud.google.com/dataflow/). In this step, we will set the Kubeflow orchestrator to use dataflow as the data processing back-end for Apache Beam.>**Double-click `pipeline` to change directory, and double-click to open `configs.py`**. Uncomment the definition of `GOOGLE_CLOUD_REGION`, and `DATAFLOW_BEAM_PIPELINE_ARGS`.>**Change directory one level up.** Click the name of the directory above the file list. The name of the directory is the name of the pipeline which is `my_pipeline` if you didn't change.>**Double-click to open `kubeflow_runner.py`**. Uncomment `beam_pipeline_args`. (Also make sure to comment out current `beam_pipeline_args` that you added in Step 7.)Now the pipeline is ready to use Dataflow. Update the pipeline and create an execution run as we did in step 5 and 6.
###Code
!tfx pipeline update \
--pipeline-path=kubeflow_runner.py \
--endpoint={ENDPOINT}
!tfx run create --pipeline-name {PIPELINE_NAME} --endpoint={ENDPOINT}
###Output
_____no_output_____
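###Markdown
For reference, the `DATAFLOW_BEAM_PIPELINE_ARGS` you uncommented typically select the Dataflow runner plus your project, region, and a temp location. The sketch below uses standard Beam flags with hypothetical values and is not the template's exact definition:
###Code
# Hedged sketch of Dataflow-oriented Beam pipeline arguments (illustrative values).
GOOGLE_CLOUD_PROJECT = 'my-gcp-project'   # hypothetical
GOOGLE_CLOUD_REGION = 'us-central1'       # hypothetical
DATAFLOW_BEAM_PIPELINE_ARGS = [
    '--runner=DataflowRunner',            # hand the Beam work off to Dataflow
    '--project=' + GOOGLE_CLOUD_PROJECT,
    '--region=' + GOOGLE_CLOUD_REGION,
    '--temp_location=gs://' + GOOGLE_CLOUD_PROJECT + '-kubeflowpipelines-default/tmp',
]
print(DATAFLOW_BEAM_PIPELINE_ARGS)
###Output
_____no_output_____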
###Markdown
You can find your Dataflow jobs in [Dataflow in Cloud Console](http://console.cloud.google.com/dataflow). Step 9. (*Optional*) Try Cloud AI Platform Training and Prediction with KFPTFX interoperates with several managed GCP services, such as [Cloud AI Platform for Training and Prediction](https://cloud.google.com/ai-platform/). You can set your `Trainer` component to use Cloud AI Platform Training, a managed service for training ML models. Moreover, when your model is built and ready to be served, you can *push* your model to Cloud AI Platform Prediction for serving. In this step, we will set our `Trainer` and `Pusher` component to use Cloud AI Platform services.>Before editing files, you might first have to enable *AI Platform Training & Prediction API*.>**Double-click `pipeline` to change directory, and double-click to open `configs.py`**. Uncomment the definition of `GOOGLE_CLOUD_REGION`, `GCP_AI_PLATFORM_TRAINING_ARGS` and `GCP_AI_PLATFORM_SERVING_ARGS`. We will use our custom built container image to train a model in Cloud AI Platform Training, so we should set `masterConfig.imageUri` in `GCP_AI_PLATFORM_TRAINING_ARGS` to the same value as `CUSTOM_TFX_IMAGE` above.>**Change directory one level up, and double-click to open `kubeflow_runner.py`**. Uncomment `ai_platform_training_args` and `ai_platform_serving_args`.Update the pipeline and create an execution run as we did in step 5 and 6.
###Code
!tfx pipeline update \
--pipeline-path=kubeflow_runner.py \
--endpoint={ENDPOINT}
!tfx run create --pipeline-name {PIPELINE_NAME} --endpoint={ENDPOINT}
###Output
_____no_output_____
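###Markdown
The essential edit described above is pointing `masterConfig.imageUri` at the custom TFX image. The dictionary below is only a hedged sketch with hypothetical values; apart from `masterConfig.imageUri`, the keys shown are assumptions rather than the template's exact definition:
###Code
# Hedged sketch: masterConfig.imageUri must match the custom TFX image (per the
# instructions above). Project/region keys here are assumptions for illustration.
GOOGLE_CLOUD_PROJECT = 'my-gcp-project'   # hypothetical
GOOGLE_CLOUD_REGION = 'us-central1'       # hypothetical
CUSTOM_TFX_IMAGE = 'gcr.io/' + GOOGLE_CLOUD_PROJECT + '/tfx-pipeline'
GCP_AI_PLATFORM_TRAINING_ARGS = {
    'project': GOOGLE_CLOUD_PROJECT,                   # assumption
    'region': GOOGLE_CLOUD_REGION,                     # assumption
    'masterConfig': {'imageUri': CUSTOM_TFX_IMAGE},    # from the step above
}
print(GCP_AI_PLATFORM_TRAINING_ARGS)
###Output
_____no_output_____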
###Markdown
Copyright 2020 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
Create a TFX pipeline using templates Note: We recommend running this tutorial on Google Cloud AI Platform Notebook. [Launch this notebook on AI Platform Notebook](https://console.cloud.google.com/mlengine/notebooks/deploy-notebook?q=download_url%3Dhttps%253A%252F%252Fraw.githubusercontent.com%252Ftensorflow%252Ftfx%252Fmaster%252Fdocs%252Ftutorials%252Ftfx%252Ftemplate.ipynb).View on TensorFlow.orgRun in Google ColabView source on GitHubDownload notebook IntroductionThis document will provide instructions to create a TensorFlow Extended (TFX) pipelineusing *templates* which are provided with TFX Python package.Many of the instructions are Linux shell commands, which will run on an AI Platform Notebooks instance. Corresponding Jupyter Notebook code cells which invoke those commands using `!` are provided.You will build a pipeline using [Taxi Trips dataset](https://data.cityofchicago.org/Transportation/Taxi-Trips/wrvz-psew)released by the City of Chicago. We strongly encourage you to try buildingyour own pipeline using your dataset by utilizing this pipeline as a baseline. Step 1. Set up your environment.AI Platform Pipelines will prepare a development environment to build a pipeline, and a Kubeflow Pipeline cluster to run the newly built pipeline.**NOTE:** To select a particular TensorFlow version, or select a GPU instance, create a TensorFlow pre-installed instance in AI Platform Notebooks. Install `tfx` python package with `kfp` extra requirement.
###Code
import sys
# Use the latest version of pip.
!pip install --upgrade pip
# Install tfx and kfp Python packages.
!pip install --upgrade tfx[kfp]==0.30.0
###Output
_____no_output_____
###Markdown
Let's check the versions of TFX.
###Code
!python3 -c "from tfx import version ; print('TFX version: {}'.format(version.__version__))"
###Output
_____no_output_____
###Markdown
In AI Platform Pipelines, TFX is running in a hosted Kubernetes environment using [Kubeflow Pipelines](https://www.kubeflow.org/docs/pipelines/overview/pipelines-overview/).Let's set some environment variables to use Kubeflow Pipelines.First, get your GCP project ID.
###Code
# Read GCP project id from env.
shell_output=!gcloud config list --format 'value(core.project)' 2>/dev/null
GOOGLE_CLOUD_PROJECT=shell_output[0]
%env GOOGLE_CLOUD_PROJECT={GOOGLE_CLOUD_PROJECT}
print("GCP project ID:" + GOOGLE_CLOUD_PROJECT)
###Output
_____no_output_____
###Markdown
We also need to access your KFP cluster. You can access it in your Google Cloud Console under "AI Platform > Pipeline" menu. The "endpoint" of the KFP cluster can be found from the URL of the Pipelines dashboard, or you can get it from the URL of the Getting Started page where you launched this notebook. Let's create an `ENDPOINT` environment variable and set it to the KFP cluster endpoint. **ENDPOINT should contain only the hostname part of the URL.** For example, if the URL of the KFP dashboard is `https://1e9deb537390ca22-dot-asia-east1.pipelines.googleusercontent.com//start`, ENDPOINT value becomes `1e9deb537390ca22-dot-asia-east1.pipelines.googleusercontent.com`.>**NOTE: You MUST set your ENDPOINT value below.**
###Code
# This refers to the KFP cluster endpoint
ENDPOINT='' # Enter your ENDPOINT here.
if not ENDPOINT:
from absl import logging
logging.error('Set your ENDPOINT in this cell.')
###Output
_____no_output_____
###Markdown
Set the image name as `tfx-pipeline` under the current GCP project.
###Code
# Docker image name for the pipeline image.
CUSTOM_TFX_IMAGE='gcr.io/' + GOOGLE_CLOUD_PROJECT + '/tfx-pipeline'
###Output
_____no_output_____
###Markdown
And, it's done. We are ready to create a pipeline. Step 2. Copy the predefined template to your project directory.In this step, we will create a working pipeline project directory and files by copying additional files from a predefined template.You may give your pipeline a different name by changing the `PIPELINE_NAME` below. This will also become the name of the project directory where your files will be put.
###Code
PIPELINE_NAME="my_pipeline"
import os
PROJECT_DIR=os.path.join(os.path.expanduser("~"),"imported",PIPELINE_NAME)
###Output
_____no_output_____
###Markdown
TFX includes the `taxi` template with the TFX python package. If you are planning to solve a point-wise prediction problem, including classification and regression, this template could be used as a starting point. The `tfx template copy` CLI command copies predefined template files into your project directory.
###Code
!tfx template copy \
--pipeline-name={PIPELINE_NAME} \
--destination-path={PROJECT_DIR} \
--model=taxi
###Output
_____no_output_____
###Markdown
Change the working directory context in this notebook to the project directory.
###Code
%cd {PROJECT_DIR}
###Output
_____no_output_____
###Markdown
>NOTE: Don't forget to change directory in `File Browser` on the left by clicking into the project directory once it is created. Step 3. Browse your copied source filesThe TFX template provides basic scaffold files to build a pipeline, including Python source code, sample data, and Jupyter Notebooks to analyse the output of the pipeline. The `taxi` template uses the same *Chicago Taxi* dataset and ML model as the [Airflow Tutorial](https://www.tensorflow.org/tfx/tutorials/tfx/airflow_workshop).Here is brief introduction to each of the Python files.- `pipeline` - This directory contains the definition of the pipeline - `configs.py` — defines common constants for pipeline runners - `pipeline.py` — defines TFX components and a pipeline- `models` - This directory contains ML model definitions. - `features.py`, `features_test.py` — defines features for the model - `preprocessing.py`, `preprocessing_test.py` — defines preprocessing jobs using `tf::Transform` - `estimator` - This directory contains an Estimator based model. - `constants.py` — defines constants of the model - `model.py`, `model_test.py` — defines DNN model using TF estimator - `keras` - This directory contains a Keras based model. - `constants.py` — defines constants of the model - `model.py`, `model_test.py` — defines DNN model using Keras- `local_runner.py`, `kubeflow_runner.py` — define runners for each orchestration engine You might notice that there are some files with `_test.py` in their name. These are unit tests of the pipeline and it is recommended to add more unit tests as you implement your own pipelines.You can run unit tests by supplying the module name of test files with `-m` flag. You can usually get a module name by deleting `.py` extension and replacing `/` with `.`. For example:
###Code
!{sys.executable} -m models.features_test
!{sys.executable} -m models.keras.model_test
###Output
_____no_output_____
###Markdown
Step 4. Run your first TFX pipeline. Components in the TFX pipeline will generate outputs for each run as [ML Metadata Artifacts](https://www.tensorflow.org/tfx/guide/mlmd), and they need to be stored somewhere. You can use any storage which the KFP cluster can access, and for this example we will use Google Cloud Storage (GCS). A default GCS bucket should have been created automatically. Its name will be `<your-project-id>-kubeflowpipelines-default`. Let's upload our sample data to the GCS bucket so that we can use it in our pipeline later.
###Code
!gsutil cp data/data.csv gs://{GOOGLE_CLOUD_PROJECT}-kubeflowpipelines-default/tfx-template/data/taxi/data.csv
###Output
_____no_output_____
###Markdown
Let's create a TFX pipeline using the `tfx pipeline create` command.>Note: When creating a pipeline for KFP, we need a container image which will be used to run our pipeline. And `skaffold` will build the image for us. Because skaffold pulls base images from the docker hub, it will take 5~10 minutes when we build the image for the first time, but it will take much less time from the second build.
###Code
!tfx pipeline create --pipeline-path=kubeflow_runner.py --endpoint={ENDPOINT} \
--build-image
###Output
_____no_output_____
###Markdown
While creating a pipeline, `Dockerfile` will be generated to build a Docker image. Don't forget to add it to the source control system (for example, git) along with other source files.NOTE: `kubeflow` will be automatically selected as an orchestration engine if `airflow` is not installed and `--engine` is not specified.Now start an execution run with the newly created pipeline using the `tfx run create` command.
###Code
!tfx run create --pipeline-name={PIPELINE_NAME} --endpoint={ENDPOINT}
###Output
_____no_output_____
###Markdown
Or, you can also run the pipeline in the KFP Dashboard. The new execution run will be listed under Experiments in the KFP Dashboard. Clicking into the experiment will allow you to monitor progress and visualize the artifacts created during the execution run. However, we recommend visiting the KFP Dashboard. You can access the KFP Dashboard from the Cloud AI Platform Pipelines menu in Google Cloud Console. Once you visit the dashboard, you will be able to find the pipeline, and access a wealth of information about the pipeline.For example, you can find your runs under the *Experiments* menu, and when you open your execution run under Experiments you can find all your artifacts from the pipeline under *Artifacts* menu.>Note: If your pipeline run fails, you can see detailed logs for each TFX component in the Experiments tab in the KFP Dashboard. One of the major sources of failure is permission related problems. Please make sure your KFP cluster has permissions to access Google Cloud APIs. This can be configured [when you create a KFP cluster in GCP](https://cloud.google.com/ai-platform/pipelines/docs/setting-up), or see [Troubleshooting document in GCP](https://cloud.google.com/ai-platform/pipelines/docs/troubleshooting). Step 5. Add components for data validation.In this step, you will add components for data validation including `StatisticsGen`, `SchemaGen`, and `ExampleValidator`. If you are interested in data validation, please see [Get started with Tensorflow Data Validation](https://www.tensorflow.org/tfx/data_validation/get_started).>**Double-click to change directory to `pipeline` and double-click again to open `pipeline.py`**. Find and uncomment the 3 lines which add `StatisticsGen`, `SchemaGen`, and `ExampleValidator` to the pipeline. (Tip: search for comments containing `TODO(step 5):`). Make sure to save `pipeline.py` after you edit it.You now need to update the existing pipeline with modified pipeline definition. Use the `tfx pipeline update` command to update your pipeline, followed by the `tfx run create` command to create a new execution run of your updated pipeline.
###Code
# Update the pipeline
!tfx pipeline update \
--pipeline-path=kubeflow_runner.py \
--endpoint={ENDPOINT}
# You can run the pipeline the same way.
!tfx run create --pipeline-name {PIPELINE_NAME} --endpoint={ENDPOINT}
###Output
_____no_output_____
###Markdown
Check pipeline outputsVisit the KFP dashboard to find pipeline outputs in the page for your pipeline run. Click the *Experiments* tab on the left, and *All runs* in the Experiments page. You should be able to find the latest run under the name of your pipeline. Step 6. Add components for training.In this step, you will add components for training and model validation including `Transform`, `Trainer`, `Resolver`, `Evaluator`, and `Pusher`.>**Double-click to open `pipeline.py`**. Find and uncomment the 5 lines which add `Transform`, `Trainer`, `Resolver`, `Evaluator` and `Pusher` to the pipeline. (Tip: search for `TODO(step 6):`)As you did before, you now need to update the existing pipeline with the modified pipeline definition. The instructions are the same as Step 5. Update the pipeline using `tfx pipeline update`, and create an execution run using `tfx run create`.
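The `Evaluator` you are about to enable is driven by a TensorFlow Model Analysis `EvalConfig`. The sketch below is an illustrative config rather than the template's exact one; the label key and the accuracy bound are placeholder values.
```python
import tensorflow_model_analysis as tfma

# Require a minimum binary accuracy before a model can be blessed and pushed.
eval_config = tfma.EvalConfig(
    model_specs=[tfma.ModelSpec(label_key='tips')],  # label name is illustrative
    slicing_specs=[tfma.SlicingSpec()],              # overall (unsliced) metrics
    metrics_specs=[
        tfma.MetricsSpec(metrics=[
            tfma.MetricConfig(
                class_name='BinaryAccuracy',
                threshold=tfma.MetricThreshold(
                    value_threshold=tfma.GenericValueThreshold(
                        lower_bound={'value': 0.6}))),
        ])
    ])
```
A config of this shape is passed to the `Evaluator` component, and models that miss the threshold are not blessed for pushing.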
###Code
!tfx pipeline update \
--pipeline-path=kubeflow_runner.py \
--endpoint={ENDPOINT}
!tfx run create --pipeline-name {PIPELINE_NAME} --endpoint={ENDPOINT}
###Output
_____no_output_____
###Markdown
When this execution run finishes successfully, you have now created and run your first TFX pipeline in AI Platform Pipelines!**NOTE:** If we changed anything in the model code, we have to rebuild the container image, too. We can trigger a rebuild using the `--build-image` flag in the `pipeline update` command.**NOTE:** You might have noticed that every time we create a pipeline run, every component runs again even though its inputs and parameters have not changed. This is a waste of time and resources, and you can skip those executions with pipeline caching. You can enable caching by specifying `enable_cache=True` for the `Pipeline` object in `pipeline.py`. Step 7. (*Optional*) Try BigQueryExampleGen[BigQuery](https://cloud.google.com/bigquery) is a serverless, highly scalable, and cost-effective cloud data warehouse. BigQuery can be used as a source for training examples in TFX. In this step, we will add `BigQueryExampleGen` to the pipeline.>**Double-click to open `pipeline.py`**. Comment out `CsvExampleGen` and uncomment the line which creates an instance of `BigQueryExampleGen`. You also need to uncomment the `query` argument of the `create_pipeline` function.We need to specify which GCP project to use for BigQuery, and this is done by setting `--project` in `beam_pipeline_args` when creating a pipeline.>**Double-click to open `configs.py`**. Uncomment the definition of `GOOGLE_CLOUD_REGION`, `BIG_QUERY_WITH_DIRECT_RUNNER_BEAM_PIPELINE_ARGS` and `BIG_QUERY_QUERY`. You should replace the region value in this file with the correct value for your GCP project.>**Note: You MUST set your GCP region in the `configs.py` file before proceeding.**>**Change directory one level up.** Click the name of the directory above the file list. The name of the directory is the name of the pipeline, which is `my_pipeline` if you didn't change it.>**Double-click to open `kubeflow_runner.py`**. Uncomment two arguments, `query` and `beam_pipeline_args`, for the `create_pipeline` function.Now the pipeline is ready to use BigQuery as an example source. Update the pipeline as before and create a new execution run as we did in steps 5 and 6.
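As a rough guide, the Beam options you just uncommented have the shape sketched below. The temp location is a placeholder; point it at a bucket your project can write to.
```python
# Rough shape of the Beam options used to query BigQuery with the default
# DirectRunner. GOOGLE_CLOUD_PROJECT is defined in configs.py; the temp
# location below is a placeholder.
BIG_QUERY_WITH_DIRECT_RUNNER_BEAM_PIPELINE_ARGS = [
    '--project=' + GOOGLE_CLOUD_PROJECT,
    '--temp_location=gs://' + GOOGLE_CLOUD_PROJECT + '-kubeflowpipelines-default/tmp',
]
```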
###Code
!tfx pipeline update \
--pipeline-path=kubeflow_runner.py \
--endpoint={ENDPOINT}
!tfx run create --pipeline-name {PIPELINE_NAME} --endpoint={ENDPOINT}
###Output
_____no_output_____
###Markdown
Step 8. (*Optional*) Try Dataflow with KFP Several [TFX components use Apache Beam](https://www.tensorflow.org/tfx/guide/beam) to implement data-parallel pipelines, which means that you can distribute data processing workloads using [Google Cloud Dataflow](https://cloud.google.com/dataflow/). In this step, we will set the Kubeflow orchestrator to use Dataflow as the data processing back-end for Apache Beam.>**Double-click `pipeline` to change directory, and double-click to open `configs.py`**. Uncomment the definition of `GOOGLE_CLOUD_REGION` and `DATAFLOW_BEAM_PIPELINE_ARGS`.>**Change directory one level up.** Click the name of the directory above the file list. The name of the directory is the name of the pipeline, which is `my_pipeline` if you didn't change it.>**Double-click to open `kubeflow_runner.py`**. Uncomment `beam_pipeline_args`. (Also make sure to comment out the current `beam_pipeline_args` that you added in Step 7.)Now the pipeline is ready to use Dataflow. Update the pipeline and create an execution run as we did in steps 5 and 6.
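For reference, `DATAFLOW_BEAM_PIPELINE_ARGS` ends up looking roughly like the sketch below; the region and temp location are placeholders to replace with your own values.
```python
# Illustrative Dataflow options for the Beam-based components.
DATAFLOW_BEAM_PIPELINE_ARGS = [
    '--project=' + GOOGLE_CLOUD_PROJECT,
    '--runner=DataflowRunner',          # run Beam jobs on Dataflow instead of locally
    '--region=us-central1',             # placeholder: use your own region
    '--temp_location=gs://' + GOOGLE_CLOUD_PROJECT + '-kubeflowpipelines-default/tmp',
]
```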
###Code
!tfx pipeline update \
--pipeline-path=kubeflow_runner.py \
--endpoint={ENDPOINT}
!tfx run create --pipeline-name {PIPELINE_NAME} --endpoint={ENDPOINT}
###Output
_____no_output_____
###Markdown
You can find your Dataflow jobs in [Dataflow in Cloud Console](http://console.cloud.google.com/dataflow). Step 9. (*Optional*) Try Cloud AI Platform Training and Prediction with KFPTFX interoperates with several managed GCP services, such as [Cloud AI Platform for Training and Prediction](https://cloud.google.com/ai-platform/). You can set your `Trainer` component to use Cloud AI Platform Training, a managed service for training ML models. Moreover, when your model is built and ready to be served, you can *push* your model to Cloud AI Platform Prediction for serving. In this step, we will set our `Trainer` and `Pusher` component to use Cloud AI Platform services.>Before editing files, you might first have to enable *AI Platform Training & Prediction API*.>**Double-click `pipeline` to change directory, and double-click to open `configs.py`**. Uncomment the definition of `GOOGLE_CLOUD_REGION`, `GCP_AI_PLATFORM_TRAINING_ARGS` and `GCP_AI_PLATFORM_SERVING_ARGS`. We will use our custom built container image to train a model in Cloud AI Platform Training, so we should set `masterConfig.imageUri` in `GCP_AI_PLATFORM_TRAINING_ARGS` to the same value as `CUSTOM_TFX_IMAGE` above.>**Change directory one level up, and double-click to open `kubeflow_runner.py`**. Uncomment `ai_platform_training_args` and `ai_platform_serving_args`.Update the pipeline and create an execution run as we did in step 5 and 6.
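As a rough guide, the uncommented constants in `configs.py` take the shape below. The region and model name are placeholders; the image URI should match the `CUSTOM_TFX_IMAGE` you built earlier.
```python
# Rough shape of the Cloud AI Platform settings in configs.py (illustrative values).
GOOGLE_CLOUD_REGION = 'us-central1'   # placeholder: use your own region

GCP_AI_PLATFORM_TRAINING_ARGS = {
    'project': GOOGLE_CLOUD_PROJECT,
    'region': GOOGLE_CLOUD_REGION,
    'masterConfig': {
        # Must point at the custom pipeline image (same value as CUSTOM_TFX_IMAGE).
        'imageUri': 'gcr.io/' + GOOGLE_CLOUD_PROJECT + '/tfx-pipeline',
    },
}

GCP_AI_PLATFORM_SERVING_ARGS = {
    'model_name': 'my_pipeline',       # placeholder model name
    'project_id': GOOGLE_CLOUD_PROJECT,
    'regions': [GOOGLE_CLOUD_REGION],
}
```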
###Code
!tfx pipeline update \
--pipeline-path=kubeflow_runner.py \
--endpoint={ENDPOINT}
!tfx run create --pipeline-name {PIPELINE_NAME} --endpoint={ENDPOINT}
###Output
_____no_output_____
###Markdown
Copyright © 2020 The TensorFlow Authors. Licensed under the Apache License, Version 2.0 (the \"License\");you may not use this file except in compliance with the License.You may obtain a copy of the License at [https://www.apache.org/licenses/LICENSE-2.0](https://www.apache.org/licenses/LICENSE-2.0)Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. Create a TFX pipeline using templates IntroductionThis document will provide instructions to create a TensorFlow Extended (TFX) pipelineusing *templates* which are provided with TFX Python package.Many of instructions are Linux shell commands, which will run on an AI Platform Notebooks instance. Corresponding Jupyter Notebook code cells which invoke those commands using `!` are provided.You will build a pipeline using [Taxi Trips dataset](https://data.cityofchicago.org/Transportation/Taxi-Trips/wrvz-psew)released by the City of Chicago. We strongly encourage you to try buildingyour own pipeline using your dataset by utilizing this pipeline as a baseline. Step 1. Set up your environment.AI Platform Pipelines will prepare a development environment to build a pipeline, and a Kubeflow Pipeline cluster to run the newly built pipeline.**NOTE:** To select a particular TensorFlow version, or select a GPU instance, create a TensorFlow pre-installed instance in AI Platform Notebooks.**NOTE:** There might be some errors during package installation. For example: >"ERROR: some-package 0.some_version.1 has requirement other-package!=2.0.,<3,>=1.15, but you'll have other-package 2.0.0 which is incompatible." Please ignore these errors at this moment. Install `tfx`, `kfp`, and `skaffold`, and add installation path to the `PATH` environment variable.
###Code
# Install tfx and kfp Python packages.
!pip install --user --upgrade -q tfx==0.21.0
!pip install --user --upgrade -q kfp==0.2.5
# Download skaffold and set it executable.
!curl -Lo skaffold https://storage.googleapis.com/skaffold/releases/latest/skaffold-linux-amd64 && chmod +x skaffold && mv skaffold /home/jupyter/.local/bin/
# Set `PATH` to include user python binary directory and a directory containing `skaffold`.
PATH=%env PATH
%env PATH={PATH}:/home/jupyter/.local/bin
###Output
_____no_output_____
###Markdown
Let's check the versions of TFX.
###Code
!python3 -c "import tfx; print('TFX version: {}'.format(tfx.__version__))"
###Output
_____no_output_____
###Markdown
In AI Platform Pipelines, TFX is running in a hosted Kubernetes environment using [Kubeflow Pipelines](https://www.kubeflow.org/docs/pipelines/overview/pipelines-overview/).Let's set some environment variables to use Kubeflow Pipelines.First, get your GCP project ID.
###Code
# Read GCP project id from env.
shell_output=!gcloud config list --format 'value(core.project)' 2>/dev/null
GCP_PROJECT_ID=shell_output[0]
print("GCP project ID:" + GCP_PROJECT_ID)
###Output
_____no_output_____
###Markdown
We also need to access your KFP cluster. You can access it in your Google Cloud Console under "AI Platform > Pipeline" menu. The "endpoint" of the KFP cluster can be found from the URL of the Pipelines dashboard, or you can get it from the URL of the Getting Started page where you launched this notebook. Let's create an `ENDPOINT` environment variable and set it to the KFP cluster endpoint. **ENDPOINT should contain only the hostname part of the URL.** For example, if the URL of the KFP dashboard is `https://1e9deb537390ca22-dot-asia-east1.pipelines.googleusercontent.com//start`, ENDPOINT value becomes `1e9deb537390ca22-dot-asia-east1.pipelines.googleusercontent.com`.>**NOTE: You MUST set your ENDPOINT value below.**
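If you prefer not to copy the hostname by hand, a small helper like the one below (not part of the template) extracts it from the full dashboard URL:
```python
from urllib.parse import urlparse

def endpoint_from_url(kfp_dashboard_url: str) -> str:
    """Return only the hostname part of a KFP dashboard URL."""
    return urlparse(kfp_dashboard_url).netloc

# Example:
# endpoint_from_url('https://1e9deb537390ca22-dot-asia-east1.pipelines.googleusercontent.com//start')
# -> '1e9deb537390ca22-dot-asia-east1.pipelines.googleusercontent.com'
```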
###Code
# This refers to the KFP cluster endpoint
ENDPOINT='' # Enter your ENDPOINT here.
if not ENDPOINT:
from absl import logging
logging.error('Set your ENDPOINT in this cell.')
###Output
_____no_output_____
###Markdown
Set the image name as `tfx-pipeline` under the current GCP project.
###Code
# Docker image name for the pipeline image
CUSTOM_TFX_IMAGE='gcr.io/' + GCP_PROJECT_ID + '/tfx-pipeline'
###Output
_____no_output_____
###Markdown
And, it's done. We are ready to create a pipeline. Step 2. Copy predefined template to your project directory.In this step, we will create a working pipeline project directory and files by copying additional files from a predefined template.You may give your pipeline a different name by changing the `PIPELINE_NAME` below. This will also become the name of the project directory where your files will be put.
###Code
PIPELINE_NAME="my_pipeline"
import os
PROJECT_DIR=os.path.join(os.path.expanduser("~"),"AIHub",PIPELINE_NAME)
###Output
_____no_output_____
###Markdown
TFX includes the `taxi` template with the TFX python package. If you are planning to solve a point-wise prediction problem, including classification and regression, this template could be used as a starting point. The `tfx template copy` CLI command copies predefined template files into your project directory.
###Code
!tfx template copy \
--pipeline_name={PIPELINE_NAME} \
--destination_path={PROJECT_DIR} \
--model=taxi
###Output
_____no_output_____
###Markdown
Change the working directory context in this notebook to the project directory.
###Code
%cd {PROJECT_DIR}
###Output
_____no_output_____
###Markdown
>NOTE: Don't forget to change directory in `File Browser` on the left by clicking into the project directory once it is created. Step 3. Browse your copied source files The TFX template provides basic scaffold files to build a pipeline, including Python source code, sample data, and Jupyter Notebooks to analyse the output of the pipeline. The `taxi` template uses the same *Chicago Taxi* dataset and ML model as the [Airflow Tutorial](https://www.tensorflow.org/tfx/tutorials/tfx/airflow_workshop). Here is a brief introduction to each of the Python files.
- configs.py: defines common constants for pipeline runners.
- pipeline.py: defines TFX components and a pipeline.
- beam_dag_runner.py / kubeflow_dag_runner.py: define runners for each orchestration engine. Since you are using Kubeflow you will not use the Beam orchestrator.
- features.py / features_test.py: defines and tests features for the model.
- hparams.py: defines hyperparameters of the model.
- preprocessing.py / preprocessing_test.py: defines preprocessing jobs using tf::Transform.
- model.py / model_test.py: defines a DNN model using TF estimator.
List the files in the project directory:
###Code
!ls
###Output
_____no_output_____
###Markdown
You might notice that there are some files with `_test.py` in their name. These are unit tests of the pipeline and it is recommended to add more unit tests as you implement your own pipelines.You can run unit tests simply by supplying test files to the `python` binary. For example:
###Code
!python3 features_test.py
###Output
_____no_output_____
###Markdown
Step 4. Run your first TFX pipelineComponents in the TFX pipeline will generate outputs for each run as [ML Metadata Artifacts](https://www.tensorflow.org/tfx/guide/mlmd), and they need to be stored somewhere. You can use any storage which the KFP cluster can access, and for this example we will use Google Cloud Storage (GCS). A default GCS bucket should have been created automatically. It has a name starting with **`hostedkfp-default-`**.To run this pipeline you **MUST** edit `configs.py` to set your GCS bucket name. You can list your current GCS buckets in this GCP project using the `gsutil` command.
###Code
# You can see your buckets using `gsutil`. Following command will show bucket names without prefix and postfix.
!gsutil ls | cut -d / -f 3
###Output
_____no_output_____
###Markdown
You can also run `gsutil ls` in a terminal to list the buckets with their full `gs://` URIs.>**Double-click to open `configs.py`**. Set `GCS_BUCKET_NAME` to the name of the GCS bucket without the `gs://` or `/`. For example, if `gsutil ls` displayed `gs://my-bucket`, you should set `my-bucket`.```GCS_BUCKET_NAME = 'my-bucket'```>**NOTE: You MUST set your GCS bucket name in the `configs.py` file before proceeding.** Let's create a TFX pipeline using the `tfx pipeline create` command.>Note: When creating a pipeline for KFP, we need a container image which will be used to run our pipeline, and `skaffold` will build that image for us. Because skaffold pulls base images from Docker Hub, the first build takes 5~10 minutes, but subsequent builds take much less time.
###Code
!tfx pipeline create \
--pipeline_path=kubeflow_dag_runner.py \
--endpoint={ENDPOINT} \
--build_target_image={CUSTOM_TFX_IMAGE}
###Output
_____no_output_____
###Markdown
While creating a pipeline, `Dockerfile` and `build.yaml` will be generated to build a Docker image. Don't forget to add these files to the source control system (for example, git) along with other source files.A pipeline definition file for [argo](https://argoproj.github.io/argo/) will be generated, too. The name of this file is `${PIPELINE_NAME}.tar.gz`. For example, it will be `my_pipeline.tar.gz` if the name of your pipeline is `my_pipeline`. It is recommended NOT to include this pipeline definition file into source control, because it will be generated from other Python files and will be updated whenever you update the pipeline. For your convenience, this file is already listed in `.gitignore` which is generated automatically.NOTE: `kubeflow` will be automatically selected as an orchestration engine if `airflow` is not installed and `--engine` is not specified.Now start an execution run with the newly created pipeline using the `tfx run create` command.
###Code
!tfx run create --pipeline_name={PIPELINE_NAME} --endpoint={ENDPOINT}
###Output
_____no_output_____
###Markdown
Or, you can also run the pipeline in the KFP Dashboard. The new execution run will be listed under Experiments in the KFP Dashboard. Clicking into the experiment will allow you to monitor progress and visualize the artifacts created during the execution run. However, we recommend visiting the KFP Dashboard. You can access the KFP Dashboard from the Cloud AI Platform Pipelines menu in Google Cloud Console. Once you visit the dashboard, you will be able to find the pipeline, and access a wealth of information about the pipeline.For example, you can find your runs under the *Experiments* menu, and when you open your execution run under Experiments you can find all your artifacts from the pipeline under *Artifacts* menu.>Note: If your pipeline run fails, you can see detailed logs for each TFX component in the Experiments tab in the KFP Dashboard. One of the major sources of failure is permission related problems. Please make sure your KFP cluster has permissions to access Google Cloud APIs. This can be configured [when you create a KFP cluster in GCP](https://cloud.google.com/ai-platform/pipelines/docs/setting-up), or see [Troubleshooting document in GCP](https://cloud.google.com/ai-platform/pipelines/docs/troubleshooting). Step 5. Add components for data validation.In this step, you will add components for data validation including `StatisticsGen`, `SchemaGen`, and `ExampleValidator`. If you are interested in data validation, please see [Get started with Tensorflow Data Validation](https://www.tensorflow.org/tfx/data_validation/get_started).>**Double-click to open `pipeline.py`**. Find and uncomment the 3 lines which add `StatisticsGen`, `SchemaGen`, and `ExampleValidator` to the pipeline. (Tip: search for comments containing `TODO(step 5):`). Make sure to save `pipeline.py` after you edit it.You now need to update the existing pipeline with modified pipeline definition. Use the `tfx pipeline update` command to update your pipeline, followed by the `tfx run create` command to create a new execution run of your updated pipeline.
###Code
# Update the pipeline
!tfx pipeline update \
--pipeline_path=kubeflow_dag_runner.py \
--endpoint={ENDPOINT}
# You can run the pipeline the same way.
!tfx run create --pipeline_name {PIPELINE_NAME} --endpoint={ENDPOINT}
###Output
_____no_output_____
###Markdown
Check pipeline outputs Visit the KFP dashboard to find pipeline outputs in the page for your pipeline run. Click the *Experiments* tab on the left, and *All runs* in the Experiments page. You should be able to find the latest run under the name of your pipeline. Step 6. Add components for training. In this step, you will add components for training and model validation including `Transform`, `Trainer`, `Evaluator`, `ModelValidator`, and `Pusher`.>**Double-click to open `pipeline.py`**. Find and uncomment the 5 lines which add `Transform`, `Trainer`, `Evaluator`, `ModelValidator` and `Pusher` to the pipeline. (Tip: search for `TODO(step 6):`)As you did before, you now need to update the existing pipeline with the modified pipeline definition. The instructions are the same as Step 5. Update the pipeline using `tfx pipeline update`, and create an execution run using `tfx run create`.
###Code
!tfx pipeline update \
--pipeline_path=kubeflow_dag_runner.py \
--endpoint={ENDPOINT}
!tfx run create --pipeline_name {PIPELINE_NAME} --endpoint={ENDPOINT}
###Output
_____no_output_____
###Markdown
When this execution run finishes successfully, you have now created and run your first TFX pipeline in AI Platform Pipelines! Step 7. (*Optional*) Try BigQueryExampleGen[BigQuery](https://cloud.google.com/bigquery) is a serverless, highly scalable, and cost-effective cloud data warehouse. BigQuery can be used as a source for training examples in TFX. In this step, we will add `BigQueryExampleGen` to the pipeline.>**Double-click to open `pipeline.py`**. Comment out `CsvExampleGen` and uncomment the line which creates an instance of `BigQueryExampleGen`. You also need to uncomment the `import` statement and the `query` argument of the `create_pipeline` function.We need to specify which GCP project to use for BigQuery, and this is done by setting `--project` in `beam_pipeline_args` when creating a pipeline.>**Double-click to open `configs.py`**. Uncomment the definition of `GCP_PROJECT_ID`, `GCP_REGION`, `BIG_QUERY_BEAM_PIPELINE_ARGS` and `BIG_QUERY_QUERY`. You should replace the project id and the region value in this file with the correct values for your GCP project.>**Note: You MUST set your GCP project ID and region in the `configs.py` file before proceeding.**>**Double-click to open `kubeflow_dag_runner.py`**. Uncomment two arguments, `query` and `beam_pipeline_args`, for the `create_pipeline` function.Now the pipeline is ready to use BigQuery as an example source. Update the pipeline as before and create a new execution run as we did in step 5 and 6.
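To give a feel for what goes into `configs.py`, here is a simplified placeholder; the template's real `BIG_QUERY_QUERY` selects many more columns, and the temp bucket below is an assumption you should replace with your own.
```python
# Simplified placeholder query against the public Chicago taxi dataset.
BIG_QUERY_QUERY = """
    SELECT trip_miles, trip_seconds, fare, tips
    FROM `bigquery-public-data.chicago_taxi_trips.taxi_trips`
    LIMIT 10000
"""

# Beam needs to know which GCP project to bill for the BigQuery job and a GCS
# location for temporary files (bucket name is a placeholder).
BIG_QUERY_BEAM_PIPELINE_ARGS = [
    '--project=' + GCP_PROJECT_ID,
    '--temp_location=gs://my-bucket/tmp',
]
```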
###Code
!tfx pipeline update \
--pipeline_path=kubeflow_dag_runner.py \
--endpoint={ENDPOINT}
!tfx run create --pipeline_name {PIPELINE_NAME} --endpoint={ENDPOINT}
###Output
_____no_output_____
###Markdown
Step 8. (*Optional*) Try Dataflow with KFP Several [TFX components use Apache Beam](https://www.tensorflow.org/tfx/guide/beam) to implement data-parallel pipelines, which means that you can distribute data processing workloads using [Google Cloud Dataflow](https://cloud.google.com/dataflow/). In this step, we will set the Kubeflow orchestrator to use Dataflow as the data processing back-end for Apache Beam.>**Double-click to open `configs.py`**. Uncomment the definition of `GCP_PROJECT_ID`, `GCP_REGION`, and `BEAM_PIPELINE_ARGS`.>**Double-click to open `kubeflow_dag_runner.py`**. Uncomment `beam_pipeline_args`. (Also make sure to comment out the current `beam_pipeline_args` that you added in Step 7.)Now the pipeline is ready to use Dataflow. Update the pipeline and create an execution run as we did in steps 5 and 6.
###Code
!tfx pipeline update \
--pipeline_path=kubeflow_dag_runner.py \
--endpoint={ENDPOINT}
!tfx run create --pipeline_name {PIPELINE_NAME} --endpoint={ENDPOINT}
###Output
_____no_output_____
###Markdown
You can find your Dataflow jobs in [Dataflow in Cloud Console](http://console.cloud.google.com/dataflow). Step 9. (*Optional*) Try Cloud AI Platform Training and Prediction with KFPTFX interoperates with several managed GCP services, such as [Cloud AI Platform for Training and Prediction](https://cloud.google.com/ai-platform/). You can set your `Trainer` component to use Cloud AI Platform Training, a managed service for training ML models. Moreover, when your model is built and ready to be served, you can *push* your model to Cloud AI Platform Prediction for serving. In this step, we will set our `Trainer` and `Pusher` component to use Cloud AI Platform services.>Before editing files, you might first have to enable *AI Platform Training & Prediction API*.>**Double-click to open `configs.py`**. Uncomment the definition of `GCP_PROJECT_ID`, `GCP_REGION`, `GCP_AI_PLATFORM_TRAINING_ARGS` and `GCP_AI_PLATFORM_SERVING_ARGS`. We will use our custom built container image to train a model in Cloud AI Platform Training, so we should set `masterConfig.imageUri` in `GCP_AI_PLATFORM_TRAINING_ARGS` to the same value as `CUSTOM_TFX_IMAGE` above.>**Double-click to open `kubeflow_dag_runner.py`**. Uncomment `ai_platform_training_args` and `ai_platform_serving_args`.Update the pipeline and create an execution run as we did in step 5 and 6.
###Code
!tfx pipeline update \
--pipeline_path=kubeflow_dag_runner.py \
--endpoint={ENDPOINT}
!tfx run create --pipeline_name {PIPELINE_NAME} --endpoint={ENDPOINT}
###Output
_____no_output_____
###Markdown
Copyright © 2020 The TensorFlow Authors. Licensed under the Apache License, Version 2.0 (the \"License\");you may not use this file except in compliance with the License.You may obtain a copy of the License at [https://www.apache.org/licenses/LICENSE-2.0](https://www.apache.org/licenses/LICENSE-2.0)Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. Create a TFX pipeline using templates IntroductionThis document will provide instructions to create a TensorFlow Extended (TFX) pipelineusing *templates* which are provided with TFX Python package.Many of the instructions are Linux shell commands, which will run on an AI Platform Notebooks instance. Corresponding Jupyter Notebook code cells which invoke those commands using `!` are provided.You will build a pipeline using [Taxi Trips dataset](https://data.cityofchicago.org/Transportation/Taxi-Trips/wrvz-psew)released by the City of Chicago. We strongly encourage you to try buildingyour own pipeline using your dataset by utilizing this pipeline as a baseline. Step 1. Set up your environment.AI Platform Pipelines will prepare a development environment to build a pipeline, and a Kubeflow Pipeline cluster to run the newly built pipeline.**NOTE:** To select a particular TensorFlow version, or select a GPU instance, create a TensorFlow pre-installed instance in AI Platform Notebooks.**NOTE:** There might be some errors during package installation. For example: >"ERROR: some-package 0.some_version.1 has requirement other-package!=2.0.,<3,>=1.15, but you'll have other-package 2.0.0 which is incompatible." Please ignore these errors at this moment. Install `tfx`, `kfp`, and `skaffold`, and add installation path to the `PATH` environment variable.
###Code
# Install tfx and kfp Python packages.
!pip install --user --upgrade -q tfx==0.21.2
!pip install --user --upgrade -q kfp==0.2.5
# Download skaffold and set it executable.
!curl -Lo skaffold https://storage.googleapis.com/skaffold/releases/latest/skaffold-linux-amd64 && chmod +x skaffold && mv skaffold /home/jupyter/.local/bin/
# Set `PATH` to include user python binary directory and a directory containing `skaffold`.
PATH=%env PATH
%env PATH={PATH}:/home/jupyter/.local/bin
###Output
_____no_output_____
###Markdown
Let's check the versions of TFX.
###Code
!python3 -c "import tfx; print('TFX version: {}'.format(tfx.__version__))"
###Output
_____no_output_____
###Markdown
In AI Platform Pipelines, TFX is running in a hosted Kubernetes environment using [Kubeflow Pipelines](https://www.kubeflow.org/docs/pipelines/overview/pipelines-overview/).Let's set some environment variables to use Kubeflow Pipelines.First, get your GCP project ID.
###Code
# Read GCP project id from env.
shell_output=!gcloud config list --format 'value(core.project)' 2>/dev/null
GCP_PROJECT_ID=shell_output[0]
print("GCP project ID:" + GCP_PROJECT_ID)
###Output
_____no_output_____
###Markdown
We also need to access your KFP cluster. You can access it in your Google Cloud Console under "AI Platform > Pipeline" menu. The "endpoint" of the KFP cluster can be found from the URL of the Pipelines dashboard, or you can get it from the URL of the Getting Started page where you launched this notebook. Let's create an `ENDPOINT` environment variable and set it to the KFP cluster endpoint. **ENDPOINT should contain only the hostname part of the URL.** For example, if the URL of the KFP dashboard is `https://1e9deb537390ca22-dot-asia-east1.pipelines.googleusercontent.com//start`, ENDPOINT value becomes `1e9deb537390ca22-dot-asia-east1.pipelines.googleusercontent.com`.>**NOTE: You MUST set your ENDPOINT value below.**
###Code
# This refers to the KFP cluster endpoint
ENDPOINT='' # Enter your ENDPOINT here.
if not ENDPOINT:
from absl import logging
logging.error('Set your ENDPOINT in this cell.')
###Output
_____no_output_____
###Markdown
Set the image name as `tfx-pipeline` under the current GCP project.
###Code
# Docker image name for the pipeline image
CUSTOM_TFX_IMAGE='gcr.io/' + GCP_PROJECT_ID + '/tfx-pipeline'
###Output
_____no_output_____
###Markdown
And, it's done. We are ready to create a pipeline. Step 2. Copy the predefined template to your project directory.In this step, we will create a working pipeline project directory and files by copying additional files from a predefined template.You may give your pipeline a different name by changing the `PIPELINE_NAME` below. This will also become the name of the project directory where your files will be put.
###Code
PIPELINE_NAME="my_pipeline"
import os
PROJECT_DIR=os.path.join(os.path.expanduser("~"),"AIHub",PIPELINE_NAME)
###Output
_____no_output_____
###Markdown
TFX includes the `taxi` template with the TFX python package. If you are planning to solve a point-wise prediction problem, including classification and regression, this template could be used as a starting point. The `tfx template copy` CLI command copies predefined template files into your project directory.
###Code
!tfx template copy \
--pipeline_name={PIPELINE_NAME} \
--destination_path={PROJECT_DIR} \
--model=taxi
###Output
_____no_output_____
###Markdown
Change the working directory context in this notebook to the project directory.
###Code
%cd {PROJECT_DIR}
###Output
_____no_output_____
###Markdown
>NOTE: Don't forget to change directory in `File Browser` on the left by clicking into the project directory once it is created. Step 3. Browse your copied source files The TFX template provides basic scaffold files to build a pipeline, including Python source code, sample data, and Jupyter Notebooks to analyse the output of the pipeline. The `taxi` template uses the same *Chicago Taxi* dataset and ML model as the [Airflow Tutorial](https://www.tensorflow.org/tfx/tutorials/tfx/airflow_workshop). Here is a brief introduction to each of the Python files.
- configs.py: defines common constants for pipeline runners.
- pipeline.py: defines TFX components and a pipeline.
- beam_dag_runner.py / kubeflow_dag_runner.py: define runners for each orchestration engine. Since you are using Kubeflow you will not use the Beam orchestrator.
- features.py / features_test.py: defines and tests features for the model.
- hparams.py: defines hyperparameters of the model.
- preprocessing.py / preprocessing_test.py: defines preprocessing jobs using tf::Transform.
- model.py / model_test.py: defines a DNN model using TF estimator.
List the files in the project directory:
###Code
!ls
###Output
_____no_output_____
###Markdown
You might notice that there are some files with `_test.py` in their name. These are unit tests of the pipeline and it is recommended to add more unit tests as you implement your own pipelines.You can run unit tests simply by supplying test files to the `python` binary. For example:
###Code
!python3 features_test.py
###Output
_____no_output_____
###Markdown
Step 4. Run your first TFX pipelineComponents in the TFX pipeline will generate outputs for each run as [ML Metadata Artifacts](https://www.tensorflow.org/tfx/guide/mlmd), and they need to be stored somewhere. You can use any storage which the KFP cluster can access, and for this example we will use Google Cloud Storage (GCS). A default GCS bucket should have been created automatically. It has a name starting with **`hostedkfp-default-`**.To run this pipeline you **MUST** edit `configs.py` to set your GCS bucket name. You can list your current GCS buckets in this GCP project using the `gsutil` command.
###Code
# You can see your buckets using `gsutil`. The following command will show bucket names without prefix and postfix.
!gsutil ls | cut -d / -f 3
###Output
_____no_output_____
###Markdown
You can also run `gsutil ls` in a terminal to list the buckets with their full `gs://` URIs.>**Double-click to open `configs.py`**. Set `GCS_BUCKET_NAME` to the name of the GCS bucket without the `gs://` or `/`. For example, if `gsutil ls` displayed `gs://my-bucket`, you should set `my-bucket`.```GCS_BUCKET_NAME = 'my-bucket'```>**NOTE: You MUST set your GCS bucket name in the `configs.py` file before proceeding.** Let's create a TFX pipeline using the `tfx pipeline create` command.>Note: When creating a pipeline for KFP, we need a container image which will be used to run our pipeline, and `skaffold` will build that image for us. Because skaffold pulls base images from Docker Hub, the first build takes 5~10 minutes, but subsequent builds take much less time.
###Code
!tfx pipeline create \
--pipeline_path=kubeflow_dag_runner.py \
--endpoint={ENDPOINT} \
--build_target_image={CUSTOM_TFX_IMAGE}
###Output
_____no_output_____
###Markdown
While creating a pipeline, `Dockerfile` and `build.yaml` will be generated to build a Docker image. Don't forget to add these files to the source control system (for example, git) along with other source files.A pipeline definition file for [argo](https://argoproj.github.io/argo/) will be generated, too. The name of this file is `${PIPELINE_NAME}.tar.gz`. For example, it will be `my_pipeline.tar.gz` if the name of your pipeline is `my_pipeline`. It is recommended NOT to include this pipeline definition file into source control, because it will be generated from other Python files and will be updated whenever you update the pipeline. For your convenience, this file is already listed in `.gitignore` which is generated automatically.NOTE: `kubeflow` will be automatically selected as an orchestration engine if `airflow` is not installed and `--engine` is not specified.Now start an execution run with the newly created pipeline using the `tfx run create` command.
###Code
!tfx run create --pipeline_name={PIPELINE_NAME} --endpoint={ENDPOINT}
###Output
_____no_output_____
###Markdown
Or, you can also run the pipeline in the KFP Dashboard. The new execution run will be listed under Experiments in the KFP Dashboard. Clicking into the experiment will allow you to monitor progress and visualize the artifacts created during the execution run. However, we recommend visiting the KFP Dashboard. You can access the KFP Dashboard from the Cloud AI Platform Pipelines menu in Google Cloud Console. Once you visit the dashboard, you will be able to find the pipeline, and access a wealth of information about the pipeline.For example, you can find your runs under the *Experiments* menu, and when you open your execution run under Experiments you can find all your artifacts from the pipeline under *Artifacts* menu.>Note: If your pipeline run fails, you can see detailed logs for each TFX component in the Experiments tab in the KFP Dashboard. One of the major sources of failure is permission related problems. Please make sure your KFP cluster has permissions to access Google Cloud APIs. This can be configured [when you create a KFP cluster in GCP](https://cloud.google.com/ai-platform/pipelines/docs/setting-up), or see [Troubleshooting document in GCP](https://cloud.google.com/ai-platform/pipelines/docs/troubleshooting). Step 5. Add components for data validation.In this step, you will add components for data validation including `StatisticsGen`, `SchemaGen`, and `ExampleValidator`. If you are interested in data validation, please see [Get started with Tensorflow Data Validation](https://www.tensorflow.org/tfx/data_validation/get_started).>**Double-click to open `pipeline.py`**. Find and uncomment the 3 lines which add `StatisticsGen`, `SchemaGen`, and `ExampleValidator` to the pipeline. (Tip: search for comments containing `TODO(step 5):`). Make sure to save `pipeline.py` after you edit it.You now need to update the existing pipeline with modified pipeline definition. Use the `tfx pipeline update` command to update your pipeline, followed by the `tfx run create` command to create a new execution run of your updated pipeline.
###Code
# Update the pipeline
!tfx pipeline update \
--pipeline_path=kubeflow_dag_runner.py \
--endpoint={ENDPOINT}
# You can run the pipeline the same way.
!tfx run create --pipeline_name {PIPELINE_NAME} --endpoint={ENDPOINT}
###Output
_____no_output_____
###Markdown
Check pipeline outputs Visit the KFP dashboard to find pipeline outputs in the page for your pipeline run. Click the *Experiments* tab on the left, and *All runs* in the Experiments page. You should be able to find the latest run under the name of your pipeline. Step 6. Add components for training. In this step, you will add components for training and model validation including `Transform`, `Trainer`, `Evaluator`, and `Pusher`.>**Double-click to open `pipeline.py`**. Find and uncomment the lines which add `Transform`, `Trainer`, `Evaluator` and `Pusher` to the pipeline. (Tip: search for `TODO(step 6):`)As you did before, you now need to update the existing pipeline with the modified pipeline definition. The instructions are the same as Step 5. Update the pipeline using `tfx pipeline update`, and create an execution run using `tfx run create`.
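For orientation, the last of those components ties training, validation, and deployment together. Below is a minimal sketch of a `Pusher` definition; `trainer`, `model_validator`, and `serving_model_dir` stand for the upstream components and paths defined elsewhere in the template, and whether the blessing comes from `ModelValidator` or `Evaluator` depends on your TFX version.
```python
from tfx.components import Pusher
from tfx.proto import pusher_pb2

# Gate deployment on the blessing produced by the validation component and
# copy the blessed model to a serving directory (placeholder path).
pusher = Pusher(
    model=trainer.outputs['model'],
    model_blessing=model_validator.outputs['blessing'],
    push_destination=pusher_pb2.PushDestination(
        filesystem=pusher_pb2.PushDestination.Filesystem(
            base_directory=serving_model_dir)))
```
The model is only copied to the serving directory when the upstream validation marks it as blessed.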
###Code
!tfx pipeline update \
--pipeline_path=kubeflow_dag_runner.py \
--endpoint={ENDPOINT}
!tfx run create --pipeline_name {PIPELINE_NAME} --endpoint={ENDPOINT}
###Output
_____no_output_____
###Markdown
When this execution run finishes successfully, you have now created and run your first TFX pipeline in AI Platform Pipelines! Step 7. (*Optional*) Try BigQueryExampleGen[BigQuery](https://cloud.google.com/bigquery) is a serverless, highly scalable, and cost-effective cloud data warehouse. BigQuery can be used as a source for training examples in TFX. In this step, we will add `BigQueryExampleGen` to the pipeline.>**Double-click to open `pipeline.py`**. Comment out `CsvExampleGen` and uncomment the line which creates an instance of `BigQueryExampleGen`. You also need to uncomment the `import` statement and the `query` argument of the `create_pipeline` function.We need to specify which GCP project to use for BigQuery, and this is done by setting `--project` in `beam_pipeline_args` when creating a pipeline.>**Double-click to open `configs.py`**. Uncomment the definition of `GCP_PROJECT_ID`, `GCP_REGION`, `BIG_QUERY_BEAM_PIPELINE_ARGS` and `BIG_QUERY_QUERY`. You should replace the project id and the region value in this file with the correct values for your GCP project.>**Note: You MUST set your GCP project ID and region in the `configs.py` file before proceeding.**>**Double-click to open `kubeflow_dag_runner.py`**. Uncomment two arguments, `query` and `beam_pipeline_args`, for the `create_pipeline` function.Now the pipeline is ready to use BigQuery as an example source. Update the pipeline as before and create a new execution run as we did in step 5 and 6.
###Code
!tfx pipeline update \
--pipeline_path=kubeflow_dag_runner.py \
--endpoint={ENDPOINT}
!tfx run create --pipeline_name {PIPELINE_NAME} --endpoint={ENDPOINT}
###Output
_____no_output_____
###Markdown
Step 8. (*Optional*) Try Dataflow with KFP Several [TFX components use Apache Beam](https://www.tensorflow.org/tfx/guide/beam) to implement data-parallel pipelines, which means that you can distribute data processing workloads using [Google Cloud Dataflow](https://cloud.google.com/dataflow/). In this step, we will set the Kubeflow orchestrator to use Dataflow as the data processing back-end for Apache Beam.>**Double-click to open `configs.py`**. Uncomment the definition of `GCP_PROJECT_ID`, `GCP_REGION`, and `BEAM_PIPELINE_ARGS`.>**Double-click to open `pipeline.py`**. Change the value of `enable_cache` to `False`.>**Double-click to open `kubeflow_dag_runner.py`**. Uncomment `beam_pipeline_args`. (Also make sure to comment out the current `beam_pipeline_args` that you added in Step 7.)Note that we deliberately disabled caching: because we have already run the pipeline successfully, every component would return its cached execution result if caching were still enabled.Now the pipeline is ready to use Dataflow. Update the pipeline and create an execution run as we did in steps 5 and 6.
###Code
!tfx pipeline update \
--pipeline_path=kubeflow_dag_runner.py \
--endpoint={ENDPOINT}
!tfx run create --pipeline_name {PIPELINE_NAME} --endpoint={ENDPOINT}
###Output
_____no_output_____
###Markdown
You can find your Dataflow jobs in [Dataflow in Cloud Console](http://console.cloud.google.com/dataflow).Please reset `enable_cache` to `True` to benefit from caching execution results.>**Double-click to open `pipeline.py`**. Reset the value of `enable_cache` to `True`. Step 9. (*Optional*) Try Cloud AI Platform Training and Prediction with KFPTFX interoperates with several managed GCP services, such as [Cloud AI Platform for Training and Prediction](https://cloud.google.com/ai-platform/). You can set your `Trainer` component to use Cloud AI Platform Training, a managed service for training ML models. Moreover, when your model is built and ready to be served, you can *push* your model to Cloud AI Platform Prediction for serving. In this step, we will set our `Trainer` and `Pusher` component to use Cloud AI Platform services.>Before editing files, you might first have to enable *AI Platform Training & Prediction API*.>**Double-click to open `configs.py`**. Uncomment the definition of `GCP_PROJECT_ID`, `GCP_REGION`, `GCP_AI_PLATFORM_TRAINING_ARGS` and `GCP_AI_PLATFORM_SERVING_ARGS`. We will use our custom built container image to train a model in Cloud AI Platform Training, so we should set `masterConfig.imageUri` in `GCP_AI_PLATFORM_TRAINING_ARGS` to the same value as `CUSTOM_TFX_IMAGE` above.>**Double-click to open `kubeflow_dag_runner.py`**. Uncomment `ai_platform_training_args` and `ai_platform_serving_args`.Update the pipeline and create an execution run as we did in step 5 and 6.
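One way to enable that API without leaving the notebook is a `gcloud` call like the one below; it assumes the `gcloud` CLI is authenticated for your project, and `ml.googleapis.com` is the service name commonly used for AI Platform Training & Prediction.
```python
# Notebook cell: enable the AI Platform Training & Prediction API.
# Assumption: gcloud is installed and authenticated for this project.
!gcloud services enable ml.googleapis.com
```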
###Code
!tfx pipeline update \
--pipeline_path=kubeflow_dag_runner.py \
--endpoint={ENDPOINT}
!tfx run create --pipeline_name {PIPELINE_NAME} --endpoint={ENDPOINT}
###Output
_____no_output_____
###Markdown
Copyright 2020 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
Create a TFX pipeline using templates Note: We recommend running this tutorial on Google Cloud AI Platform Notebook. [Launch this notebook on AI Platform Notebook](https://console.cloud.google.com/mlengine/notebooks/deploy-notebook?q=download_url%3Dhttps%253A%252F%252Fraw.githubusercontent.com%252Ftensorflow%252Ftfx%252Fmaster%252Fdocs%252Ftutorials%252Ftfx%252Ftemplate.ipynb). Introduction This document will provide instructions to create a TensorFlow Extended (TFX) pipeline using *templates* which are provided with the TFX Python package. Many of the instructions are Linux shell commands, which will run on an AI Platform Notebooks instance. Corresponding Jupyter Notebook code cells which invoke those commands using `!` are provided. You will build a pipeline using the [Taxi Trips dataset](https://data.cityofchicago.org/Transportation/Taxi-Trips/wrvz-psew) released by the City of Chicago. We strongly encourage you to try building your own pipeline using your dataset by utilizing this pipeline as a baseline. Step 1. Set up your environment. AI Platform Pipelines will prepare a development environment to build a pipeline, and a Kubeflow Pipeline cluster to run the newly built pipeline. **NOTE:** To select a particular TensorFlow version, or select a GPU instance, create a TensorFlow pre-installed instance in AI Platform Notebooks. Install the `tfx` python package with the `kfp` extra requirement.
###Code
import sys
# Use the latest version of pip.
!pip install --upgrade -q pip
# Install tfx and kfp Python packages.
!pip install --upgrade -q tfx[kfp]==0.30.0
###Output
_____no_output_____
###Markdown
Let's check the versions of TFX.
###Code
!python3 -c "from tfx import version ; print('TFX version: {}'.format(version.__version__))"
###Output
_____no_output_____
###Markdown
In AI Platform Pipelines, TFX is running in a hosted Kubernetes environment using [Kubeflow Pipelines](https://www.kubeflow.org/docs/pipelines/overview/pipelines-overview/).Let's set some environment variables to use Kubeflow Pipelines.First, get your GCP project ID.
###Code
# Read GCP project id from env.
shell_output=!gcloud config list --format 'value(core.project)' 2>/dev/null
GOOGLE_CLOUD_PROJECT=shell_output[0]
%env GOOGLE_CLOUD_PROJECT={GOOGLE_CLOUD_PROJECT}
print("GCP project ID:" + GOOGLE_CLOUD_PROJECT)
###Output
_____no_output_____
###Markdown
We also need to access your KFP cluster. You can access it in your Google Cloud Console under "AI Platform > Pipeline" menu. The "endpoint" of the KFP cluster can be found from the URL of the Pipelines dashboard, or you can get it from the URL of the Getting Started page where you launched this notebook. Let's create an `ENDPOINT` environment variable and set it to the KFP cluster endpoint. **ENDPOINT should contain only the hostname part of the URL.** For example, if the URL of the KFP dashboard is `https://1e9deb537390ca22-dot-asia-east1.pipelines.googleusercontent.com//start`, ENDPOINT value becomes `1e9deb537390ca22-dot-asia-east1.pipelines.googleusercontent.com`.>**NOTE: You MUST set your ENDPOINT value below.**
###Code
# This refers to the KFP cluster endpoint
ENDPOINT='' # Enter your ENDPOINT here.
if not ENDPOINT:
from absl import logging
logging.error('Set your ENDPOINT in this cell.')
###Output
_____no_output_____
###Markdown
Set the image name as `tfx-pipeline` under the current GCP project.
###Code
# Docker image name for the pipeline image.
CUSTOM_TFX_IMAGE='gcr.io/' + GOOGLE_CLOUD_PROJECT + '/tfx-pipeline'
###Output
_____no_output_____
###Markdown
And, it's done. We are ready to create a pipeline. Step 2. Copy the predefined template to your project directory.In this step, we will create a working pipeline project directory and files by copying additional files from a predefined template.You may give your pipeline a different name by changing the `PIPELINE_NAME` below. This will also become the name of the project directory where your files will be put.
###Code
PIPELINE_NAME="my_pipeline"
import os
PROJECT_DIR=os.path.join(os.path.expanduser("~"),"deployed_notebook",PIPELINE_NAME)
###Output
_____no_output_____
###Markdown
TFX includes the `taxi` template with the TFX python package. If you are planning to solve a point-wise prediction problem, including classification and regression, this template could be used as a starting point. The `tfx template copy` CLI command copies predefined template files into your project directory.
###Code
!tfx template copy \
--pipeline-name={PIPELINE_NAME} \
--destination-path={PROJECT_DIR} \
--model=taxi
###Output
_____no_output_____
###Markdown
Change the working directory context in this notebook to the project directory.
###Code
%cd {PROJECT_DIR}
###Output
_____no_output_____
###Markdown
>NOTE: Don't forget to change directory in `File Browser` on the left by clicking into the project directory once it is created. Step 3. Browse your copied source files The TFX template provides basic scaffold files to build a pipeline, including Python source code, sample data, and Jupyter Notebooks to analyse the output of the pipeline. The `taxi` template uses the same *Chicago Taxi* dataset and ML model as the [Airflow Tutorial](https://www.tensorflow.org/tfx/tutorials/tfx/airflow_workshop). Here is a brief introduction to each of the Python files.
- `pipeline` - This directory contains the definition of the pipeline
  - `configs.py` — defines common constants for pipeline runners
  - `pipeline.py` — defines TFX components and a pipeline
- `models` - This directory contains ML model definitions.
  - `features.py`, `features_test.py` — defines features for the model
  - `preprocessing.py`, `preprocessing_test.py` — defines preprocessing jobs using `tf::Transform`
  - `estimator` - This directory contains an Estimator based model.
    - `constants.py` — defines constants of the model
    - `model.py`, `model_test.py` — defines DNN model using TF estimator
  - `keras` - This directory contains a Keras based model.
    - `constants.py` — defines constants of the model
    - `model.py`, `model_test.py` — defines DNN model using Keras
- `local_runner.py`, `kubeflow_runner.py` — define runners for each orchestration engine
You might notice that there are some files with `_test.py` in their name. These are unit tests of the pipeline and it is recommended to add more unit tests as you implement your own pipelines. You can run unit tests by supplying the module name of test files with the `-m` flag. You can usually get a module name by deleting the `.py` extension and replacing `/` with `.`. For example:
###Code
!{sys.executable} -m models.features_test
!{sys.executable} -m models.keras.model_test
###Output
_____no_output_____
###Markdown
Step 4. Run your first TFX pipeline Components in the TFX pipeline will generate outputs for each run as [ML Metadata Artifacts](https://www.tensorflow.org/tfx/guide/mlmd), and they need to be stored somewhere. You can use any storage which the KFP cluster can access, and for this example we will use Google Cloud Storage (GCS). A default GCS bucket should have been created automatically. Its name will be your project ID followed by `-kubeflowpipelines-default`. Let's upload our sample data to the GCS bucket so that we can use it in our pipeline later.
###Code
!gsutil cp data/data.csv gs://{GOOGLE_CLOUD_PROJECT}-kubeflowpipelines-default/tfx-template/data/taxi/data.csv
###Output
_____no_output_____
###Markdown
Let's create a TFX pipeline using the `tfx pipeline create` command.>Note: When creating a pipeline for KFP, we need a container image which will be used to run our pipeline, and `skaffold` will build that image for us. Because skaffold pulls base images from Docker Hub, the first build takes 5~10 minutes, but subsequent builds take much less time.
###Code
!tfx pipeline create --pipeline-path=kubeflow_runner.py --endpoint={ENDPOINT} \
--build-image
###Output
_____no_output_____
###Markdown
While creating a pipeline, `Dockerfile` will be generated to build a Docker image. Don't forget to add it to the source control system (for example, git) along with other source files.NOTE: `kubeflow` will be automatically selected as an orchestration engine if `airflow` is not installed and `--engine` is not specified.Now start an execution run with the newly created pipeline using the `tfx run create` command.
###Code
!tfx run create --pipeline-name={PIPELINE_NAME} --endpoint={ENDPOINT}
###Output
_____no_output_____
###Markdown
Or, you can also run the pipeline in the KFP Dashboard. The new execution run will be listed under Experiments in the KFP Dashboard. Clicking into the experiment will allow you to monitor progress and visualize the artifacts created during the execution run. However, we recommend visiting the KFP Dashboard. You can access the KFP Dashboard from the Cloud AI Platform Pipelines menu in Google Cloud Console. Once you visit the dashboard, you will be able to find the pipeline, and access a wealth of information about the pipeline.For example, you can find your runs under the *Experiments* menu, and when you open your execution run under Experiments you can find all your artifacts from the pipeline under *Artifacts* menu.>Note: If your pipeline run fails, you can see detailed logs for each TFX component in the Experiments tab in the KFP Dashboard. One of the major sources of failure is permission related problems. Please make sure your KFP cluster has permissions to access Google Cloud APIs. This can be configured [when you create a KFP cluster in GCP](https://cloud.google.com/ai-platform/pipelines/docs/setting-up), or see [Troubleshooting document in GCP](https://cloud.google.com/ai-platform/pipelines/docs/troubleshooting). Step 5. Add components for data validation.In this step, you will add components for data validation including `StatisticsGen`, `SchemaGen`, and `ExampleValidator`. If you are interested in data validation, please see [Get started with Tensorflow Data Validation](https://www.tensorflow.org/tfx/data_validation/get_started).>**Double-click to change directory to `pipeline` and double-click again to open `pipeline.py`**. Find and uncomment the 3 lines which add `StatisticsGen`, `SchemaGen`, and `ExampleValidator` to the pipeline. (Tip: search for comments containing `TODO(step 5):`). Make sure to save `pipeline.py` after you edit it.You now need to update the existing pipeline with modified pipeline definition. Use the `tfx pipeline update` command to update your pipeline, followed by the `tfx run create` command to create a new execution run of your updated pipeline.
###Code
# Update the pipeline
!tfx pipeline update \
--pipeline-path=kubeflow_runner.py \
--endpoint={ENDPOINT}
# You can run the pipeline the same way.
!tfx run create --pipeline-name {PIPELINE_NAME} --endpoint={ENDPOINT}
###Output
_____no_output_____
###Markdown
Check pipeline outputsVisit the KFP dashboard to find pipeline outputs in the page for your pipeline run. Click the *Experiments* tab on the left, and *All runs* in the Experiments page. You should be able to find the latest run under the name of your pipeline. Step 6. Add components for training.In this step, you will add components for training and model validation including `Transform`, `Trainer`, `Resolver`, `Evaluator`, and `Pusher`.>**Double-click to open `pipeline.py`**. Find and uncomment the 5 lines which add `Transform`, `Trainer`, `Resolver`, `Evaluator` and `Pusher` to the pipeline. (Tip: search for `TODO(step 6):`)As you did before, you now need to update the existing pipeline with the modified pipeline definition. The instructions are the same as Step 5. Update the pipeline using `tfx pipeline update`, and create an execution run using `tfx run create`.
###Code
!tfx pipeline update \
--pipeline-path=kubeflow_runner.py \
--endpoint={ENDPOINT}
!tfx run create --pipeline-name {PIPELINE_NAME} --endpoint={ENDPOINT}
###Output
_____no_output_____
###Markdown
When this execution run finishes successfully, you have now created and run your first TFX pipeline in AI Platform Pipelines!**NOTE:** If we changed anything in the model code, we have to rebuild the container image, too. We can trigger a rebuild using the `--build-image` flag in the `pipeline update` command.**NOTE:** You might have noticed that every time we create a pipeline run, every component runs again even though its inputs and parameters have not changed. This is a waste of time and resources, and you can skip those executions with pipeline caching. You can enable caching by specifying `enable_cache=True` for the `Pipeline` object in `pipeline.py`. Step 7. (*Optional*) Try BigQueryExampleGen[BigQuery](https://cloud.google.com/bigquery) is a serverless, highly scalable, and cost-effective cloud data warehouse. BigQuery can be used as a source for training examples in TFX. In this step, we will add `BigQueryExampleGen` to the pipeline.>**Double-click to open `pipeline.py`**. Comment out `CsvExampleGen` and uncomment the line which creates an instance of `BigQueryExampleGen`. You also need to uncomment the `query` argument of the `create_pipeline` function.We need to specify which GCP project to use for BigQuery, and this is done by setting `--project` in `beam_pipeline_args` when creating a pipeline.>**Double-click to open `configs.py`**. Uncomment the definition of `GOOGLE_CLOUD_REGION`, `BIG_QUERY_WITH_DIRECT_RUNNER_BEAM_PIPELINE_ARGS` and `BIG_QUERY_QUERY`. You should replace the region value in this file with the correct value for your GCP project.>**Note: You MUST set your GCP region in the `configs.py` file before proceeding.**>**Change directory one level up.** Click the name of the directory above the file list. The name of the directory is the name of the pipeline, which is `my_pipeline` if you didn't change it.>**Double-click to open `kubeflow_runner.py`**. Uncomment two arguments, `query` and `beam_pipeline_args`, for the `create_pipeline` function.Now the pipeline is ready to use BigQuery as an example source. Update the pipeline as before and create a new execution run as we did in steps 5 and 6.
###Code
!tfx pipeline update \
--pipeline-path=kubeflow_runner.py \
--endpoint={ENDPOINT}
!tfx run create --pipeline-name {PIPELINE_NAME} --endpoint={ENDPOINT}
###Output
_____no_output_____
###Markdown
Step 8. (*Optional*) Try Dataflow with KFPSeveral [TFX components use Apache Beam](https://www.tensorflow.org/tfx/guide/beam) to implement data-parallel pipelines, which means that you can distribute data processing workloads using [Google Cloud Dataflow](https://cloud.google.com/dataflow/). In this step, we will set the Kubeflow orchestrator to use Dataflow as the data processing back-end for Apache Beam.>**Double-click `pipeline` to change directory, and double-click to open `configs.py`**. Uncomment the definition of `GOOGLE_CLOUD_REGION`, and `DATAFLOW_BEAM_PIPELINE_ARGS`.>**Change directory one level up.** Click the name of the directory above the file list. The name of the directory is the name of the pipeline which is `my_pipeline` if you didn't change it.>**Double-click to open `kubeflow_runner.py`**. Uncomment `beam_pipeline_args`. (Also make sure to comment out the current `beam_pipeline_args` that you added in Step 7.)Now the pipeline is ready to use Dataflow. Update the pipeline and create an execution run as we did in step 5 and 6.
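If you want a feel for what the uncommented `DATAFLOW_BEAM_PIPELINE_ARGS` block amounts to, it is just a list of standard Apache Beam pipeline options. The values below are placeholders and the exact flags in the template's `configs.py` may differ — treat this as a sketch, not the template's code:

```python
# Illustrative values only -- substitute your own project, region and bucket.
GOOGLE_CLOUD_PROJECT = 'my-gcp-project'   # placeholder
GOOGLE_CLOUD_REGION = 'us-central1'       # placeholder
GCS_BUCKET_NAME = GOOGLE_CLOUD_PROJECT + '-kubeflowpipelines-default'

DATAFLOW_BEAM_PIPELINE_ARGS = [
    '--project=' + GOOGLE_CLOUD_PROJECT,
    '--runner=DataflowRunner',                           # hand Beam work to Dataflow
    '--temp_location=gs://' + GCS_BUCKET_NAME + '/tmp',  # scratch space for Dataflow
    '--region=' + GOOGLE_CLOUD_REGION,
]
```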
###Code
!tfx pipeline update \
--pipeline-path=kubeflow_runner.py \
--endpoint={ENDPOINT}
!tfx run create --pipeline-name {PIPELINE_NAME} --endpoint={ENDPOINT}
###Output
_____no_output_____
###Markdown
You can find your Dataflow jobs in [Dataflow in Cloud Console](http://console.cloud.google.com/dataflow). Step 9. (*Optional*) Try Cloud AI Platform Training and Prediction with KFPTFX interoperates with several managed GCP services, such as [Cloud AI Platform for Training and Prediction](https://cloud.google.com/ai-platform/). You can set your `Trainer` component to use Cloud AI Platform Training, a managed service for training ML models. Moreover, when your model is built and ready to be served, you can *push* your model to Cloud AI Platform Prediction for serving. In this step, we will set our `Trainer` and `Pusher` components to use Cloud AI Platform services.>Before editing files, you might first have to enable the *AI Platform Training & Prediction API*.>**Double-click `pipeline` to change directory, and double-click to open `configs.py`**. Uncomment the definition of `GOOGLE_CLOUD_REGION`, `GCP_AI_PLATFORM_TRAINING_ARGS` and `GCP_AI_PLATFORM_SERVING_ARGS`. We will use our custom-built container image to train a model in Cloud AI Platform Training, so we should set `masterConfig.imageUri` in `GCP_AI_PLATFORM_TRAINING_ARGS` to the same value as `CUSTOM_TFX_IMAGE` above.>**Change directory one level up, and double-click to open `kubeflow_runner.py`**. Uncomment `ai_platform_training_args` and `ai_platform_serving_args`.Update the pipeline and create an execution run as we did in step 5 and 6.
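As a rough guide, the two uncommented dictionaries in `configs.py` follow the request formats of AI Platform Training and AI Platform Prediction. The sketch below uses placeholder values and assumed model/project names; the template's own definitions are authoritative:

```python
# Illustrative sketch only -- the uncommented block in configs.py is authoritative.
GOOGLE_CLOUD_PROJECT = 'my-gcp-project'   # placeholder
GOOGLE_CLOUD_REGION = 'us-central1'       # placeholder
CUSTOM_TFX_IMAGE = 'gcr.io/' + GOOGLE_CLOUD_PROJECT + '/tfx-pipeline'

GCP_AI_PLATFORM_TRAINING_ARGS = {
    'project': GOOGLE_CLOUD_PROJECT,
    'region': GOOGLE_CLOUD_REGION,
    # Train inside the same custom image that the pipeline itself runs in.
    'masterConfig': {'imageUri': CUSTOM_TFX_IMAGE},
}

GCP_AI_PLATFORM_SERVING_ARGS = {
    'model_name': 'my_pipeline',          # placeholder model name
    'project_id': GOOGLE_CLOUD_PROJECT,
    'regions': [GOOGLE_CLOUD_REGION],
}
```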
###Code
!tfx pipeline update \
--pipeline-path=kubeflow_runner.py \
--endpoint={ENDPOINT}
!tfx run create --pipeline-name {PIPELINE_NAME} --endpoint={ENDPOINT}
###Output
_____no_output_____
###Markdown
Copyright © 2020 The TensorFlow Authors. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at [https://www.apache.org/licenses/LICENSE-2.0](https://www.apache.org/licenses/LICENSE-2.0). Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. Create a TFX pipeline using templates IntroductionThis document will provide instructions to create a TensorFlow Extended (TFX) pipeline using *templates* which are provided with the TFX Python package.Many of the instructions are Linux shell commands, which will run on an AI Platform Notebooks instance. Corresponding Jupyter Notebook code cells which invoke those commands using `!` are provided.You will build a pipeline using the [Taxi Trips dataset](https://data.cityofchicago.org/Transportation/Taxi-Trips/wrvz-psew) released by the City of Chicago. We strongly encourage you to try building your own pipeline using your dataset by utilizing this pipeline as a baseline. Step 1. Set up your environment.AI Platform Pipelines will prepare a development environment to build a pipeline, and a Kubeflow Pipeline cluster to run the newly built pipeline.**NOTE:** To select a particular TensorFlow version, or select a GPU instance, create a TensorFlow pre-installed instance in AI Platform Notebooks.**NOTE:** There might be some errors during package installation. For example: >"ERROR: some-package 0.some_version.1 has requirement other-package!=2.0.,<3,>=1.15, but you'll have other-package 2.0.0 which is incompatible." Please ignore these errors at this moment. Install `tfx`, `kfp`, and `skaffold`, and add the installation path to the `PATH` environment variable.
###Code
# Install tfx and kfp Python packages.
import sys
!{sys.executable} -m pip install --user --upgrade -q tfx==0.22.0
!{sys.executable} -m pip install --user --upgrade -q kfp==0.5.1
# Download skaffold and set it executable.
!curl -Lo skaffold https://storage.googleapis.com/skaffold/releases/latest/skaffold-linux-amd64 && chmod +x skaffold && mv skaffold /home/jupyter/.local/bin/
# Set `PATH` to include user python binary directory and a directory containing `skaffold`.
PATH=%env PATH
%env PATH={PATH}:/home/jupyter/.local/bin
###Output
_____no_output_____
###Markdown
Let's check the version of TFX.
###Code
!python3 -c "import tfx; print('TFX version: {}'.format(tfx.__version__))"
###Output
_____no_output_____
###Markdown
In AI Platform Pipelines, TFX is running in a hosted Kubernetes environment using [Kubeflow Pipelines](https://www.kubeflow.org/docs/pipelines/overview/pipelines-overview/).Let's set some environment variables to use Kubeflow Pipelines.First, get your GCP project ID.
###Code
# Read GCP project id from env.
shell_output=!gcloud config list --format 'value(core.project)' 2>/dev/null
GOOGLE_CLOUD_PROJECT=shell_output[0]
%env GOOGLE_CLOUD_PROJECT={GOOGLE_CLOUD_PROJECT}
print("GCP project ID:" + GOOGLE_CLOUD_PROJECT)
###Output
_____no_output_____
###Markdown
We also need to access your KFP cluster. You can access it in your Google Cloud Console under "AI Platform > Pipeline" menu. The "endpoint" of the KFP cluster can be found from the URL of the Pipelines dashboard, or you can get it from the URL of the Getting Started page where you launched this notebook. Let's create an `ENDPOINT` environment variable and set it to the KFP cluster endpoint. **ENDPOINT should contain only the hostname part of the URL.** For example, if the URL of the KFP dashboard is `https://1e9deb537390ca22-dot-asia-east1.pipelines.googleusercontent.com//start`, ENDPOINT value becomes `1e9deb537390ca22-dot-asia-east1.pipelines.googleusercontent.com`.>**NOTE: You MUST set your ENDPOINT value below.**
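If you have the full dashboard URL on your clipboard, a small throwaway helper like the one below (purely illustrative, not part of the template) can strip it down to the bare hostname before you paste it into the next cell:

```python
from urllib.parse import urlparse

def endpoint_from_url(url):
    """Return only the hostname part of a KFP dashboard URL."""
    # urlparse only fills in netloc when a scheme is present, so add one if needed.
    if '://' not in url:
        url = 'https://' + url
    return urlparse(url).netloc

# Example:
# endpoint_from_url('https://1e9deb537390ca22-dot-asia-east1.pipelines.googleusercontent.com//start')
# -> '1e9deb537390ca22-dot-asia-east1.pipelines.googleusercontent.com'
```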
###Code
# This refers to the KFP cluster endpoint
ENDPOINT='' # Enter your ENDPOINT here.
if not ENDPOINT:
from absl import logging
logging.error('Set your ENDPOINT in this cell.')
###Output
_____no_output_____
###Markdown
Set the image name as `tfx-pipeline` under the current GCP project.
###Code
# Docker image name for the pipeline image.
CUSTOM_TFX_IMAGE='gcr.io/' + GOOGLE_CLOUD_PROJECT + '/tfx-pipeline'
###Output
_____no_output_____
###Markdown
And, it's done. We are ready to create a pipeline. Step 2. Copy the predefined template to your project directory.In this step, we will create a working pipeline project directory and files by copying additional files from a predefined template.You may give your pipeline a different name by changing the `PIPELINE_NAME` below. This will also become the name of the project directory where your files will be put.
###Code
PIPELINE_NAME="my_pipeline"
import os
PROJECT_DIR=os.path.join(os.path.expanduser("~"),"imported",PIPELINE_NAME)
###Output
_____no_output_____
###Markdown
TFX includes the `taxi` template with the TFX Python package. If you are planning to solve a point-wise prediction problem, including classification and regression, this template could be used as a starting point.The `tfx template copy` CLI command copies predefined template files into your project directory.
###Code
!tfx template copy \
--pipeline-name={PIPELINE_NAME} \
--destination-path={PROJECT_DIR} \
--model=taxi
###Output
_____no_output_____
###Markdown
Change the working directory context in this notebook to the project directory.
###Code
%cd {PROJECT_DIR}
###Output
_____no_output_____
###Markdown
>NOTE: Don't forget to change directory in `File Browser` on the left by clicking into the project directory once it is created. Step 3. Browse your copied source filesThe TFX template provides basic scaffold files to build a pipeline, including Python source code, sample data, and Jupyter Notebooks to analyse the output of the pipeline. The `taxi` template uses the same *Chicago Taxi* dataset and ML model as the [Airflow Tutorial](https://www.tensorflow.org/tfx/tutorials/tfx/airflow_workshop).Here is a brief introduction to each of the Python files.- `pipeline` - This directory contains the definition of the pipeline - `configs.py` — defines common constants for pipeline runners - `pipeline.py` — defines TFX components and a pipeline- `models` - This directory contains ML model definitions. - `features.py`, `features_test.py` — defines features for the model - `preprocessing.py`, `preprocessing_test.py` — defines preprocessing jobs using `tf::Transform` - `estimator` - This directory contains an Estimator based model. - `constants.py` — defines constants of the model - `model.py`, `model_test.py` — defines a DNN model using TF Estimator - `keras` - This directory contains a Keras based model. - `constants.py` — defines constants of the model - `model.py`, `model_test.py` — defines a DNN model using Keras- `beam_dag_runner.py`, `kubeflow_dag_runner.py` — define runners for each orchestration engine You might notice that there are some files with `_test.py` in their name. These are unit tests of the pipeline and it is recommended to add more unit tests as you implement your own pipelines.You can run unit tests by supplying the module name of test files with the `-m` flag. You can usually get a module name by deleting the `.py` extension and replacing `/` with `.`. For example:
###Code
!{sys.executable} -m models.features_test
!{sys.executable} -m models.keras.model_test
###Output
_____no_output_____
###Markdown
Step 4. Run your first TFX pipelineComponents in the TFX pipeline will generate outputs for each run as [ML Metadata Artifacts](https://www.tensorflow.org/tfx/guide/mlmd), and they need to be stored somewhere. You can use any storage which the KFP cluster can access, and for this example we will use Google Cloud Storage (GCS). A default GCS bucket should have been created automatically. Its name will be `<your-project-id>-kubeflowpipelines-default`. Let's upload our sample data to the GCS bucket so that we can use it in our pipeline later.
###Code
!gsutil cp data/data.csv gs://{GOOGLE_CLOUD_PROJECT}-kubeflowpipelines-default/tfx-template/data/data.csv
###Output
_____no_output_____
###Markdown
Let's create a TFX pipeline using the `tfx pipeline create` command.>Note: When creating a pipeline for KFP, we need a container image which will be used to run our pipeline. And `skaffold` will build the image for us. Because skaffold pulls base images from the docker hub, it will take 5~10 minutes when we build the image for the first time, but it will take much less time from the second build.
###Code
!tfx pipeline create \
--pipeline-path=kubeflow_dag_runner.py \
--endpoint={ENDPOINT} \
--build-target-image={CUSTOM_TFX_IMAGE}
###Output
_____no_output_____
###Markdown
While creating a pipeline, `Dockerfile` and `build.yaml` will be generated to build a Docker image. Don't forget to add these files to the source control system (for example, git) along with other source files.A pipeline definition file for [argo](https://argoproj.github.io/argo/) will be generated, too. The name of this file is `${PIPELINE_NAME}.tar.gz`. For example, it will be `my_pipeline.tar.gz` if the name of your pipeline is `my_pipeline`. It is recommended NOT to include this pipeline definition file into source control, because it will be generated from other Python files and will be updated whenever you update the pipeline. For your convenience, this file is already listed in `.gitignore` which is generated automatically.NOTE: `kubeflow` will be automatically selected as an orchestration engine if `airflow` is not installed and `--engine` is not specified.Now start an execution run with the newly created pipeline using the `tfx run create` command.
###Code
!tfx run create --pipeline-name={PIPELINE_NAME} --endpoint={ENDPOINT}
###Output
_____no_output_____
###Markdown
Or, you can also run the pipeline in the KFP Dashboard. The new execution run will be listed under Experiments in the KFP Dashboard. Clicking into the experiment will allow you to monitor progress and visualize the artifacts created during the execution run. However, we recommend visiting the KFP Dashboard. You can access the KFP Dashboard from the Cloud AI Platform Pipelines menu in Google Cloud Console. Once you visit the dashboard, you will be able to find the pipeline, and access a wealth of information about the pipeline.For example, you can find your runs under the *Experiments* menu, and when you open your execution run under Experiments you can find all your artifacts from the pipeline under the *Artifacts* menu.>Note: If your pipeline run fails, you can see detailed logs for each TFX component in the Experiments tab in the KFP Dashboard. One of the major sources of failure is permission-related problems. Please make sure your KFP cluster has permissions to access Google Cloud APIs. This can be configured [when you create a KFP cluster in GCP](https://cloud.google.com/ai-platform/pipelines/docs/setting-up), or see [Troubleshooting document in GCP](https://cloud.google.com/ai-platform/pipelines/docs/troubleshooting). Step 5. Add components for data validation.In this step, you will add components for data validation including `StatisticsGen`, `SchemaGen`, and `ExampleValidator`. If you are interested in data validation, please see [Get started with Tensorflow Data Validation](https://www.tensorflow.org/tfx/data_validation/get_started).>**Double-click to change directory to `pipeline` and double-click again to open `pipeline.py`**. Find and uncomment the 3 lines which add `StatisticsGen`, `SchemaGen`, and `ExampleValidator` to the pipeline. (Tip: search for comments containing `TODO(step 5):`). Make sure to save `pipeline.py` after you edit it.You now need to update the existing pipeline with the modified pipeline definition. Use the `tfx pipeline update` command to update your pipeline, followed by the `tfx run create` command to create a new execution run of your updated pipeline.
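For orientation, the three lines you are uncommenting feed the output of the example generator through the data-validation components. The sketch below uses the standard TFX component APIs with an assumed helper name (`add_data_validation`); the template's own code in `pipeline.py` is authoritative and may differ in detail:

```python
# Illustrative sketch only -- see the template's pipeline.py for the real wiring.
from tfx.components import ExampleValidator, SchemaGen, StatisticsGen


def add_data_validation(example_gen):
    """Rough shape of the StatisticsGen/SchemaGen/ExampleValidator wiring."""
    statistics_gen = StatisticsGen(               # dataset statistics per split
        examples=example_gen.outputs['examples'])

    schema_gen = SchemaGen(                       # infers a schema from the statistics
        statistics=statistics_gen.outputs['statistics'],
        infer_feature_shape=False)

    example_validator = ExampleValidator(         # flags anomalies against the schema
        statistics=statistics_gen.outputs['statistics'],
        schema=schema_gen.outputs['schema'])

    return [statistics_gen, schema_gen, example_validator]
```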
###Code
# Update the pipeline
!tfx pipeline update \
--pipeline-path=kubeflow_dag_runner.py \
--endpoint={ENDPOINT}
# You can run the pipeline the same way.
!tfx run create --pipeline-name {PIPELINE_NAME} --endpoint={ENDPOINT}
###Output
_____no_output_____
###Markdown
Check pipeline outputsVisit the KFP dashboard to find pipeline outputs in the page for your pipeline run. Click the *Experiments* tab on the left, and *All runs* in the Experiments page. You should be able to find the latest run under the name of your pipeline. Step 6. Add components for training.In this step, you will add components for training and model validation including `Transform`, `Trainer`, `ResolverNode`, `Evaluator`, and `Pusher`.>**Double-click to open `pipeline.py`**. Find and uncomment the 5 lines which add `Transform`, `Trainer`, `ResolverNode`, `Evaluator` and `Pusher` to the pipeline. (Tip: search for `TODO(step 6):`)As you did before, you now need to update the existing pipeline with the modified pipeline definition. The instructions are the same as Step 5. Update the pipeline using `tfx pipeline update`, and create an execution run using `tfx run create`.
###Code
!tfx pipeline update \
--pipeline-path=kubeflow_dag_runner.py \
--endpoint={ENDPOINT}
!tfx run create --pipeline-name {PIPELINE_NAME} --endpoint={ENDPOINT}
###Output
_____no_output_____
###Markdown
When this execution run finishes successfully, you have now created and run your first TFX pipeline in AI Platform Pipelines! Step 7. (*Optional*) Try BigQueryExampleGen[BigQuery](https://cloud.google.com/bigquery) is a serverless, highly scalable, and cost-effective cloud data warehouse. BigQuery can be used as a source for training examples in TFX. In this step, we will add `BigQueryExampleGen` to the pipeline.>**Double-click to open `pipeline.py`**. Comment out `CsvExampleGen` and uncomment the line which creates an instance of `BigQueryExampleGen`. You also need to uncomment the `query` argument of the `create_pipeline` function.We need to specify which GCP project to use for BigQuery, and this is done by setting `--project` in `beam_pipeline_args` when creating a pipeline.>**Double-click to open `configs.py`**. Uncomment the definition of `GOOGLE_CLOUD_REGION`, `BIG_QUERY_WITH_DIRECT_RUNNER_BEAM_PIPELINE_ARGS` and `BIG_QUERY_QUERY`. You should replace the region value in this file with the correct value for your GCP project.>**Note: You MUST set your GCP region in the `configs.py` file before proceeding.**>**Change directory one level up.** Click the name of the directory above the file list. The name of the directory is the name of the pipeline which is `my_pipeline` if you didn't change it.>**Double-click to open `kubeflow_dag_runner.py`**. Uncomment two arguments, `query` and `beam_pipeline_args`, for the `create_pipeline` function.Now the pipeline is ready to use BigQuery as an example source. Update the pipeline as before and create a new execution run as we did in step 5 and 6.
###Code
!tfx pipeline update \
--pipeline-path=kubeflow_dag_runner.py \
--endpoint={ENDPOINT}
!tfx run create --pipeline-name {PIPELINE_NAME} --endpoint={ENDPOINT}
###Output
_____no_output_____
###Markdown
Step 8. (*Optional*) Try Dataflow with KFPSeveral [TFX components use Apache Beam](https://www.tensorflow.org/tfx/guide/beam) to implement data-parallel pipelines, which means that you can distribute data processing workloads using [Google Cloud Dataflow](https://cloud.google.com/dataflow/). In this step, we will set the Kubeflow orchestrator to use Dataflow as the data processing back-end for Apache Beam.>**Double-click `pipeline` to change directory, and double-click to open `configs.py`**. Uncomment the definition of `GOOGLE_CLOUD_REGION`, and `DATAFLOW_BEAM_PIPELINE_ARGS`.>**Double-click to open `pipeline.py`**. Change the value of `enable_cache` to `False`.>**Change directory one level up.** Click the name of the directory above the file list. The name of the directory is the name of the pipeline which is `my_pipeline` if you didn't change it.>**Double-click to open `kubeflow_dag_runner.py`**. Uncomment `beam_pipeline_args`. (Also make sure to comment out the current `beam_pipeline_args` that you added in Step 7.)Note that we deliberately disabled caching. Because we have already run the pipeline successfully, we will get cached execution results for all components if caching is enabled.Now the pipeline is ready to use Dataflow. Update the pipeline and create an execution run as we did in step 5 and 6.
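For reference, `enable_cache` is simply an argument passed through to the `Pipeline` object that `create_pipeline` builds. A minimal sketch (not the template's exact code, and with assumed parameter names) of where the flag lives:

```python
# Illustrative sketch only -- the template's create_pipeline() is authoritative.
from tfx.orchestration import pipeline


def create_pipeline(pipeline_name, pipeline_root, components,
                    enable_cache=True, beam_pipeline_args=None):
    # With enable_cache=True, a component whose inputs and parameters are
    # unchanged reuses its previous outputs instead of executing again.
    return pipeline.Pipeline(
        pipeline_name=pipeline_name,
        pipeline_root=pipeline_root,
        components=components,
        enable_cache=enable_cache,
        beam_pipeline_args=beam_pipeline_args,
    )
```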
###Code
!tfx pipeline update \
--pipeline-path=kubeflow_dag_runner.py \
--endpoint={ENDPOINT}
!tfx run create --pipeline-name {PIPELINE_NAME} --endpoint={ENDPOINT}
###Output
_____no_output_____
###Markdown
You can find your Dataflow jobs in [Dataflow in Cloud Console](http://console.cloud.google.com/dataflow).Please reset `enable_cache` to `True` to benefit from caching execution results.>**Double-click to open `pipeline.py`**. Reset the value of `enable_cache` to `True`. Step 9. (*Optional*) Try Cloud AI Platform Training and Prediction with KFPTFX interoperates with several managed GCP services, such as [Cloud AI Platform for Training and Prediction](https://cloud.google.com/ai-platform/). You can set your `Trainer` component to use Cloud AI Platform Training, a managed service for training ML models. Moreover, when your model is built and ready to be served, you can *push* your model to Cloud AI Platform Prediction for serving. In this step, we will set our `Trainer` and `Pusher` component to use Cloud AI Platform services.>Before editing files, you might first have to enable *AI Platform Training & Prediction API*.>**Double-click `pipeline` to change directory, and double-click to open `configs.py`**. Uncomment the definition of `GOOGLE_CLOUD_REGION`, `GCP_AI_PLATFORM_TRAINING_ARGS` and `GCP_AI_PLATFORM_SERVING_ARGS`. We will use our custom built container image to train a model in Cloud AI Platform Training, so we should set `masterConfig.imageUri` in `GCP_AI_PLATFORM_TRAINING_ARGS` to the same value as `CUSTOM_TFX_IMAGE` above.>**Change directory one level up, and double-click to open `kubeflow_dag_runner.py`**. Uncomment `ai_platform_training_args` and `ai_platform_serving_args`.Update the pipeline and create an execution run as we did in step 5 and 6.
###Code
!tfx pipeline update \
--pipeline-path=kubeflow_dag_runner.py \
--endpoint={ENDPOINT}
!tfx run create --pipeline-name {PIPELINE_NAME} --endpoint={ENDPOINT}
###Output
_____no_output_____
###Markdown
Create a TFX pipeline using templates IntroductionThis document will provide instructions to create a TensorFlow Extended (TFX) pipeline using *templates* which are provided with the TFX Python package.Most of the instructions are Linux shell commands, and corresponding Jupyter Notebook code cells which invoke those commands using `!` are provided.You will build a pipeline using the [Taxi Trips dataset](https://data.cityofchicago.org/Transportation/Taxi-Trips/wrvz-psew) released by the City of Chicago. We strongly encourage you to try to build your OWN pipeline using your OWN dataset by utilizing this pipeline as a baseline. Prerequisites* Linux* Python >= 3.5.3* [Docker Engine](https://docs.docker.com/install/)You can get all prerequisites easily by [launching this notebook on Google Cloud Platform AI Platform Notebook](https://console.cloud.google.com/mlengine/notebooks/deploy-notebook?q=download_url%3Dhttps%253A%252F%252Fraw.githubusercontent.com%252Ftensorflow%252Ftfx%252Fmaster%252Fdocs%252Ftutorials%252Ftfx%252Ftemplate.ipynb) Step 1. Set up your environment.You should prepare a development environment to build a pipeline, and a Kubeflow Pipeline cluster to run the newly built pipeline. 1a. Development environment On your local machineInstall the `tfx` and `kfp` Python packages. `kfp` is required to use Kubeflow Pipeline (KFP) as an orchestrator engine.You also need to download `skaffold`. `skaffold` is a tool to build Docker images easily. A custom Docker image will be used when running a pipeline on KFP.There are a couple of Notebook files in the template, and a Jupyter Notebook kernel with this virtualenv is required to run them.You can use the following shell script snippet to set up your environment.

```sh
# Create a virtualenv for tfx.
virtualenv -p python3 venv
source venv/bin/activate

# Install python packages.
pip install tfx kfp

# Download skaffold.
curl -Lo skaffold https://storage.googleapis.com/skaffold/releases/latest/skaffold-linux-amd64
chmod +x skaffold
mv skaffold venv/bin/

# Install a Jupyter Notebook kernel for this virtualenv.
python -m ipykernel install --user --name=tfx
```

On Cloud AI Platform NotebookIf you are using Cloud AI Platform Notebook, create a TensorFlow pre-installed instance for the notebook.Install `tfx`, `kfp`, and `skaffold`, and add the installation path to the `PATH` environment variable.NOTE: There might be some errors during package installation. For example, "ERROR: some-package 0.some_version.1 has requirement other-package!=2.0.,<3,>=1.15, but you'll have other-package 2.0.0 which is incompatible." Please ignore these errors at this moment.TODO(b/149346490): TFX team is preparing a base image which includes tfx, kfp and skaffold by default. You won't have to install packages in this section in the near future.
###Code
# Install tfx and kfp Python packages.
!pip3 install --user --upgrade -q tfx
!pip3 install --user --upgrade -q kfp
# Download skaffold and set it executable.
!curl -Lo skaffold https://storage.googleapis.com/skaffold/releases/latest/skaffold-linux-amd64 && chmod +x skaffold && mv skaffold /home/jupyter/.local/bin/
# Set `PATH` to include user python binary directory and a directory containing `skaffold`.
PATH=%env PATH
%env PATH={PATH}:/home/jupyter/.local/bin
###Output
_____no_output_____
###Markdown
Let's check the version of TFX.

```bash
python -c "import tfx; print('TFX version: {}'.format(tfx.__version__))"
```
###Code
!python3 -c "import tfx; print('TFX version: {}'.format(tfx.__version__))"
###Output
_____no_output_____
###Markdown
1b. Kubeflow Pipeline cluster A TFX pipeline can be run on Kubernetes using [Kubeflow Pipelines](https://www.kubeflow.org/docs/pipelines/overview/pipelines-overview/). If you don't have one, you can [create a Kubeflow Pipeline cluster on GCP](https://cloud.google.com/ai-platform/pipelines/docs/setting-up).This tutorial assumes that the cluster runs on GCP.You should be logged in to cloud services to use cloud APIs. If you are using Google Cloud AI Platform Notebook, you are automatically logged in to GCP. Otherwise, you should log in using the [gcloud utility](https://cloud.google.com/sdk/gcloud/reference/auth/login).Let's set some environment variables to use Kubeflow Pipeline.First, find out what your GCP project ID is. If you are using a terminal environment, you can find your project ID and set it to an environment variable with the following command.

```bash
export GCP_PROJECT_ID=$(gcloud config list --format 'value(core.project)' 2>/dev/null)
```
###Code
# Read GCP project id from env.
shell_output=!gcloud config list --format 'value(core.project)' 2>/dev/null
GCP_PROJECT_ID=shell_output[0]
print("GCP project ID:" + GCP_PROJECT_ID)
###Output
_____no_output_____
###Markdown
We also need to access your KFP cluster. You can access it in your Google Cloud Console under the "AI Platform > Pipeline" menu. The "endpoint" of the KFP cluster can be found from the URL of the Pipelines dashboard. Let's set the endpoint to the `ENDPOINT` environment variable. ENDPOINT should contain only the host part of the URL. For example, if the URL of the KFP dashboard is `https://1e9deb537390ca22-dot-asia-east1.pipelines.googleusercontent.com//start`, the ENDPOINT value becomes `1e9deb537390ca22-dot-asia-east1.pipelines.googleusercontent.com`.

```bash
export ENDPOINT=XXXXXXX.pipelines.googleusercontent.com
```

Note: You MUST set your ENDPOINT value below.
###Code
# This refers to the KFP cluster endpoint
ENDPOINT='' # Enter your ENDPOINT here.
if not ENDPOINT:
from absl import logging
logging.error('Set your ENDPOINT in this cell.')
###Output
_____no_output_____
###Markdown
As mentioned above, we will use a custom Docker image to run the pipeline on KFP. This Docker image should be hosted on a Docker registry, and we recommend Google Container Registry (gcr.io). Please set the `CUSTOM_TFX_IMAGE` environment variable to an appropriate image name. For example, the following command sets the image name as `tfx-pipeline` under the current GCP project.

```bash
export CUSTOM_TFX_IMAGE=gcr.io/${GCP_PROJECT_ID}/tfx-pipeline
```
###Code
# Docker image name for the pipeline image
CUSTOM_TFX_IMAGE='gcr.io/' + GCP_PROJECT_ID + '/tfx-pipeline'
###Output
_____no_output_____
###Markdown
And, it's done. We are ready to create a pipeline. Step 2. Copy predefined template to your project directory.In this step, we will create a working pipeline project by copying from a predefined template.Please decide a name for the new pipeline and a project directory to put your files in.Let's define environment variables for these.

```bash
export PIPELINE_NAME="my_pipeline"
export PROJECT_DIR=~/tfx/${PIPELINE_NAME}
```
###Code
PIPELINE_NAME="my_pipeline"
import os
CURRENT_DIR=%pwd
PROJECT_DIR=os.path.join(CURRENT_DIR,PIPELINE_NAME)
###Output
_____no_output_____
###Markdown
TFX provides the `taxi` template with the tfx Python package. If you are planning to solve a point-wise prediction problem including classification and regression, this template could be used as a starting point.Use the `tfx` CLI to copy the predefined template to your project directory.

```sh
tfx template copy \
 --pipeline_name="${PIPELINE_NAME}" \
 --destination_path="${PROJECT_DIR}" \
 --model=taxi
```
###Code
!tfx template copy \
--pipeline_name={PIPELINE_NAME} \
--destination_path={PROJECT_DIR} \
--model=taxi
###Output
_____no_output_____
###Markdown
Change the working directory to the project directory which contains the generated files.

```bash
cd ${PROJECT_DIR}
```
###Code
%cd {PROJECT_DIR}
###Output
_____no_output_____
###Markdown
If you are using Cloud AI Platform Notebook, don't forget to change directory in `File Browser` on the left side of the screen, too. Step 3. Browse your copied source files. The TFX template provides basic scaffold files to build a pipeline, including Python source code, sample data and Jupyter Notebook files to analyse the output of the pipeline. The `taxi` template uses the same *Chicago Taxi* dataset and ML model as the [Tutorial](https://www.tensorflow.org/tfx/tutorials/tfx/airflow_workshop).Here is a brief introduction to each of the Python files.- configs.py: defines common constants for pipeline runners.- pipeline.py: defines TFX components and a pipeline.- beam_dag_runner.py / kubeflow_dag_runner.py: define runners for each orchestration engine.- features.py / features_test.py: defines features for the model.- hparams.py: defines hyperparameters of the model.- preprocessing.py / preprocessing_test.py: defines preprocessing jobs using tf::Transform.- model.py / model_test.py: defines a DNN model using TF Estimator.
###Code
!ls
###Output
_____no_output_____
###Markdown
You might notice that there are some files with `_test.py` in their name. They are unit tests of the pipeline and it is recommended to add more unit tests as you implement your model.You can run unit tests simply by supplying test files to the `python` binary.

```bash
python features_test.py
```
###Code
!python3 features_test.py
###Output
_____no_output_____
###Markdown
Step 4. Run your first TFX pipelineThe copied pipeline can be run using the `tfx` CLI. In this step, we will create pipelines using two orchestrator engines, Beam and Kubeflow. 4a. Using Beam orchestrator[Apache Beam](https://beam.apache.org/) can be used as an orchestrating engine for the pipeline without additional configuration.You can create a pipeline using the `pipeline create` command.

```bash
tfx pipeline create --engine=beam --pipeline_path=beam_dag_runner.py
```

Then, you can run the created pipeline using the `run create` command.

```sh
tfx run create --engine=beam --pipeline_name="${PIPELINE_NAME}"
```

If successful, you'll see `Component CsvExampleGen is finished.` When you copy the template, only one component, CsvExampleGen, is included in the pipeline. The Beam orchestrator is useful for local experiments, but a production pipeline usually requires a more scalable and stable running environment such as Kubernetes. 4b. Using Kubeflow orchestrator Components in the TFX pipeline will generate outputs for each run, and they need to be stored somewhere. You can use any storage which the KFP cluster can access, and we will use Google Cloud Storage (GCS) in this document. If you created a KFP cluster in GCP, a default GCS bucket should have been created automatically. It has a name starting with `hostedkfp-default-`.To run this pipeline in KFP, you should edit `configs.py` to set your GCS bucket name. You can see your GCS buckets using the `gsutil` command.
###Code
# You can see your buckets using `gsutil`. Following command will show bucket names without prefix and postfix.
!gsutil ls | cut -d / -f 3
###Output
_____no_output_____
###Markdown
```bash
gsutil ls
```

Set `GCS_BUCKET_NAME` in `configs.py` without `gs://` or `/`. For example, if `gsutil ls` displayed `gs://my-bucket`, you should set `my-bucket`.

```
GCS_BUCKET_NAME = 'my-bucket'
```

Note: You MUST set your GCS bucket name in the `configs.py` file before proceeding. Let's create a pipeline on KFP.

```bash
tfx pipeline create \
--pipeline_path=kubeflow_dag_runner.py \
--endpoint=${ENDPOINT} \
--build_target_image=${CUSTOM_TFX_IMAGE}
```

Note: When creating a pipeline for KFP, we need a container image which will be used to run our pipeline. And `skaffold` will build the image for us. Because skaffold pulls base images from Docker Hub, it will take 5~10 minutes when we build the image for the first time, but it will take much less time for subsequent builds.
###Code
!tfx pipeline create \
--pipeline_path=kubeflow_dag_runner.py \
--endpoint={ENDPOINT} \
--build_target_image={CUSTOM_TFX_IMAGE}
###Output
_____no_output_____
###Markdown
While creating a pipeline, `Dockerfile` and `build.yaml` will be generated to build a Docker image. Don't forget to add these files to the source control system (for example, git) along with other source files.A pipeline definition file for [argo](https://argoproj.github.io/argo/) will be generated, too. The name of this file is `${PIPELINE_NAME}.tar.gz`. For example, it will be `my_pipeline.tar.gz` if the name of your pipeline is `my_pipeline`. It is recommended NOT to include this pipeline definition file in source control, because it will be generated from other Python files and will be updated whenever you update the pipeline. For your convenience, this file is already listed in `.gitignore` which is generated automatically.NOTE: `kubeflow` will be automatically selected as an orchestration engine if `airflow` is not installed and `--engine` is not specified. Then, you can run the created pipeline using the `run create` command.

```sh
tfx run create --pipeline_name="${PIPELINE_NAME}" --endpoint=${ENDPOINT}
```
###Code
!tfx run create --pipeline_name={PIPELINE_NAME} --endpoint={ENDPOINT}
###Output
_____no_output_____
###Markdown
Or, you can run the pipeline on the KFP Dashboard, too.You can see the run using the `run list` or `run status` commands.

```sh
tfx run list --pipeline_name="${PIPELINE_NAME}" --endpoint=${ENDPOINT}
```

However, we recommend visiting your KFP Dashboard using a web browser. If you launched your KFP cluster in GCP, you can access the KFP Dashboard from the Cloud AI Platform Pipelines menu in Google Cloud Console. Once you visit the dashboard, you will be able to find the pipeline, the run, and much more information about the pipeline.For example, you can find your runs under the *Experiments* menu, and you can find all your artifacts from the pipeline under the *Artifacts* menu. Note: If your pipeline run fails, you can see detailed logs in the KFP Dashboard. One of the major sources of failure is permission-related problems. Please make sure your KFP cluster has permissions to access Google Cloud APIs. This can be configured [when you create a KFP cluster in GCP](https://cloud.google.com/ai-platform/pipelines/docs/setting-up), or see [Troubleshooting document in GCP](https://cloud.google.com/ai-platform/pipelines/docs/troubleshooting). Step 5. Add components for data validation.In this step, you will add components for data validation including StatisticsGen, SchemaGen, and ExampleValidator. If you are interested in data validation, please see [Get started with Tensorflow Data Validation](https://www.tensorflow.org/tfx/data_validation/get_started).Open `pipeline.py` with an editor. Find and uncomment the 3 lines which add StatisticsGen, SchemaGen, and ExampleValidator to the pipeline. (Tip: search `TODO(step 5):`)You need to update the existing pipeline with the modified pipeline definition. Use the `pipeline update` command with the `tfx` CLI.If you are using the Beam orchestrator,

```sh
# Update the pipeline
tfx pipeline update --engine=beam --pipeline_path=beam_dag_runner.py
# You can run the pipeline the same way.
tfx run create --engine beam --pipeline_name "${PIPELINE_NAME}"
```

If you are using the Kubeflow orchestrator,

```sh
# Update the pipeline
tfx pipeline update \
--pipeline_path=kubeflow_dag_runner.py \
--endpoint=${ENDPOINT}
# You can run the pipeline the same way.
tfx run create --pipeline_name "${PIPELINE_NAME}"
```
###Code
# Update the pipeline
!tfx pipeline update \
--pipeline_path=kubeflow_dag_runner.py \
--endpoint={ENDPOINT}
# You can run the pipeline the same way.
!tfx run create --pipeline_name {PIPELINE_NAME} --endpoint={ENDPOINT}
###Output
_____no_output_____
###Markdown
Check pipeline outputsIf you are using the Beam orchestrator, open `data_validation.ipynb` with Jupyter Notebook.For the Kubeflow orchestrator, visit the KFP dashboard, where you can find pipeline outputs in the page for your pipeline run. Click the "Experiments" tab on the left, and "All runs" in the Experiments page. You should be able to find the run with the name of your pipeline. Step 6. Add components for training.In this step, you will add components for training and model validation including Transform, Trainer, ModelValidator and Pusher. These components implement a basic ML model using a simple DNN. You can find more details about the model in the [Tutorial](https://www.tensorflow.org/tfx/tutorials/tfx/airflow_workshop).Open `pipeline.py` with an editor. Find and uncomment the 4 lines which add Transform, Trainer, ModelValidator and Pusher to the pipeline. (Tip: search `TODO(step 6):`)You need to update the existing pipeline with the modified pipeline definition again. The updating instructions are the same as in Step 5. Please update the pipeline using `pipeline update` and create a run using `run create`.
###Code
!tfx pipeline update \
--pipeline_path=kubeflow_dag_runner.py \
--endpoint={ENDPOINT}
!tfx run create --pipeline_name {PIPELINE_NAME} --endpoint={ENDPOINT}
###Output
_____no_output_____
###Markdown
If you are not using Cloud AI Platform Notebook, check the newly trained model with the `model_analysis.ipynb` notebook. The TFMA Jupyter extension is required to see the visualization. See the instructions in the notebook file.NOTE: This notebook file doesn't work on Cloud AI Platform Notebook or other JupyterLab environments. Step 7. (*Optional*) Try BigQueryExampleGen.[BigQuery](https://cloud.google.com/bigquery) is a serverless, highly scalable, and cost-effective cloud data warehouse. BigQuery can be used as a source for training examples in TFX. In this step, we will add `BigQueryExampleGen` to the pipeline.Open `pipeline.py` with an editor. Comment out `CsvExampleGen` and uncomment the line which creates an instance of `BigQueryExampleGen`. You also need to uncomment the `import` statement and the `query` argument of the `create_pipeline` function.We need to specify which GCP project to use for BigQuery, and this is done by setting `--project` in `beam_pipeline_args` when creating a pipeline. Open `configs.py` and uncomment the definition of `GCP_PROJECT_ID`, `GCP_REGION`, `BIG_QUERY_BEAM_PIPELINE_ARGS` and `BIG_QUERY_QUERY`. You should replace the project ID and the region value in this file. Note: You MUST set your GCP project ID and region in the `configs.py` file before proceeding.Lastly, open `kubeflow_dag_runner.py` (or `beam_dag_runner.py` if you'll use the Beam orchestrator) and uncomment two arguments, `query` and `beam_pipeline_args`, for the `create_pipeline()` method.Now the pipeline is ready to use BigQuery as an example source. Update the pipeline and create a run as we did in step 5 and 6.
###Code
!tfx pipeline update \
--pipeline_path=kubeflow_dag_runner.py \
--endpoint={ENDPOINT}
!tfx run create --pipeline_name {PIPELINE_NAME} --endpoint={ENDPOINT}
###Output
_____no_output_____
###Markdown
Step 8. (*Optional*) Try Dataflow with KFP.Several [TFX components use Apache Beam](https://www.tensorflow.org/tfx/guide/beam) to implement data-parallel pipelines, which means that you can distribute data processing workloads using [Google Cloud Dataflow](https://cloud.google.com/dataflow/). In this step, we will set the Kubeflow orchestrator to use Dataflow as the data processing back-end for Apache Beam.Open `configs.py` with an editor, and uncomment the definition of `GCP_PROJECT_ID`, `GCP_REGION`, and `BEAM_PIPELINE_ARGS`. Open `kubeflow_dag_runner.py` and uncomment `beam_pipeline_args`. (Comment out the current `beam_pipeline_args` that you added in Step 7.)Now the pipeline is ready to use Dataflow. Update the pipeline and create a run as we did in step 5 and 6.
###Code
!tfx pipeline update \
--pipeline_path=kubeflow_dag_runner.py \
--endpoint={ENDPOINT}
!tfx run create --pipeline_name {PIPELINE_NAME} --endpoint={ENDPOINT}
###Output
_____no_output_____
###Markdown
You can find your Dataflow jobs in [Dataflow in Cloud Console](http://console.cloud.google.com/dataflow). Step 9. (*Optional*) Try Cloud AI Platform Training and Prediction with KFP.TFX interoperates with several managed GCP services, such as [Cloud AI Platform for Training and Prediction](https://cloud.google.com/ai-platform/). You can set your Trainer component to use Cloud AI Platform Training, a managed service for ML training workloads. Moreover, when your model is built and ready to be served, you can *push* your model to Cloud AI Platform Prediction for serving. In this step, we will set our `Trainer` and `Pusher` components to use Cloud AI Platform services.Before editing files, you might have to enable the *AI Platform Training & Prediction API* first.Open `configs.py` with an editor, and uncomment the definition of `GCP_PROJECT_ID`, `GCP_REGION`, `GCP_AI_PLATFORM_TRAINING_ARGS` and `GCP_AI_PLATFORM_SERVING_ARGS`. We will use our custom-built container image to train a model in Cloud AI Platform Training, so we should set `masterConfig.imageUri` in `GCP_AI_PLATFORM_TRAINING_ARGS` to the same value as `CUSTOM_TFX_IMAGE` above.Open `kubeflow_dag_runner.py` and uncomment `ai_platform_training_args` and `ai_platform_serving_args`.Update the pipeline and create a run as we did in step 5 and 6.
###Code
!tfx pipeline update \
--pipeline_path=kubeflow_dag_runner.py \
--endpoint={ENDPOINT}
!tfx run create --pipeline_name {PIPELINE_NAME} --endpoint={ENDPOINT}
###Output
_____no_output_____
###Markdown
Copyright © 2020 The TensorFlow Authors. Licensed under the Apache License, Version 2.0 (the "License"); you may not use this file except in compliance with the License. You may obtain a copy of the License at [https://www.apache.org/licenses/LICENSE-2.0](https://www.apache.org/licenses/LICENSE-2.0). Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an "AS IS" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. Create a TFX pipeline using templates IntroductionThis document will provide instructions to create a TensorFlow Extended (TFX) pipeline using *templates* which are provided with the TFX Python package.Many of the instructions are Linux shell commands, which will run on an AI Platform Notebooks instance. Corresponding Jupyter Notebook code cells which invoke those commands using `!` are provided.You will build a pipeline using the [Taxi Trips dataset](https://data.cityofchicago.org/Transportation/Taxi-Trips/wrvz-psew) released by the City of Chicago. We strongly encourage you to try building your own pipeline using your dataset by utilizing this pipeline as a baseline. Step 1. Set up your environment.AI Platform Pipelines will prepare a development environment to build a pipeline, and a Kubeflow Pipeline cluster to run the newly built pipeline.**NOTE:** To select a particular TensorFlow version, or select a GPU instance, create a TensorFlow pre-installed instance in AI Platform Notebooks.**NOTE:** There might be some errors during package installation. For example: >"ERROR: some-package 0.some_version.1 has requirement other-package!=2.0.,<3,>=1.15, but you'll have other-package 2.0.0 which is incompatible." Please ignore these errors at this moment. Install `tfx`, `kfp`, and `skaffold`, and add the installation path to the `PATH` environment variable.
###Code
# Install tfx and kfp Python packages.
!pip install --user --upgrade -q tfx==0.21.0
!pip install --user --upgrade -q kfp==0.2.5
# Download skaffold and set it executable.
!curl -Lo skaffold https://storage.googleapis.com/skaffold/releases/latest/skaffold-linux-amd64 && chmod +x skaffold && mv skaffold /home/jupyter/.local/bin/
# Set `PATH` to include user python binary directory and a directory containing `skaffold`.
PATH=%env PATH
%env PATH={PATH}:/home/jupyter/.local/bin
###Output
_____no_output_____
###Markdown
Let's check the version of TFX.
###Code
!python3 -c "import tfx; print('TFX version: {}'.format(tfx.__version__))"
###Output
_____no_output_____
###Markdown
In AI Platform Pipelines, TFX is running in a hosted Kubernetes environment using [Kubeflow Pipelines](https://www.kubeflow.org/docs/pipelines/overview/pipelines-overview/).Let's set some environment variables to use Kubeflow Pipelines.First, get your GCP project ID.
###Code
# Read GCP project id from env.
shell_output=!gcloud config list --format 'value(core.project)' 2>/dev/null
GCP_PROJECT_ID=shell_output[0]
print("GCP project ID:" + GCP_PROJECT_ID)
###Output
_____no_output_____
###Markdown
We also need to access your KFP cluster. You can access it in your Google Cloud Console under "AI Platform > Pipeline" menu. The "endpoint" of the KFP cluster can be found from the URL of the Pipelines dashboard, or you can get it from the URL of the Getting Started page where you launched this notebook. Let's create an `ENDPOINT` environment variable and set it to the KFP cluster endpoint. **ENDPOINT should contain only the hostname part of the URL.** For example, if the URL of the KFP dashboard is `https://1e9deb537390ca22-dot-asia-east1.pipelines.googleusercontent.com//start`, ENDPOINT value becomes `1e9deb537390ca22-dot-asia-east1.pipelines.googleusercontent.com`.>**NOTE: You MUST set your ENDPOINT value below.**
###Code
# This refers to the KFP cluster endpoint
ENDPOINT='' # Enter your ENDPOINT here.
if not ENDPOINT:
from absl import logging
logging.error('Set your ENDPOINT in this cell.')
###Output
_____no_output_____
###Markdown
Set the image name as `tfx-pipeline` under the current GCP project.
###Code
# Docker image name for the pipeline image
CUSTOM_TFX_IMAGE='gcr.io/' + GCP_PROJECT_ID + '/tfx-pipeline'
###Output
_____no_output_____
###Markdown
And, it's done. We are ready to create a pipeline. Step 2. Copy predefined template to your project directory.In this step, we will create a working pipeline project directory and files by copying additional files from a predefined template.You may give your pipeline a different name by changing the `PIPELINE_NAME` below. This will also become the name of the project directory where your files will be put.
###Code
PIPELINE_NAME="my_pipeline"
import os
PROJECT_DIR=os.path.join(os.path.expanduser("~"),"AIHub",PIPELINE_NAME)
###Output
_____no_output_____
###Markdown
TFX includes the `taxi` template with the TFX Python package. If you are planning to solve a point-wise prediction problem, including classification and regression, this template could be used as a starting point.The `tfx template copy` CLI command copies predefined template files into your project directory.
###Code
!tfx template copy \
--pipeline_name={PIPELINE_NAME} \
--destination_path={PROJECT_DIR} \
--model=taxi
###Output
_____no_output_____
###Markdown
Change the working directory context in this notebook to the project directory.
###Code
%cd {PROJECT_DIR}
###Output
_____no_output_____
###Markdown
>NOTE: Don't forget to change directory in `File Browser` on the left by clicking into the project directory once it is created. Step 3. Browse your copied source filesThe TFX template provides basic scaffold files to build a pipeline, including Python source code, sample data, and Jupyter Notebooks to analyse the output of the pipeline. The `taxi` template uses the same *Chicago Taxi* dataset and ML model as the [Airflow Tutorial](https://www.tensorflow.org/tfx/tutorials/tfx/airflow_workshop).Here is a brief introduction to each of the Python files.- configs.py: defines common constants for pipeline runners.- pipeline.py: defines TFX components and a pipeline.- beam_dag_runner.py / kubeflow_dag_runner.py: define runners for each orchestration engine. Since you are using Kubeflow, you will not use the Beam orchestrator.- features.py / features_test.py: defines and tests features for the model.- hparams.py: defines hyperparameters of the model.- preprocessing.py / preprocessing_test.py: defines preprocessing jobs using tf::Transform.- model.py / model_test.py: defines a DNN model using TF Estimator.List the files in the project directory:
###Code
!ls
###Output
_____no_output_____
###Markdown
You might notice that there are some files with `_test.py` in their name. These are unit tests of the pipeline and it is recommended to add more unit tests as you implement your own pipelines.You can run unit tests simply by supplying test files to the `python` binary. For example:
###Code
!python3 features_test.py
###Output
_____no_output_____
###Markdown
Step 4. Run your first TFX pipelineComponents in the TFX pipeline will generate outputs for each run as [ML Metadata Artifacts](https://www.tensorflow.org/tfx/guide/mlmd), and they need to be stored somewhere. You can use any storage which the KFP cluster can access, and for this example we will use Google Cloud Storage (GCS). A default GCS bucket should have been created automatically. It has a name starting with **`hostedkfp-default-`**.To run this pipeline you **MUST** edit `configs.py` to set your GCS bucket name. You can list your current GCS buckets in this GCP project using the `gsutil` command.
###Code
# You can see your buckets using `gsutil`. Following command will show bucket names without prefix and postfix.
!gsutil ls | cut -d / -f 3
###Output
_____no_output_____
###Markdown
```bash
gsutil ls
```

>**Double-click to open `configs.py`**. Set `GCS_BUCKET_NAME` to the name of the GCS bucket without the `gs://` or `/`. For example, if `gsutil ls` displayed `gs://my-bucket`, you should set `my-bucket`.

```
GCS_BUCKET_NAME = 'my-bucket'
```

>**NOTE: You MUST set your GCS bucket name in the `configs.py` file before proceeding.** Let's create a TFX pipeline using the `tfx pipeline create` command.>Note: When creating a pipeline for KFP, we need a container image which will be used to run our pipeline. And `skaffold` will build the image for us. Because skaffold pulls base images from Docker Hub, it will take 5~10 minutes when we build the image for the first time, but it will take much less time for subsequent builds.
###Code
!tfx pipeline create \
--pipeline_path=kubeflow_dag_runner.py \
--endpoint={ENDPOINT} \
--build_target_image={CUSTOM_TFX_IMAGE}
###Output
_____no_output_____
###Markdown
While creating a pipeline, `Dockerfile` and `build.yaml` will be generated to build a Docker image. Don't forget to add these files to the source control system (for example, git) along with other source files.A pipeline definition file for [argo](https://argoproj.github.io/argo/) will be generated, too. The name of this file is `${PIPELINE_NAME}.tar.gz`. For example, it will be `my_pipeline.tar.gz` if the name of your pipeline is `my_pipeline`. It is recommended NOT to include this pipeline definition file into source control, because it will be generated from other Python files and will be updated whenever you update the pipeline. For your convenience, this file is already listed in `.gitignore` which is generated automatically.NOTE: `kubeflow` will be automatically selected as an orchestration engine if `airflow` is not installed and `--engine` is not specified.Now start an execution run with the newly created pipeline using the `tfx run create` command.
###Code
!tfx run create --pipeline_name={PIPELINE_NAME} --endpoint={ENDPOINT}
###Output
_____no_output_____
###Markdown
Or, you can also run the pipeline in the KFP Dashboard. The new execution run will be listed under Experiments in the KFP Dashboard. Clicking into the experiment will allow you to monitor progress and visualize the artifacts created during the execution run. However, we recommend visiting the KFP Dashboard. You can access the KFP Dashboard from the Cloud AI Platform Pipelines menu in Google Cloud Console. Once you visit the dashboard, you will be able to find the pipeline, and access a wealth of information about the pipeline.For example, you can find your runs under the *Experiments* menu, and when you open your execution run under Experiments you can find all your artifacts from the pipeline under *Artifacts* menu.>Note: If your pipeline run fails, you can see detailed logs for each TFX component in the Experiments tab in the KFP Dashboard. One of the major sources of failure is permission related problems. Please make sure your KFP cluster has permissions to access Google Cloud APIs. This can be configured [when you create a KFP cluster in GCP](https://cloud.google.com/ai-platform/pipelines/docs/setting-up), or see [Troubleshooting document in GCP](https://cloud.google.com/ai-platform/pipelines/docs/troubleshooting). Step 5. Add components for data validation.In this step, you will add components for data validation including `StatisticsGen`, `SchemaGen`, and `ExampleValidator`. If you are interested in data validation, please see [Get started with Tensorflow Data Validation](https://www.tensorflow.org/tfx/data_validation/get_started).>**Double-click to open `pipeline.py`**. Find and uncomment the 3 lines which add `StatisticsGen`, `SchemaGen`, and `ExampleValidator` to the pipeline. (Tip: search for comments containing `TODO(step 5):`). Make sure to save `pipeline.py` after you edit it.You now need to update the existing pipeline with modified pipeline definition. Use the `tfx pipeline update` command to update your pipeline, followed by the `tfx run create` command to create a new execution run of your updated pipeline.
###Code
# Update the pipeline
!tfx pipeline update \
--pipeline_path=kubeflow_dag_runner.py \
--endpoint={ENDPOINT}
# You can run the pipeline the same way.
!tfx run create --pipeline_name {PIPELINE_NAME} --endpoint={ENDPOINT}
###Output
_____no_output_____
###Markdown
Check pipeline outputsVisit the KFP dashboard to find pipeline outputs in the page for your pipeline run. Click the *Experiments* tab on the left, and *All runs* in the Experiments page. You should be able to find the latest run under the name of your pipeline. Step 6. Add components for training.In this step, you will add components for training and model validation including `Transform`, `Trainer`, `Evaluator`, and `Pusher`.>**Double-click to open `pipeline.py`**. Find and uncomment the 5 lines which add `Transform`, `Trainer`, `Evaluator` and `Pusher` to the pipeline. (Tip: search for `TODO(step 6):`)As you did before, you now need to update the existing pipeline with the modified pipeline definition. The instructions are the same as Step 5. Update the pipeline using `tfx pipeline update`, and create an execution run using `tfx run create`.
###Code
!tfx pipeline update \
--pipeline_path=kubeflow_dag_runner.py \
--endpoint={ENDPOINT}
!tfx run create --pipeline_name {PIPELINE_NAME} --endpoint={ENDPOINT}
###Output
_____no_output_____
###Markdown
When this execution run finishes successfully, you have now created and run your first TFX pipeline in AI Platform Pipelines! Step 7. (*Optional*) Try BigQueryExampleGen[BigQuery](https://cloud.google.com/bigquery) is a serverless, highly scalable, and cost-effective cloud data warehouse. BigQuery can be used as a source for training examples in TFX. In this step, we will add `BigQueryExampleGen` to the pipeline.>**Double-click to open `pipeline.py`**. Comment out `CsvExampleGen` and uncomment the line which creates an instance of `BigQueryExampleGen`. You also need to uncomment the `import` statement and the `query` argument of the `create_pipeline` function.We need to specify which GCP project to use for BigQuery, and this is done by setting `--project` in `beam_pipeline_args` when creating a pipeline.>**Double-click to open `configs.py`**. Uncomment the definition of `GCP_PROJECT_ID`, `GCP_REGION`, `BIG_QUERY_BEAM_PIPELINE_ARGS` and `BIG_QUERY_QUERY`. You should replace the project id and the region value in this file with the correct values for your GCP project.>**Note: You MUST set your GCP project ID and region in the `configs.py` file before proceeding.**>**Double-click to open `kubeflow_dag_runner.py`**. Uncomment two arguments, `query` and `beam_pipeline_args`, for the `create_pipeline` function.Now the pipeline is ready to use BigQuery as an example source. Update the pipeline as before and create a new execution run as we did in step 5 and 6.
###Code
!tfx pipeline update \
--pipeline_path=kubeflow_dag_runner.py \
--endpoint={ENDPOINT}
!tfx run create --pipeline_name {PIPELINE_NAME} --endpoint={ENDPOINT}
###Output
_____no_output_____
###Markdown
Step 8. (*Optional*) Try Dataflow with KFP

Several [TFX components use Apache Beam](https://www.tensorflow.org/tfx/guide/beam) to implement data-parallel pipelines, which means that you can distribute data processing workloads using [Google Cloud Dataflow](https://cloud.google.com/dataflow/). In this step, we will set the Kubeflow orchestrator to use Dataflow as the data processing back-end for Apache Beam.

>**Double-click to open `configs.py`**. Uncomment the definition of `GCP_PROJECT_ID`, `GCP_REGION`, and `BEAM_PIPELINE_ARGS`.

>**Double-click to open `kubeflow_dag_runner.py`**. Uncomment `beam_pipeline_args`. (Also make sure to comment out the current `beam_pipeline_args` that you added in Step 7.)

Now the pipeline is ready to use Dataflow. Update the pipeline and create an execution run as we did in steps 5 and 6.
###Code
!tfx pipeline update \
--pipeline_path=kubeflow_dag_runner.py \
--endpoint={ENDPOINT}
!tfx run create --pipeline_name {PIPELINE_NAME} --endpoint={ENDPOINT}
###Output
_____no_output_____
###Markdown
You can find your Dataflow jobs in [Dataflow in Cloud Console](http://console.cloud.google.com/dataflow). Step 9. (*Optional*) Try Cloud AI Platform Training and Prediction with KFPTFX interoperates with several managed GCP services, such as [Cloud AI Platform for Training and Prediction](https://cloud.google.com/ai-platform/). You can set your `Trainer` component to use Cloud AI Platform Training, a managed service for training ML models. Moreover, when your model is built and ready to be served, you can *push* your model to Cloud AI Platform Prediction for serving. In this step, we will set our `Trainer` and `Pusher` component to use Cloud AI Platform services.>Before editing files, you might first have to enable *AI Platform Training & Prediction API*.>**Double-click to open `configs.py`**. Uncomment the definition of `GCP_PROJECT_ID`, `GCP_REGION`, `GCP_AI_PLATFORM_TRAINING_ARGS` and `GCP_AI_PLATFORM_SERVING_ARGS`. We will use our custom built container image to train a model in Cloud AI Platform Training, so we should set `masterConfig.imageUri` in `GCP_AI_PLATFORM_TRAINING_ARGS` to the same value as `CUSTOM_TFX_IMAGE` above.>**Double-click to open `kubeflow_dag_runner.py`**. Uncomment `ai_platform_training_args` and `ai_platform_serving_args`.Update the pipeline and create an execution run as we did in step 5 and 6.
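For orientation, here is a rough sketch of what the uncommented `GCP_AI_PLATFORM_TRAINING_ARGS` block in `configs.py` ends up expressing. This only illustrates the `masterConfig.imageUri` point above — the exact keys and values come from the template itself, and the project id and region shown here are placeholders.

```python
# Illustrative sketch only -- the real definition lives in the template's configs.py.
GCP_PROJECT_ID = 'my-gcp-project'   # placeholder: your GCP project id
GCP_REGION = 'us-central1'          # placeholder: your GCP region

GCP_AI_PLATFORM_TRAINING_ARGS = {
    'project': GCP_PROJECT_ID,
    'region': GCP_REGION,
    # Train with the same custom container image that was built for the pipeline
    # (i.e. the value of CUSTOM_TFX_IMAGE above).
    'masterConfig': {
        'imageUri': 'gcr.io/' + GCP_PROJECT_ID + '/tfx-pipeline',
    },
}
```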
###Code
!tfx pipeline update \
--pipeline_path=kubeflow_dag_runner.py \
--endpoint={ENDPOINT}
!tfx run create --pipeline_name {PIPELINE_NAME} --endpoint={ENDPOINT}
###Output
_____no_output_____
###Markdown
Copyright 2020 The TensorFlow Authors.
###Code
#@title Licensed under the Apache License, Version 2.0 (the "License");
# you may not use this file except in compliance with the License.
# You may obtain a copy of the License at
#
# https://www.apache.org/licenses/LICENSE-2.0
#
# Unless required by applicable law or agreed to in writing, software
# distributed under the License is distributed on an "AS IS" BASIS,
# WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied.
# See the License for the specific language governing permissions and
# limitations under the License.
###Output
_____no_output_____
###Markdown
Create a TFX pipeline using templates

Note: We recommend running this tutorial on Google Cloud AI Platform Notebook. [Launch this notebook on AI Platform Notebook](https://console.cloud.google.com/mlengine/notebooks/deploy-notebook?q=download_url%3Dhttps%253A%252F%252Fraw.githubusercontent.com%252Ftensorflow%252Ftfx%252Fmaster%252Fdocs%252Ftutorials%252Ftfx%252Ftemplate.ipynb).

Introduction

This document will provide instructions to create a TensorFlow Extended (TFX) pipeline using *templates* which are provided with the TFX Python package. Many of the instructions are Linux shell commands, which will run on an AI Platform Notebooks instance. Corresponding Jupyter Notebook code cells which invoke those commands using `!` are provided.

You will build a pipeline using the [Taxi Trips dataset](https://data.cityofchicago.org/Transportation/Taxi-Trips/wrvz-psew) released by the City of Chicago. We strongly encourage you to try building your own pipeline using your dataset by utilizing this pipeline as a baseline.

Step 1. Set up your environment.

AI Platform Pipelines will prepare a development environment to build a pipeline, and a Kubeflow Pipeline cluster to run the newly built pipeline.

**NOTE:** To select a particular TensorFlow version, or select a GPU instance, create a TensorFlow pre-installed instance in AI Platform Notebooks.

**NOTE:** There might be some errors during package installation. For example: >"ERROR: some-package 0.some_version.1 has requirement other-package!=2.0.,<3,>=1.15, but you'll have other-package 2.0.0 which is incompatible." Please ignore these errors at this moment.

Install `tfx`, `kfp`, and `skaffold`, and add the installation path to the `PATH` environment variable.
###Code
# Install tfx and kfp Python packages.
import sys
!{sys.executable} -m pip install --user --upgrade -q tfx==0.26.0
!{sys.executable} -m pip install --user --upgrade -q kfp==1.0.0
# Download skaffold and set it executable.
!curl -Lo skaffold https://storage.googleapis.com/skaffold/releases/latest/skaffold-linux-amd64 && chmod +x skaffold && mv skaffold /home/jupyter/.local/bin/
# Set `PATH` to include user python binary directory and a directory containing `skaffold`.
PATH=%env PATH
%env PATH={PATH}:/home/jupyter/.local/bin
###Output
_____no_output_____
###Markdown
Let's check the versions of TFX.
###Code
!python3 -c "from tfx import version ; print('TFX version: {}'.format(version.__version__))"
###Output
_____no_output_____
###Markdown
In AI Platform Pipelines, TFX is running in a hosted Kubernetes environment using [Kubeflow Pipelines](https://www.kubeflow.org/docs/pipelines/overview/pipelines-overview/).Let's set some environment variables to use Kubeflow Pipelines.First, get your GCP project ID.
###Code
# Read GCP project id from env.
shell_output=!gcloud config list --format 'value(core.project)' 2>/dev/null
GOOGLE_CLOUD_PROJECT=shell_output[0]
%env GOOGLE_CLOUD_PROJECT={GOOGLE_CLOUD_PROJECT}
print("GCP project ID:" + GOOGLE_CLOUD_PROJECT)
###Output
_____no_output_____
###Markdown
We also need to access your KFP cluster. You can access it in your Google Cloud Console under "AI Platform > Pipeline" menu. The "endpoint" of the KFP cluster can be found from the URL of the Pipelines dashboard, or you can get it from the URL of the Getting Started page where you launched this notebook. Let's create an `ENDPOINT` environment variable and set it to the KFP cluster endpoint. **ENDPOINT should contain only the hostname part of the URL.** For example, if the URL of the KFP dashboard is `https://1e9deb537390ca22-dot-asia-east1.pipelines.googleusercontent.com//start`, ENDPOINT value becomes `1e9deb537390ca22-dot-asia-east1.pipelines.googleusercontent.com`.>**NOTE: You MUST set your ENDPOINT value below.**
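If you would rather derive the hostname programmatically than copy it by hand, a small sketch like the one below works. The URL is just the example from the text above.

```python
from urllib.parse import urlparse

# Example dashboard URL taken from the text above.
dashboard_url = 'https://1e9deb537390ca22-dot-asia-east1.pipelines.googleusercontent.com//start'

# ENDPOINT should contain only the hostname part of the URL.
endpoint = urlparse(dashboard_url).netloc
print(endpoint)  # 1e9deb537390ca22-dot-asia-east1.pipelines.googleusercontent.com
```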
###Code
# This refers to the KFP cluster endpoint
ENDPOINT='' # Enter your ENDPOINT here.
if not ENDPOINT:
from absl import logging
logging.error('Set your ENDPOINT in this cell.')
###Output
_____no_output_____
###Markdown
Set the image name as `tfx-pipeline` under the current GCP project.
###Code
# Docker image name for the pipeline image.
CUSTOM_TFX_IMAGE='gcr.io/' + GOOGLE_CLOUD_PROJECT + '/tfx-pipeline'
###Output
_____no_output_____
###Markdown
And, it's done. We are ready to create a pipeline. Step 2. Copy the predefined template to your project directory.In this step, we will create a working pipeline project directory and files by copying additional files from a predefined template.You may give your pipeline a different name by changing the `PIPELINE_NAME` below. This will also become the name of the project directory where your files will be put.
###Code
PIPELINE_NAME="my_pipeline"
import os
PROJECT_DIR=os.path.join(os.path.expanduser("~"),"deployed_notebook",PIPELINE_NAME)
###Output
_____no_output_____
###Markdown
TFX includes the `taxi` template with the TFX Python package. If you are planning to solve a point-wise prediction problem, including classification and regression, this template could be used as a starting point. The `tfx template copy` CLI command copies predefined template files into your project directory.
###Code
!tfx template copy \
--pipeline-name={PIPELINE_NAME} \
--destination-path={PROJECT_DIR} \
--model=taxi
###Output
_____no_output_____
###Markdown
Change the working directory context in this notebook to the project directory.
###Code
%cd {PROJECT_DIR}
###Output
_____no_output_____
###Markdown
>NOTE: Don't forget to change directory in `File Browser` on the left by clicking into the project directory once it is created.

Step 3. Browse your copied source files

The TFX template provides basic scaffold files to build a pipeline, including Python source code, sample data, and Jupyter Notebooks to analyse the output of the pipeline. The `taxi` template uses the same *Chicago Taxi* dataset and ML model as the [Airflow Tutorial](https://www.tensorflow.org/tfx/tutorials/tfx/airflow_workshop).

Here is a brief introduction to each of the Python files.
- `pipeline` - This directory contains the definition of the pipeline
  - `configs.py` — defines common constants for pipeline runners
  - `pipeline.py` — defines TFX components and a pipeline
- `models` - This directory contains ML model definitions.
  - `features.py`, `features_test.py` — defines features for the model
  - `preprocessing.py`, `preprocessing_test.py` — defines preprocessing jobs using `tf::Transform`
  - `estimator` - This directory contains an Estimator based model.
    - `constants.py` — defines constants of the model
    - `model.py`, `model_test.py` — defines DNN model using TF estimator
  - `keras` - This directory contains a Keras based model.
    - `constants.py` — defines constants of the model
    - `model.py`, `model_test.py` — defines DNN model using Keras
- `local_runner.py`, `kubeflow_runner.py` — define runners for each orchestration engine

You might notice that there are some files with `_test.py` in their name. These are unit tests of the pipeline and it is recommended to add more unit tests as you implement your own pipelines.

You can run unit tests by supplying the module name of the test files with the `-m` flag. You can usually get a module name by deleting the `.py` extension and replacing `/` with `.`. For example:
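As a small illustration of the naming rule just described, a hypothetical helper might look like this (it is not part of the template, only an illustration):

```python
# Hypothetical helper, only to illustrate the path-to-module rule above.
def path_to_module(path):
    # e.g. 'models/features_test.py' -> 'models.features_test'
    return path[:-len('.py')].replace('/', '.')

print(path_to_module('models/features_test.py'))     # models.features_test
print(path_to_module('models/keras/model_test.py'))  # models.keras.model_test
```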
###Code
!{sys.executable} -m models.features_test
!{sys.executable} -m models.keras.model_test
###Output
_____no_output_____
###Markdown
Step 4. Run your first TFX pipelineComponents in the TFX pipeline will generate outputs for each run as [ML Metadata Artifacts](https://www.tensorflow.org/tfx/guide/mlmd), and they need to be stored somewhere. You can use any storage which the KFP cluster can access, and for this example we will use Google Cloud Storage (GCS). A default GCS bucket should have been created automatically. Its name will be `-kubeflowpipelines-default`. Let's upload our sample data to GCS bucket so that we can use it in our pipeline later.
###Code
!gsutil cp data/data.csv gs://{GOOGLE_CLOUD_PROJECT}-kubeflowpipelines-default/tfx-template/data/taxi/data.csv
###Output
_____no_output_____
###Markdown
Let's create a TFX pipeline using the `tfx pipeline create` command.>Note: When creating a pipeline for KFP, we need a container image which will be used to run our pipeline. And `skaffold` will build the image for us. Because skaffold pulls base images from the docker hub, it will take 5~10 minutes when we build the image for the first time, but it will take much less time from the second build.
###Code
!tfx pipeline create \
--pipeline-path=kubeflow_runner.py \
--endpoint={ENDPOINT} \
--build-target-image={CUSTOM_TFX_IMAGE}
###Output
_____no_output_____
###Markdown
While creating a pipeline, `Dockerfile` and `build.yaml` will be generated to build a Docker image. Don't forget to add these files to the source control system (for example, git) along with other source files.A pipeline definition file for [argo](https://argoproj.github.io/argo/) will be generated, too. The name of this file is `${PIPELINE_NAME}.tar.gz`. For example, it will be `my_pipeline.tar.gz` if the name of your pipeline is `my_pipeline`. It is recommended NOT to include this pipeline definition file into source control, because it will be generated from other Python files and will be updated whenever you update the pipeline. For your convenience, this file is already listed in `.gitignore` which is generated automatically.NOTE: `kubeflow` will be automatically selected as an orchestration engine if `airflow` is not installed and `--engine` is not specified.Now start an execution run with the newly created pipeline using the `tfx run create` command.
###Code
!tfx run create --pipeline-name={PIPELINE_NAME} --endpoint={ENDPOINT}
###Output
_____no_output_____
###Markdown
Or, you can also run the pipeline in the KFP Dashboard. The new execution run will be listed under Experiments in the KFP Dashboard. Clicking into the experiment will allow you to monitor progress and visualize the artifacts created during the execution run. However, we recommend visiting the KFP Dashboard. You can access the KFP Dashboard from the Cloud AI Platform Pipelines menu in Google Cloud Console. Once you visit the dashboard, you will be able to find the pipeline, and access a wealth of information about the pipeline.For example, you can find your runs under the *Experiments* menu, and when you open your execution run under Experiments you can find all your artifacts from the pipeline under *Artifacts* menu.>Note: If your pipeline run fails, you can see detailed logs for each TFX component in the Experiments tab in the KFP Dashboard. One of the major sources of failure is permission related problems. Please make sure your KFP cluster has permissions to access Google Cloud APIs. This can be configured [when you create a KFP cluster in GCP](https://cloud.google.com/ai-platform/pipelines/docs/setting-up), or see [Troubleshooting document in GCP](https://cloud.google.com/ai-platform/pipelines/docs/troubleshooting). Step 5. Add components for data validation.In this step, you will add components for data validation including `StatisticsGen`, `SchemaGen`, and `ExampleValidator`. If you are interested in data validation, please see [Get started with Tensorflow Data Validation](https://www.tensorflow.org/tfx/data_validation/get_started).>**Double-click to change directory to `pipeline` and double-click again to open `pipeline.py`**. Find and uncomment the 3 lines which add `StatisticsGen`, `SchemaGen`, and `ExampleValidator` to the pipeline. (Tip: search for comments containing `TODO(step 5):`). Make sure to save `pipeline.py` after you edit it.You now need to update the existing pipeline with modified pipeline definition. Use the `tfx pipeline update` command to update your pipeline, followed by the `tfx run create` command to create a new execution run of your updated pipeline.
###Code
# Update the pipeline
!tfx pipeline update \
--pipeline-path=kubeflow_runner.py \
--endpoint={ENDPOINT}
# You can run the pipeline the same way.
!tfx run create --pipeline-name {PIPELINE_NAME} --endpoint={ENDPOINT}
###Output
_____no_output_____
###Markdown
Check pipeline outputsVisit the KFP dashboard to find pipeline outputs in the page for your pipeline run. Click the *Experiments* tab on the left, and *All runs* in the Experiments page. You should be able to find the latest run under the name of your pipeline. Step 6. Add components for training.In this step, you will add components for training and model validation including `Transform`, `Trainer`, `Resolver`, `Evaluator`, and `Pusher`.>**Double-click to open `pipeline.py`**. Find and uncomment the 5 lines which add `Transform`, `Trainer`, `Resolver`, `Evaluator` and `Pusher` to the pipeline. (Tip: search for `TODO(step 6):`)As you did before, you now need to update the existing pipeline with the modified pipeline definition. The instructions are the same as Step 5. Update the pipeline using `tfx pipeline update`, and create an execution run using `tfx run create`.
###Code
!tfx pipeline update \
--pipeline-path=kubeflow_runner.py \
--endpoint={ENDPOINT}
!tfx run create --pipeline-name {PIPELINE_NAME} --endpoint={ENDPOINT}
###Output
_____no_output_____
###Markdown
When this execution run finishes successfully, you have now created and run your first TFX pipeline in AI Platform Pipelines!

**NOTE:** You might have noticed that every time we create a pipeline run, every component runs again and again even though the input and the parameters were not changed. It is a waste of time and resources, and you can skip those executions with pipeline caching. You can enable caching by specifying `enable_cache=True` for the `Pipeline` object in `pipeline.py`.

Step 7. (*Optional*) Try BigQueryExampleGen

[BigQuery](https://cloud.google.com/bigquery) is a serverless, highly scalable, and cost-effective cloud data warehouse. BigQuery can be used as a source for training examples in TFX. In this step, we will add `BigQueryExampleGen` to the pipeline.

>**Double-click to open `pipeline.py`**. Comment out `CsvExampleGen` and uncomment the line which creates an instance of `BigQueryExampleGen`. You also need to uncomment the `query` argument of the `create_pipeline` function.

We need to specify which GCP project to use for BigQuery, and this is done by setting `--project` in `beam_pipeline_args` when creating a pipeline.

>**Double-click to open `configs.py`**. Uncomment the definition of `GOOGLE_CLOUD_REGION`, `BIG_QUERY_WITH_DIRECT_RUNNER_BEAM_PIPELINE_ARGS` and `BIG_QUERY_QUERY`. You should replace the region value in this file with the correct value for your GCP project.

>**Note: You MUST set your GCP region in the `configs.py` file before proceeding.**

>**Change directory one level up.** Click the name of the directory above the file list. The name of the directory is the name of the pipeline, which is `my_pipeline` if you didn't change it.

>**Double-click to open `kubeflow_runner.py`**. Uncomment two arguments, `query` and `beam_pipeline_args`, for the `create_pipeline` function.

Now the pipeline is ready to use BigQuery as an example source. Update the pipeline as before and create a new execution run as we did in steps 5 and 6.
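Before running the update, it may help to see roughly what the uncommented block in `configs.py` expresses. The `--project` flag is the part discussed above; the temp location shown here is an assumption about a bucket Beam can write to, not a value taken from the template.

```python
# Illustrative sketch only -- the real definition is the commented-out block in configs.py.
GOOGLE_CLOUD_PROJECT = 'my-gcp-project'   # placeholder: your GCP project id

BIG_QUERY_WITH_DIRECT_RUNNER_BEAM_PIPELINE_ARGS = [
    # Tell Beam (and therefore BigQuery) which GCP project to query and bill against.
    '--project=' + GOOGLE_CLOUD_PROJECT,
    # Assumption: a GCS location Beam can use for temporary files.
    '--temp_location=gs://' + GOOGLE_CLOUD_PROJECT + '-kubeflowpipelines-default/tmp',
]
```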
###Code
!tfx pipeline update \
--pipeline-path=kubeflow_runner.py \
--endpoint={ENDPOINT}
!tfx run create --pipeline-name {PIPELINE_NAME} --endpoint={ENDPOINT}
###Output
_____no_output_____
###Markdown
Step 8. (*Optional*) Try Dataflow with KFP

Several [TFX components use Apache Beam](https://www.tensorflow.org/tfx/guide/beam) to implement data-parallel pipelines, which means that you can distribute data processing workloads using [Google Cloud Dataflow](https://cloud.google.com/dataflow/). In this step, we will set the Kubeflow orchestrator to use Dataflow as the data processing back-end for Apache Beam.

>**Double-click `pipeline` to change directory, and double-click to open `configs.py`**. Uncomment the definition of `GOOGLE_CLOUD_REGION`, and `DATAFLOW_BEAM_PIPELINE_ARGS`.

>**Change directory one level up.** Click the name of the directory above the file list. The name of the directory is the name of the pipeline, which is `my_pipeline` if you didn't change it.

>**Double-click to open `kubeflow_runner.py`**. Uncomment `beam_pipeline_args`. (Also make sure to comment out the current `beam_pipeline_args` that you added in Step 7.)

Now the pipeline is ready to use Dataflow. Update the pipeline and create an execution run as we did in steps 5 and 6.
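The Dataflow-specific Beam options you uncomment in `configs.py` express roughly the following. `--runner=DataflowRunner` is what actually moves the Beam work onto Dataflow; the temp location is an assumption rather than a value taken from the template.

```python
# Illustrative sketch only -- the real definition is the commented-out block in configs.py.
GOOGLE_CLOUD_PROJECT = 'my-gcp-project'   # placeholder: your GCP project id
GOOGLE_CLOUD_REGION = 'us-central1'       # placeholder: your GCP region

DATAFLOW_BEAM_PIPELINE_ARGS = [
    '--project=' + GOOGLE_CLOUD_PROJECT,
    '--runner=DataflowRunner',            # run the Beam stages on Google Cloud Dataflow
    '--region=' + GOOGLE_CLOUD_REGION,
    # Assumption: a GCS location Dataflow can use for temporary files.
    '--temp_location=gs://' + GOOGLE_CLOUD_PROJECT + '-kubeflowpipelines-default/tmp',
]
```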
###Code
!tfx pipeline update \
  --pipeline-path=kubeflow_runner.py \
--endpoint={ENDPOINT}
!tfx run create --pipeline-name {PIPELINE_NAME} --endpoint={ENDPOINT}
###Output
_____no_output_____
###Markdown
You can find your Dataflow jobs in [Dataflow in Cloud Console](http://console.cloud.google.com/dataflow).

>**Double-click to open `pipeline.py`**. Reset the value of `enable_cache` to `True`.

Step 9. (*Optional*) Try Cloud AI Platform Training and Prediction with KFP

TFX interoperates with several managed GCP services, such as [Cloud AI Platform for Training and Prediction](https://cloud.google.com/ai-platform/). You can set your `Trainer` component to use Cloud AI Platform Training, a managed service for training ML models. Moreover, when your model is built and ready to be served, you can *push* your model to Cloud AI Platform Prediction for serving. In this step, we will set our `Trainer` and `Pusher` components to use Cloud AI Platform services.

>Before editing files, you might first have to enable the *AI Platform Training & Prediction API*.

>**Double-click `pipeline` to change directory, and double-click to open `configs.py`**. Uncomment the definition of `GOOGLE_CLOUD_REGION`, `GCP_AI_PLATFORM_TRAINING_ARGS` and `GCP_AI_PLATFORM_SERVING_ARGS`. We will use our custom built container image to train a model in Cloud AI Platform Training, so we should set `masterConfig.imageUri` in `GCP_AI_PLATFORM_TRAINING_ARGS` to the same value as `CUSTOM_TFX_IMAGE` above.

>**Change directory one level up, and double-click to open `kubeflow_runner.py`**. Uncomment `ai_platform_training_args` and `ai_platform_serving_args`.

Update the pipeline and create an execution run as we did in steps 5 and 6.
###Code
!tfx pipeline update \
  --pipeline-path=kubeflow_runner.py \
--endpoint={ENDPOINT}
!tfx run create --pipeline-name {PIPELINE_NAME} --endpoint={ENDPOINT}
###Output
_____no_output_____
###Markdown
Copyright © 2020 The TensorFlow Authors. Licensed under the Apache License, Version 2.0 (the \"License\");you may not use this file except in compliance with the License.You may obtain a copy of the License at [https://www.apache.org/licenses/LICENSE-2.0](https://www.apache.org/licenses/LICENSE-2.0)Unless required by applicable law or agreed to in writing, software distributed under the License is distributed on an \"AS IS\" BASIS, WITHOUT WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the License for the specific language governing permissions and limitations under the License. Create a TFX pipeline using templates IntroductionThis document will provide instructions to create a TensorFlow Extended (TFX) pipelineusing *templates* which are provided with TFX Python package.Many of the instructions are Linux shell commands, which will run on an AI Platform Notebooks instance. Corresponding Jupyter Notebook code cells which invoke those commands using `!` are provided.You will build a pipeline using [Taxi Trips dataset](https://data.cityofchicago.org/Transportation/Taxi-Trips/wrvz-psew)released by the City of Chicago. We strongly encourage you to try buildingyour own pipeline using your dataset by utilizing this pipeline as a baseline. Step 1. Set up your environment.AI Platform Pipelines will prepare a development environment to build a pipeline, and a Kubeflow Pipeline cluster to run the newly built pipeline.**NOTE:** To select a particular TensorFlow version, or select a GPU instance, create a TensorFlow pre-installed instance in AI Platform Notebooks.**NOTE:** There might be some errors during package installation. For example: >"ERROR: some-package 0.some_version.1 has requirement other-package!=2.0.,<3,>=1.15, but you'll have other-package 2.0.0 which is incompatible." Please ignore these errors at this moment. Install `tfx`, `kfp`, and `skaffold`, and add installation path to the `PATH` environment variable.
###Code
# Install tfx and kfp Python packages.
import sys
!{sys.executable} -m pip install --user --upgrade -q tfx==0.23.0
!{sys.executable} -m pip install --user --upgrade -q kfp==1.0.0
# Download skaffold and set it executable.
!curl -Lo skaffold https://storage.googleapis.com/skaffold/releases/latest/skaffold-linux-amd64 && chmod +x skaffold && mv skaffold /home/jupyter/.local/bin/
# Set `PATH` to include user python binary directory and a directory containing `skaffold`.
PATH=%env PATH
%env PATH={PATH}:/home/jupyter/.local/bin
###Output
_____no_output_____
###Markdown
Let's check the versions of TFX.
###Code
!python3 -c "import tfx; print('TFX version: {}'.format(tfx.__version__))"
###Output
_____no_output_____
###Markdown
In AI Platform Pipelines, TFX is running in a hosted Kubernetes environment using [Kubeflow Pipelines](https://www.kubeflow.org/docs/pipelines/overview/pipelines-overview/).Let's set some environment variables to use Kubeflow Pipelines.First, get your GCP project ID.
###Code
# Read GCP project id from env.
shell_output=!gcloud config list --format 'value(core.project)' 2>/dev/null
GOOGLE_CLOUD_PROJECT=shell_output[0]
%env GOOGLE_CLOUD_PROJECT={GOOGLE_CLOUD_PROJECT}
print("GCP project ID:" + GOOGLE_CLOUD_PROJECT)
###Output
_____no_output_____
###Markdown
We also need to access your KFP cluster. You can access it in your Google Cloud Console under "AI Platform > Pipeline" menu. The "endpoint" of the KFP cluster can be found from the URL of the Pipelines dashboard, or you can get it from the URL of the Getting Started page where you launched this notebook. Let's create an `ENDPOINT` environment variable and set it to the KFP cluster endpoint. **ENDPOINT should contain only the hostname part of the URL.** For example, if the URL of the KFP dashboard is `https://1e9deb537390ca22-dot-asia-east1.pipelines.googleusercontent.com//start`, ENDPOINT value becomes `1e9deb537390ca22-dot-asia-east1.pipelines.googleusercontent.com`.>**NOTE: You MUST set your ENDPOINT value below.**
###Code
# This refers to the KFP cluster endpoint
ENDPOINT='' # Enter your ENDPOINT here.
if not ENDPOINT:
from absl import logging
logging.error('Set your ENDPOINT in this cell.')
###Output
_____no_output_____
###Markdown
Set the image name as `tfx-pipeline` under the current GCP project.
###Code
# Docker image name for the pipeline image.
CUSTOM_TFX_IMAGE='gcr.io/' + GOOGLE_CLOUD_PROJECT + '/tfx-pipeline'
###Output
_____no_output_____
###Markdown
And, it's done. We are ready to create a pipeline. Step 2. Copy the predefined template to your project directory.In this step, we will create a working pipeline project directory and files by copying additional files from a predefined template.You may give your pipeline a different name by changing the `PIPELINE_NAME` below. This will also become the name of the project directory where your files will be put.
###Code
PIPELINE_NAME="my_pipeline"
import os
PROJECT_DIR=os.path.join(os.path.expanduser("~"),"imported",PIPELINE_NAME)
###Output
_____no_output_____
###Markdown
TFX includes the `taxi` template with the TFX Python package. If you are planning to solve a point-wise prediction problem, including classification and regression, this template could be used as a starting point. The `tfx template copy` CLI command copies predefined template files into your project directory.
###Code
!tfx template copy \
--pipeline-name={PIPELINE_NAME} \
--destination-path={PROJECT_DIR} \
--model=taxi
###Output
_____no_output_____
###Markdown
Change the working directory context in this notebook to the project directory.
###Code
%cd {PROJECT_DIR}
###Output
_____no_output_____
###Markdown
>NOTE: Don't forget to change directory in `File Browser` on the left by clicking into the project directory once it is created.

Step 3. Browse your copied source files

The TFX template provides basic scaffold files to build a pipeline, including Python source code, sample data, and Jupyter Notebooks to analyse the output of the pipeline. The `taxi` template uses the same *Chicago Taxi* dataset and ML model as the [Airflow Tutorial](https://www.tensorflow.org/tfx/tutorials/tfx/airflow_workshop).

Here is a brief introduction to each of the Python files.
- `pipeline` - This directory contains the definition of the pipeline
  - `configs.py` — defines common constants for pipeline runners
  - `pipeline.py` — defines TFX components and a pipeline
- `models` - This directory contains ML model definitions.
  - `features.py`, `features_test.py` — defines features for the model
  - `preprocessing.py`, `preprocessing_test.py` — defines preprocessing jobs using `tf::Transform`
  - `estimator` - This directory contains an Estimator based model.
    - `constants.py` — defines constants of the model
    - `model.py`, `model_test.py` — defines DNN model using TF estimator
  - `keras` - This directory contains a Keras based model.
    - `constants.py` — defines constants of the model
    - `model.py`, `model_test.py` — defines DNN model using Keras
- `beam_dag_runner.py`, `kubeflow_dag_runner.py` — define runners for each orchestration engine

You might notice that there are some files with `_test.py` in their name. These are unit tests of the pipeline and it is recommended to add more unit tests as you implement your own pipelines.

You can run unit tests by supplying the module name of the test files with the `-m` flag. You can usually get a module name by deleting the `.py` extension and replacing `/` with `.`. For example:
###Code
!{sys.executable} -m models.features_test
!{sys.executable} -m models.keras.model_test
###Output
_____no_output_____
###Markdown
Step 4. Run your first TFX pipelineComponents in the TFX pipeline will generate outputs for each run as [ML Metadata Artifacts](https://www.tensorflow.org/tfx/guide/mlmd), and they need to be stored somewhere. You can use any storage which the KFP cluster can access, and for this example we will use Google Cloud Storage (GCS). A default GCS bucket should have been created automatically. Its name will be `-kubeflowpipelines-default`. Let's upload our sample data to GCS bucket so that we can use it in our pipeline later.
###Code
!gsutil cp data/data.csv gs://{GOOGLE_CLOUD_PROJECT}-kubeflowpipelines-default/tfx-template/data/data.csv
###Output
_____no_output_____
###Markdown
Let's create a TFX pipeline using the `tfx pipeline create` command.>Note: When creating a pipeline for KFP, we need a container image which will be used to run our pipeline. And `skaffold` will build the image for us. Because skaffold pulls base images from the docker hub, it will take 5~10 minutes when we build the image for the first time, but it will take much less time from the second build.
###Code
!tfx pipeline create \
--pipeline-path=kubeflow_dag_runner.py \
--endpoint={ENDPOINT} \
--build-target-image={CUSTOM_TFX_IMAGE}
###Output
_____no_output_____
###Markdown
While creating a pipeline, `Dockerfile` and `build.yaml` will be generated to build a Docker image. Don't forget to add these files to the source control system (for example, git) along with other source files.A pipeline definition file for [argo](https://argoproj.github.io/argo/) will be generated, too. The name of this file is `${PIPELINE_NAME}.tar.gz`. For example, it will be `my_pipeline.tar.gz` if the name of your pipeline is `my_pipeline`. It is recommended NOT to include this pipeline definition file into source control, because it will be generated from other Python files and will be updated whenever you update the pipeline. For your convenience, this file is already listed in `.gitignore` which is generated automatically.NOTE: `kubeflow` will be automatically selected as an orchestration engine if `airflow` is not installed and `--engine` is not specified.Now start an execution run with the newly created pipeline using the `tfx run create` command.
###Code
!tfx run create --pipeline-name={PIPELINE_NAME} --endpoint={ENDPOINT}
###Output
_____no_output_____
###Markdown
Or, you can also run the pipeline in the KFP Dashboard. The new execution run will be listed under Experiments in the KFP Dashboard. Clicking into the experiment will allow you to monitor progress and visualize the artifacts created during the execution run. However, we recommend visiting the KFP Dashboard. You can access the KFP Dashboard from the Cloud AI Platform Pipelines menu in Google Cloud Console. Once you visit the dashboard, you will be able to find the pipeline, and access a wealth of information about the pipeline.For example, you can find your runs under the *Experiments* menu, and when you open your execution run under Experiments you can find all your artifacts from the pipeline under *Artifacts* menu.>Note: If your pipeline run fails, you can see detailed logs for each TFX component in the Experiments tab in the KFP Dashboard. One of the major sources of failure is permission related problems. Please make sure your KFP cluster has permissions to access Google Cloud APIs. This can be configured [when you create a KFP cluster in GCP](https://cloud.google.com/ai-platform/pipelines/docs/setting-up), or see [Troubleshooting document in GCP](https://cloud.google.com/ai-platform/pipelines/docs/troubleshooting). Step 5. Add components for data validation.In this step, you will add components for data validation including `StatisticsGen`, `SchemaGen`, and `ExampleValidator`. If you are interested in data validation, please see [Get started with Tensorflow Data Validation](https://www.tensorflow.org/tfx/data_validation/get_started).>**Double-click to change directory to `pipeline` and double-click again to open `pipeline.py`**. Find and uncomment the 3 lines which add `StatisticsGen`, `SchemaGen`, and `ExampleValidator` to the pipeline. (Tip: search for comments containing `TODO(step 5):`). Make sure to save `pipeline.py` after you edit it.You now need to update the existing pipeline with modified pipeline definition. Use the `tfx pipeline update` command to update your pipeline, followed by the `tfx run create` command to create a new execution run of your updated pipeline.
###Code
# Update the pipeline
!tfx pipeline update \
--pipeline-path=kubeflow_dag_runner.py \
--endpoint={ENDPOINT}
# You can run the pipeline the same way.
!tfx run create --pipeline-name {PIPELINE_NAME} --endpoint={ENDPOINT}
###Output
_____no_output_____
###Markdown
Check pipeline outputsVisit the KFP dashboard to find pipeline outputs in the page for your pipeline run. Click the *Experiments* tab on the left, and *All runs* in the Experiments page. You should be able to find the latest run under the name of your pipeline. Step 6. Add components for training.In this step, you will add components for training and model validation including `Transform`, `Trainer`, `ResolverNode`, `Evaluator`, and `Pusher`.>**Double-click to open `pipeline.py`**. Find and uncomment the 5 lines which add `Transform`, `Trainer`, `ResolverNode`, `Evaluator` and `Pusher` to the pipeline. (Tip: search for `TODO(step 6):`)As you did before, you now need to update the existing pipeline with the modified pipeline definition. The instructions are the same as Step 5. Update the pipeline using `tfx pipeline update`, and create an execution run using `tfx run create`.
###Code
!tfx pipeline update \
--pipeline-path=kubeflow_dag_runner.py \
--endpoint={ENDPOINT}
!tfx run create --pipeline-name {PIPELINE_NAME} --endpoint={ENDPOINT}
###Output
_____no_output_____
###Markdown
When this execution run finishes successfully, you have now created and run your first TFX pipeline in AI Platform Pipelines!

**NOTE:** You might have noticed that every time we create a pipeline run, every component runs again and again even though the input and the parameters were not changed. It is a waste of time and resources, and you can skip those executions with pipeline caching. You can enable caching by specifying `enable_cache=True` for the `Pipeline` object in `pipeline.py`.

Step 7. (*Optional*) Try BigQueryExampleGen

[BigQuery](https://cloud.google.com/bigquery) is a serverless, highly scalable, and cost-effective cloud data warehouse. BigQuery can be used as a source for training examples in TFX. In this step, we will add `BigQueryExampleGen` to the pipeline.

>**Double-click to open `pipeline.py`**. Comment out `CsvExampleGen` and uncomment the line which creates an instance of `BigQueryExampleGen`. You also need to uncomment the `query` argument of the `create_pipeline` function.

We need to specify which GCP project to use for BigQuery, and this is done by setting `--project` in `beam_pipeline_args` when creating a pipeline.

>**Double-click to open `configs.py`**. Uncomment the definition of `GOOGLE_CLOUD_REGION`, `BIG_QUERY_WITH_DIRECT_RUNNER_BEAM_PIPELINE_ARGS` and `BIG_QUERY_QUERY`. You should replace the region value in this file with the correct value for your GCP project.

>**Note: You MUST set your GCP region in the `configs.py` file before proceeding.**

>**Change directory one level up.** Click the name of the directory above the file list. The name of the directory is the name of the pipeline, which is `my_pipeline` if you didn't change it.

>**Double-click to open `kubeflow_dag_runner.py`**. Uncomment two arguments, `query` and `beam_pipeline_args`, for the `create_pipeline` function.

Now the pipeline is ready to use BigQuery as an example source. Update the pipeline as before and create a new execution run as we did in steps 5 and 6.
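Going back to the caching note above: the change in `pipeline.py` amounts to passing one extra argument when the `Pipeline` object is constructed. A minimal sketch, with the other arguments abbreviated from the template:

```python
# Sketch of the relevant part of create_pipeline() in pipeline.py.
from tfx.orchestration import pipeline

def create_pipeline(pipeline_name, pipeline_root, components,
                    metadata_connection_config=None, beam_pipeline_args=None):
    return pipeline.Pipeline(
        pipeline_name=pipeline_name,
        pipeline_root=pipeline_root,
        components=components,
        enable_cache=True,  # skip re-running components whose inputs have not changed
        metadata_connection_config=metadata_connection_config,
        beam_pipeline_args=beam_pipeline_args,
    )
```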
###Code
!tfx pipeline update \
--pipeline-path=kubeflow_dag_runner.py \
--endpoint={ENDPOINT}
!tfx run create --pipeline-name {PIPELINE_NAME} --endpoint={ENDPOINT}
###Output
_____no_output_____
###Markdown
Step 8. (*Optional*) Try Dataflow with KFP

Several [TFX components use Apache Beam](https://www.tensorflow.org/tfx/guide/beam) to implement data-parallel pipelines, which means that you can distribute data processing workloads using [Google Cloud Dataflow](https://cloud.google.com/dataflow/). In this step, we will set the Kubeflow orchestrator to use Dataflow as the data processing back-end for Apache Beam.

>**Double-click `pipeline` to change directory, and double-click to open `configs.py`**. Uncomment the definition of `GOOGLE_CLOUD_REGION`, and `DATAFLOW_BEAM_PIPELINE_ARGS`.

>**Change directory one level up.** Click the name of the directory above the file list. The name of the directory is the name of the pipeline, which is `my_pipeline` if you didn't change it.

>**Double-click to open `kubeflow_dag_runner.py`**. Uncomment `beam_pipeline_args`. (Also make sure to comment out the current `beam_pipeline_args` that you added in Step 7.)

Now the pipeline is ready to use Dataflow. Update the pipeline and create an execution run as we did in steps 5 and 6.
###Code
!tfx pipeline update \
--pipeline-path=kubeflow_dag_runner.py \
--endpoint={ENDPOINT}
!tfx run create --pipeline-name {PIPELINE_NAME} --endpoint={ENDPOINT}
###Output
_____no_output_____
###Markdown
You can find your Dataflow jobs in [Dataflow in Cloud Console](http://console.cloud.google.com/dataflow).>**Double-click to open `pipeline.py`**. Reset the value of `enable_cache` to `True`. Step 9. (*Optional*) Try Cloud AI Platform Training and Prediction with KFPTFX interoperates with several managed GCP services, such as [Cloud AI Platform for Training and Prediction](https://cloud.google.com/ai-platform/). You can set your `Trainer` component to use Cloud AI Platform Training, a managed service for training ML models. Moreover, when your model is built and ready to be served, you can *push* your model to Cloud AI Platform Prediction for serving. In this step, we will set our `Trainer` and `Pusher` component to use Cloud AI Platform services.>Before editing files, you might first have to enable *AI Platform Training & Prediction API*.>**Double-click `pipeline` to change directory, and double-click to open `configs.py`**. Uncomment the definition of `GOOGLE_CLOUD_REGION`, `GCP_AI_PLATFORM_TRAINING_ARGS` and `GCP_AI_PLATFORM_SERVING_ARGS`. We will use our custom built container image to train a model in Cloud AI Platform Training, so we should set `masterConfig.imageUri` in `GCP_AI_PLATFORM_TRAINING_ARGS` to the same value as `CUSTOM_TFX_IMAGE` above.>**Change directory one level up, and double-click to open `kubeflow_dag_runner.py`**. Uncomment `ai_platform_training_args` and `ai_platform_serving_args`.Update the pipeline and create an execution run as we did in step 5 and 6.
###Code
!tfx pipeline update \
--pipeline-path=kubeflow_dag_runner.py \
--endpoint={ENDPOINT}
!tfx run create --pipeline-name {PIPELINE_NAME} --endpoint={ENDPOINT}
###Output
_____no_output_____ |
JupyterNotebooks/2019/Example - Making a circle.ipynb | ###Markdown
Making a Circle

Let's look at an example of making a circle and go through it line by line.
###Code
%matplotlib inline
###Output
_____no_output_____
###Markdown
This line is Jupyter Notebook specific. The `%matplotlib inline` magic makes the notebook use Matplotlib as the plotting library and renders the figures inline, directly below the code cells that produce them. (For interactive, zoomable figures you would use `%matplotlib notebook` instead.)
###Code
import numpy as np
###Output
_____no_output_____
###Markdown
This line imports NumPy, the numerical Python package, and renames it to `np`. We rename it for two reasons: 1) `np` is faster to type than `numpy`, and 2) it is the convention used across the internet, so when you read other people's code it will be easier to follow.
###Code
import matplotlib.pyplot as plt
###Output
_____no_output_____
###Markdown
Next we import the pyplot module from matplotlib. This is what we will use for plotting. The next thing we need to do is define a circle. We know that circles can be represented mathematically in two forms:

$$X^2 + Y^2 = R^2$$

$$r e^{i\theta} = r(\cos\theta + i\sin\theta)$$

and, in parametric form,

$$X = r\cos\theta, \quad Y = r\sin\theta$$

We need X and Y in order to plot, so clearly the easiest solution is to generate an array of $\theta$ values and map them into X and Y for plotting.
###Code
thetas = np.arange(-np.pi,np.pi,0.01)
###Output
_____no_output_____
###Markdown
This line generates an array of angles from $-\pi$ to $\pi$ in increments of 0.01 radians.
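Note that `np.arange` excludes the stop value, so the last angle falls just short of $\pi$. If you would rather specify the number of points instead of the step size, `np.linspace` is an equivalent alternative (a small sketch, assuming NumPy is imported as above):

```python
# 629 evenly spaced angles from -pi up to and including pi
thetas = np.linspace(-np.pi, np.pi, 629)
print(thetas[0], thetas[-1])  # -3.141592653589793 3.141592653589793
```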
###Code
X = np.cos(thetas)
###Output
_____no_output_____
###Markdown
This line applies the cosine function to every element of the array thetas, and makes a new array we call X
###Code
Y = np.sin(thetas)
###Output
_____no_output_____
###Markdown
This line does the same but with the sine function. Next we plot the results. We call `plt.plot` with an array of x values and an equal-sized array of y values. This gives us the graph. We then use the `axis('equal')` command to make the graph show a circle as a circle instead of an ellipse. Finally, we show the graph.
###Code
plt.plot(X,Y)
plt.axis('equal')
plt.show()
###Output
_____no_output_____ |
2. Jupyter Notebooks/9.4.6_Two_Sum_solution.ipynb | ###Markdown
Two Sum problem

Given a list of integers, return the indices of the pair of integers that add up to a given target. Each input has exactly one solution.

Example: for [8,6,11,3] with target 9, return [1,3].
###Code
def two_sum(nums, target):
    # Map each value seen so far to its index.
    d = {}
    for i in range(len(nums)):
        # If the complement of nums[i] was seen earlier, we have found the pair.
        if target - nums[i] in d:
            print(d)  # debug: show the lookup table built so far
            return [d[target - nums[i]], i]
        d[nums[i]] = i
    # No pair of numbers sums to the target.
    return -1
L = [8,6,11,3]
print(two_sum(L,9))
L2 = [2,5,3,7,4]
print(two_sum(L2,10))
print(two_sum([3, 3], 6))
###Output
{3: 0}
[0, 1]
|
Copy_of_dmn_torch.ipynb | ###Markdown
###Code
%env PYTHONPATH=
%%bash
MINICONDA_INSTALLER_SCRIPT=Miniconda3-4.5.4-Linux-x86_64.sh
MINICONDA_PREFIX=/usr/local
wget https://repo.continuum.io/miniconda/$MINICONDA_INSTALLER_SCRIPT
chmod +x $MINICONDA_INSTALLER_SCRIPT
./$MINICONDA_INSTALLER_SCRIPT -b -f -p $MINICONDA_PREFIX
import sys
_ = (sys.path
.append("/usr/local/lib/python3.6/site-packages"))
!python --version
!conda install pytorch=0.1.12 -c pytorch
!git clone https://github.com/dandelin/Dynamic-memory-networks-plus-Pytorch.git
%cd Dynamic-memory-networks-plus-Pytorch
!python --version
%%shell
chmod +x fetch_data.sh
./fetch_data.sh
from babi_loader import BabiDataset, pad_collate, get_raw_babi, get_unindexed_qa
import os
import torch
import torch.nn as nn
import torch.nn.functional as F
import torch.nn.init as init
from torch.autograd import Variable
from torch.utils.data import DataLoader
import random
def position_encoding(embedded_sentence):
'''
embedded_sentence.size() -> (#batch, #sentence, #token, #embedding)
l.size() -> (#sentence, #embedding)
output.size() -> (#batch, #sentence, #embedding)
'''
_, sent, slen, elen = embedded_sentence.size()
l = [[(1 - s/(slen-1)) - (e/(elen-1)) * (1 - 2*s/(slen-1)) for e in range(elen)] for s in range(slen)]
l = torch.FloatTensor(l)
l = l.unsqueeze(0) # for #batch
l = l.unsqueeze(1) # for #sen
l = l.expand_as(embedded_sentence)
weighted = embedded_sentence * Variable(l.cuda())
return torch.sum(weighted, dim=2).squeeze(2) # sum with tokens
class AttentionGRUCell(nn.Module):
def __init__(self, input_size, hidden_size):
super(AttentionGRUCell, self).__init__()
self.hidden_size = hidden_size
self.Wr = nn.Linear(input_size, hidden_size)
init.xavier_normal_(self.Wr.state_dict()['weight'])
self.Ur = nn.Linear(hidden_size, hidden_size)
init.xavier_normal_(self.Ur.state_dict()['weight'])
self.W = nn.Linear(input_size, hidden_size)
init.xavier_normal_(self.W.state_dict()['weight'])
self.U = nn.Linear(hidden_size, hidden_size)
init.xavier_normal_(self.U.state_dict()['weight'])
def forward(self, fact, C, g):
'''
fact.size() -> (#batch, #hidden = #embedding)
c.size() -> (#hidden, ) -> (#batch, #hidden = #embedding)
r.size() -> (#batch, #hidden = #embedding)
h_tilda.size() -> (#batch, #hidden = #embedding)
g.size() -> (#batch, )
'''
r = torch.sigmoid(self.Wr(fact) + self.Ur(C))
h_tilda = torch.tanh(self.W(fact) + r * self.U(C))
g = g.unsqueeze(1).expand_as(h_tilda)
h = g * h_tilda + (1 - g) * C
return h
class AttentionGRU(nn.Module):
def __init__(self, input_size, hidden_size):
super(AttentionGRU, self).__init__()
self.hidden_size = hidden_size
self.AGRUCell = AttentionGRUCell(input_size, hidden_size)
def forward(self, facts, G):
'''
facts.size() -> (#batch, #sentence, #hidden = #embedding)
fact.size() -> (#batch, #hidden = #embedding)
G.size() -> (#batch, #sentence)
g.size() -> (#batch, )
C.size() -> (#batch, #hidden)
'''
batch_num, sen_num, embedding_size = facts.size()
C = Variable(torch.zeros(self.hidden_size)).cuda()
for sid in range(sen_num):
fact = facts[:, sid, :]
g = G[:, sid]
if sid == 0:
C = C.unsqueeze(0).expand_as(fact)
C = self.AGRUCell(fact, C, g)
return C
class EpisodicMemory(nn.Module):
def __init__(self, hidden_size):
super(EpisodicMemory, self).__init__()
self.AGRU = AttentionGRU(hidden_size, hidden_size)
self.z1 = nn.Linear(4 * hidden_size, hidden_size)
self.z2 = nn.Linear(hidden_size, 1)
self.next_mem = nn.Linear(3 * hidden_size, hidden_size)
init.xavier_normal_(self.z1.state_dict()['weight'])
init.xavier_normal_(self.z2.state_dict()['weight'])
init.xavier_normal_(self.next_mem.state_dict()['weight'])
def make_interaction(self, facts, questions, prevM):
'''
facts.size() -> (#batch, #sentence, #hidden = #embedding)
questions.size() -> (#batch, 1, #hidden)
prevM.size() -> (#batch, #sentence = 1, #hidden = #embedding)
z.size() -> (#batch, #sentence, 4 x #embedding)
G.size() -> (#batch, #sentence)
'''
batch_num, sen_num, embedding_size = facts.size()
questions = questions.expand_as(facts)
prevM = prevM.expand_as(facts)
z = torch.cat([
facts * questions,
facts * prevM,
torch.abs(facts - questions),
torch.abs(facts - prevM)
], dim=2)
z = z.view(-1, 4 * embedding_size)
G = torch.tanh(self.z1(z))
G = self.z2(G)
G = G.view(batch_num, -1)
        G = F.softmax(G, dim=1)  # attention weights over the sentences of each batch element
return G
def forward(self, facts, questions, prevM):
'''
facts.size() -> (#batch, #sentence, #hidden = #embedding)
questions.size() -> (#batch, #sentence = 1, #hidden)
prevM.size() -> (#batch, #sentence = 1, #hidden = #embedding)
G.size() -> (#batch, #sentence)
C.size() -> (#batch, #hidden)
concat.size() -> (#batch, 3 x #embedding)
'''
G = self.make_interaction(facts, questions, prevM)
C = self.AGRU(facts, G)
concat = torch.cat([prevM.squeeze(1), C, questions.squeeze(1)], dim=1)
next_mem = F.relu(self.next_mem(concat))
next_mem = next_mem.unsqueeze(1)
return next_mem
class QuestionModule(nn.Module):
def __init__(self, vocab_size, hidden_size):
super(QuestionModule, self).__init__()
self.gru = nn.GRU(hidden_size, hidden_size, batch_first=True)
def forward(self, questions, word_embedding):
'''
questions.size() -> (#batch, #token)
word_embedding() -> (#batch, #token, #embedding)
gru() -> (1, #batch, #hidden)
'''
questions = word_embedding(questions)
_, questions = self.gru(questions)
questions = questions.transpose(0, 1)
return questions
class InputModule(nn.Module):
def __init__(self, vocab_size, hidden_size):
super(InputModule, self).__init__()
self.hidden_size = hidden_size
self.gru = nn.GRU(hidden_size, hidden_size, bidirectional=True, batch_first=True)
for name, param in self.gru.state_dict().items():
if 'weight' in name: init.xavier_normal_(param)
self.dropout = nn.Dropout(0.1)
def forward(self, contexts, word_embedding):
'''
contexts.size() -> (#batch, #sentence, #token)
word_embedding() -> (#batch, #sentence x #token, #embedding)
position_encoding() -> (#batch, #sentence, #embedding)
facts.size() -> (#batch, #sentence, #hidden = #embedding)
'''
batch_num, sen_num, token_num = contexts.size()
contexts = contexts.view(batch_num, -1)
contexts = word_embedding(contexts)
contexts = contexts.view(batch_num, sen_num, token_num, -1)
contexts = position_encoding(contexts)
contexts = self.dropout(contexts)
h0 = Variable(torch.zeros(2, batch_num, self.hidden_size).cuda())
facts, hdn = self.gru(contexts, h0)
        # Sum the forward and backward GRU outputs; use self.hidden_size rather than the global.
        facts = facts[:, :, :self.hidden_size] + facts[:, :, self.hidden_size:]
return facts
class AnswerModule(nn.Module):
def __init__(self, vocab_size, hidden_size):
super(AnswerModule, self).__init__()
self.z = nn.Linear(2 * hidden_size, vocab_size)
init.xavier_normal_(self.z.state_dict()['weight'])
self.dropout = nn.Dropout(0.1)
def forward(self, M, questions):
M = self.dropout(M)
concat = torch.cat([M, questions], dim=2).squeeze(1)
z = self.z(concat)
return z
class DMNPlus(nn.Module):
def __init__(self, hidden_size, vocab_size, num_hop=3, qa=None):
super(DMNPlus, self).__init__()
self.num_hop = num_hop
self.qa = qa
self.word_embedding = nn.Embedding(vocab_size, hidden_size, padding_idx=0, sparse=True).cuda()
init.uniform_(self.word_embedding.state_dict()['weight'], a=-(3**0.5), b=3**0.5)
self.criterion = nn.CrossEntropyLoss(reduction='sum')
self.input_module = InputModule(vocab_size, hidden_size)
self.question_module = QuestionModule(vocab_size, hidden_size)
self.memory = EpisodicMemory(hidden_size)
self.answer_module = AnswerModule(vocab_size, hidden_size)
def forward(self, contexts, questions):
'''
contexts.size() -> (#batch, #sentence, #token) -> (#batch, #sentence, #hidden = #embedding)
questions.size() -> (#batch, #token) -> (#batch, 1, #hidden)
'''
facts = self.input_module(contexts, self.word_embedding)
questions = self.question_module(questions, self.word_embedding)
M = questions
for hop in range(self.num_hop):
M = self.memory(facts, questions, M)
preds = self.answer_module(M, questions)
return preds
def interpret_indexed_tensor(self, var):
if len(var.size()) == 3:
# var -> n x #sen x #token
for n, sentences in enumerate(var):
for i, sentence in enumerate(sentences):
s = ' '.join([self.qa.IVOCAB[elem.data[0]] for elem in sentence])
print(f'{n}th of batch, {i}th sentence, {s}')
elif len(var.size()) == 2:
# var -> n x #token
for n, sentence in enumerate(var):
s = ' '.join([self.qa.IVOCAB[elem.data[0]] for elem in sentence])
print(f'{n}th of batch, {s}')
elif len(var.size()) == 1:
# var -> n (one token per batch)
for n, token in enumerate(var):
s = self.qa.IVOCAB[token.data[0]]
print(f'{n}th of batch, {s}')
def get_loss(self, contexts, questions, targets):
output = self.forward(contexts, questions)
loss = self.criterion(output, targets)
reg_loss = 0
for param in self.parameters():
reg_loss += 0.001 * torch.sum(param * param)
        preds = F.softmax(output, dim=1)
        _, pred_ids = torch.max(preds, dim=1)
        # Compare predictions against the targets passed into this method.
        corrects = (pred_ids.data == targets.data)
        #print(corrects)
        acc = torch.mean(corrects.float())
return loss + reg_loss, acc, corrects
if __name__ == '__main__':
task_id = input("Enter the task: ")
print("Task:", task_id)
print()
dset = BabiDataset(task_id)
train, test = get_raw_babi(task_id)
unindexed = get_unindexed_qa(test)
vocab_size = len(dset.QA.VOCAB)
hidden_size = 80
model = DMNPlus(hidden_size, vocab_size, num_hop=3, qa=dset.QA)
model.cuda()
model_path = "pretrained_models/task"
model.load_state_dict(torch.load(model_path + str(task_id) + ".pth"))
model.eval()
dset.set_mode('test')
test_loader = DataLoader(dset, batch_size=100, shuffle=False, collate_fn=pad_collate)
test_acc = 0
cnt = 0
for batch_idx, data in enumerate(test_loader):
contexts, questions, answers = data
batch_size = contexts.size()[0]
contexts = Variable(contexts.long().cuda())
questions = Variable(questions.long().cuda())
answers = Variable(answers.cuda())
_, acc, corrects = model.get_loss(contexts, questions, answers)
test_acc += acc * batch_size
cnt += batch_size
#print(f'[Test] Accuracy : {test_acc / cnt: {5}.{4}}')
for i in range(len(corrects)):
item = unindexed[i]
print("Context:")
for c in item['C']:
print(c)
print("Question:")
print(item['Q'])
print("Expected Answer:")
print(item['A'])
print("Prediction:")
if corrects[i] == True:
print(item['A'])
else:
true_ans = item['A']
possible_ans = list(set([it['A'] for it in unindexed]))
possible_ans.remove(true_ans)
ans = random.choice(possible_ans)
print(ans)
print('################################')
print(f'[Test] Accuracy : {test_acc / cnt: {5}.{4}}')
print('################################')
from google.colab import drive
drive.mount('/content/drive')
###Output
Mounted at /content/drive
|
testing/western_cropmask/1_Extract_training_data.ipynb | ###Markdown
Extracting training data from the ODC

* **Products used:** [gm_s2_semiannual](https://explorer.digitalearth.africa/gm_s2_semiannual)

Description

This notebook will extract training data over Western Africa using geometries within a shapefile (or geojson). To do this, we rely on a custom `deafrica-sandbox-notebooks` function called `collect_training_data`, contained within the [deafrica_tools.classification](https://github.com/digitalearthafrica/deafrica-sandbox-notebooks/blob/minty-fresh-sandbox/Tools/deafrica_tools/classification.py) script.

1. Import and preview our training data contained in the file: `'data/ceo_td_polys.geojson'`
2. Extract training data from the datacube using a custom defined feature layer function that we can pass to `collect_training_data`. The training data function is stored in the python file `feature_layer_functions.py` - the functions are stored in a separate file simply to keep this notebook tidy.
   - **The features used to create the cropland mask are as follows:**
     - For two seasons, January to June, and July to December:
       - A geomedian composite of nine Sentinel-2 spectral bands
       - Three measures of median absolute deviation
       - NDVI, MNDWI, and LAI
       - Cumulative Rainfall from CHIRPS
     - Slope from SRTM (not seasonal, obviously)
3. Separate the coordinate values in the returned training data from step 2, and export the coordinates as a text file.
4. Export the remaining training data (features other than coordinates) to disk as a text file for use in subsequent scripts

***

Getting started

To run this analysis, run all the cells in the notebook, starting with the "Load packages" cell.

Load packages
###Code
%matplotlib inline
import os
import warnings
import datacube
import numpy as np
import xarray as xr
import subprocess as sp
import geopandas as gpd
from odc.io.cgroups import get_cpu_quota
from datacube.utils.geometry import assign_crs
from datacube.utils.rio import configure_s3_access
configure_s3_access(aws_unsigned=True, cloud_defaults=True)
from deafrica_tools.plotting import map_shapefile
from deafrica_tools.classification import collect_training_data
#import the custom feature layer functions
from feature_layer_functions import gm_mads_two_seasons_training
###Output
/env/lib/python3.6/site-packages/geopandas/_compat.py:88: UserWarning: The Shapely GEOS version (3.7.2-CAPI-1.11.0 ) is incompatible with the GEOS version PyGEOS was compiled with (3.9.1-CAPI-1.14.2). Conversions between both will be slow.
shapely_geos_version, geos_capi_version_string
###Markdown
Analysis parameters

* `path`: The path to the input shapefile from which we will extract training data.
* `field`: This is the name of the column in your shapefile attribute table that contains the class labels. **The class labels must be integers**
###Code
path = 'data/Western_training_data_20210609.geojson'
output_suffix = '20210609'
field = 'Class'
###Output
_____no_output_____
###Markdown
Automatically find the number of cpus

> **Note**: With supervised classification, it's common to have many, many labelled geometries in the training data. `collect_training_data` can parallelize across the geometries in order to speed up the extraction of training data. Setting `ncpus>1` will automatically trigger the parallelization; however, it's best to set `ncpus=1` to begin with to assist with debugging before triggering the parallelization.
###Code
ncpus=round(get_cpu_quota())
print('ncpus = '+str(ncpus))
###Output
ncpus = 31
###Markdown
Load & preview polygon dataWe can load and preview our input data shapefile using `geopandas`. The shapefile should contain a column with class labels (e.g. 'class'). These labels will be used to train our model. > Remember, the class labels **must** be represented by `integers`.
###Code
# Load input data shapefile
input_data = gpd.read_file(path)
# Preview the first five rows
input_data.head()
# Plot training data in an interactive map
# map_shapefile(input_data, attribute=field)
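# Optional sanity check (added illustration, not part of the original workflow):
# confirm the integer class labels and their balance before extracting training data.
print(input_data[field].value_counts())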
###Output
_____no_output_____
###Markdown
Now, we can pass this shapefile to `collect_training_data`. For each of the geometries in our shapefile we will extract features in accordance with the function `feature_layer_functions.gm_mads_two_seasons_training`. These will include:For two seasons, January to June, and July to December:- A geomedian composite of nine Sentinel-2 spectral bands- Three measures of median absolute deviation- NDVI, MNDWI, and LAI- Cumulative Rainfall from CHIRPS- Slope from SRTM First, we need to set up a few extra inputs for `collect_training_data` and the datacube. See the function docs [here](https://github.com/digitalearthafrica/deafrica-sandbox-notebooks/blob/03b7b41d5f6526ff3f33618f7a0b48c0d10a155f/Scripts/deafrica_classificationtools.py#L650) for more information on these parameters.
###Code
#set up our inputs to collect_training_data
zonal_stats = 'median'
return_coords = True
# Set up the inputs for the ODC query
time = ('2019')
measurements = [
"blue",
"green",
"red",
"nir",
"swir_1",
"swir_2",
"red_edge_1",
"red_edge_2",
"red_edge_3",
"bcdev",
"edev",
"sdev"
]
resolution = (-10, 10)
output_crs = 'epsg:6933'
#generate a new datacube query object
query = {
'time': time,
'measurements': measurements,
'resolution': resolution,
'output_crs': output_crs,
'resampling': 'bilinear'
}
###Output
_____no_output_____
###Markdown
Extract training data> Remember, if running this function for the first time, it's advisable to set `ncpus=1` to assist with debugging before triggering the parallelization (which won't return errors if something is not working correctly). You can also limit the number of polygons to run for the first time by passing in `gdf=input_data[0:5]`, for example; a commented sketch of such a debug run is included at the end of the cell below.
###Code
%%time
warnings.filterwarnings("ignore")
column_names, model_input = collect_training_data(
gdf=input_data,
dc_query=query,
ncpus=ncpus,
return_coords=return_coords,
field=field,
zonal_stats=zonal_stats,
fail_threshold=0.01,
feature_func=gm_mads_two_seasons_training
)
print(column_names)
print('')
print(np.array_str(model_input, precision=2, suppress_small=True))
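# A minimal debugging sketch (illustrative only, mirroring the note above): run on a
# handful of polygons with a single CPU so any errors surface clearly before
# triggering the full parallelized extraction. Uncomment to use.
# column_names_test, model_input_test = collect_training_data(
#     gdf=input_data[0:5],
#     dc_query=query,
#     ncpus=1,
#     return_coords=return_coords,
#     field=field,
#     zonal_stats=zonal_stats,
#     fail_threshold=0.01,
#     feature_func=gm_mads_two_seasons_training
# )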
###Output
['Class', 'blue_S1', 'green_S1', 'red_S1', 'nir_S1', 'swir_1_S1', 'swir_2_S1', 'red_edge_1_S1', 'red_edge_2_S1', 'red_edge_3_S1', 'bcdev_S1', 'edev_S1', 'sdev_S1', 'NDVI_S1', 'LAI_S1', 'MNDWI_S1', 'rain_S1', 'blue_S2', 'green_S2', 'red_S2', 'nir_S2', 'swir_1_S2', 'swir_2_S2', 'red_edge_1_S2', 'red_edge_2_S2', 'red_edge_3_S2', 'bcdev_S2', 'edev_S2', 'sdev_S2', 'NDVI_S2', 'LAI_S2', 'MNDWI_S2', 'rain_S2', 'slope', 'x_coord', 'y_coord']
[[ 1. 0.06 0.09 ... 2.43 252645. 1146595. ]
[ 1. 0.09 0.12 ... 2.12 253635. 1146530. ]
[ 1. 0.08 0.11 ... 8.37 254505. 1146445. ]
...
[ 0. 0.08 0.1 ... 7.21 -843915. 1401845. ]
[ 1. 0.13 0.17 ... 6.33 597875. 1238515. ]
[ 0. 0.1 0.12 ... 3.95 1325945. 1435315. ]]
###Markdown
Separate the coordinatesBy setting `return_coords=True` in the `collect_training_data` function, our training data now has two extra columns called `x_coord` and `y_coord`. We need to separate these from our training dataset as they will not be used to train the machine learning model. Instead, these variables will be used to help conduct spatial K-fold cross-validation (SKCV) in the notebook `3_Train_fit_evaluate_classifier`. For more information on why this is important, see this [article](https://www.tandfonline.com/doi/abs/10.1080/13658816.2017.1346255?journalCode=tgis20).
###Code
coordinates_filename = "results/training_data/western_training_data_coordinates_"+output_suffix+".txt"
coord_variables = ['x_coord', 'y_coord']
model_col_indices = [column_names.index(var_name) for var_name in coord_variables]
np.savetxt(coordinates_filename, model_input[:, model_col_indices])
###Output
_____no_output_____
###Markdown
Export training dataOnce we've collected all the training data we require, we can write the data to disk. This will allow us to import the data in the next step(s) of the workflow.
###Code
#set the name and location of the output file
output_file = "results/training_data/western_training_data_"+output_suffix+".txt"
#grab all columns except the x-y coords
model_col_indices = [column_names.index(var_name) for var_name in column_names[0:-2]]
#Export files to disk
np.savetxt(output_file, model_input[:, model_col_indices], header=" ".join(column_names[0:-2]), fmt="%4f")
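#quick optional check (illustrative sketch, not part of the original workflow):
#reload the exported files with numpy to confirm the shapes line up.
#np.loadtxt skips the '#'-prefixed header line that np.savetxt writes.
check_data = np.loadtxt(output_file)
check_coords = np.loadtxt(coordinates_filename)
print(check_data.shape, check_coords.shape)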
###Output
_____no_output_____ |
Notebooks/Activation Test MEA Standard Protocol.ipynb | ###Markdown
Activation Test MEA Standard Protocol
###Code
import os
import numpy as np
import matplotlib.pyplot as plt
from functions import *
path = os.path.join("../","Activation Test MEA Standard Protocol")
###Output
_____no_output_____
###Markdown
1- Impedance : End of Each Activation Set
###Code
data = load_data(os.path.join(path,"1.csv"),set_flag=True)
impedance_plot1(data,set_index=1)
impedance_plot1(data,set_index=2)
###Output
_____no_output_____
###Markdown
2- Impedance : Various Voltages (1)
###Code
data = load_data(os.path.join(path,"2.csv"))
impedance_plot2(data,RH=30,P=5)
impedance_plot2(data,RH=30,P=25)
impedance_plot2(data,RH=30,V=0.3)
###Output
_____no_output_____
###Markdown
3- Impedance : Various Voltages (2)
###Code
data = load_data(os.path.join(path,"3.csv"))
impedance_plot2(data,RH=30,P=5)
impedance_plot2(data,RH=30,P=25)
impedance_plot2(data,RH=30,V=0.7)
###Output
_____no_output_____
###Markdown
4- Polarization : End of Activation Procedure (1)
###Code
data = load_data(os.path.join(path,"4.csv"))
polarization_plot1(data,RH=30,P=5)
polarization_plot1(data,RH=30,P=25)
###Output
_____no_output_____
###Markdown
5- Polarization : End of Activation Procedure (2)
###Code
data = load_data(os.path.join(path,"5.csv"))
polarization_plot1(data,RH=30,P=5)
polarization_plot1(data,RH=30,P=25)
###Output
_____no_output_____
###Markdown
6- Polarization : End of Each Activation Set
###Code
data = load_data(os.path.join(path,"6.csv"))
polarization_plot2(data,set_index=2)
###Output
_____no_output_____
###Markdown
Activation Test MEA Standard Protocol
###Code
import os
import numpy as np
import matplotlib.pyplot as plt
from functions import *
path = os.path.join("../","Activation Test MEA Standard Protocol")
###Output
_____no_output_____
###Markdown
1- Impedance : End of Each Activation Set
###Code
data = load_data(os.path.join(path,"1.csv"),set_flag=True)
impedance_plot1(data,set_index=1)
impedance_plot1(data,set_index=2)
###Output
_____no_output_____
###Markdown
2- Impedance : Various Voltages (1)
###Code
data = load_data(os.path.join(path,"2.csv"))
impedance_plot2(data,RH=30,P=5)
impedance_plot2(data,V=0.3,P=25)
impedance_plot2(data,RH=30,V=0.3)
###Output
_____no_output_____
###Markdown
3- Impedance : Various Voltages (2)
###Code
data = load_data(os.path.join(path,"3.csv"))
impedance_plot2(data,RH=30,P=5)
impedance_plot2(data,V=0.3,P=25)
impedance_plot2(data,RH=30,V=0.7)
###Output
_____no_output_____
###Markdown
4- Polarization : End of Activation Procedure (1)
###Code
data = load_data(os.path.join(path,"4.csv"))
polarization_plot1(data,RH=30,P=5)
polarization_plot1(data,RH=30,P=25)
###Output
_____no_output_____
###Markdown
5- Polarization : End of Activation Procedure (2)
###Code
data = load_data(os.path.join(path,"5.csv"))
polarization_plot1(data,RH=30,P=5)
polarization_plot1(data,RH=30,P=25)
###Output
_____no_output_____
###Markdown
6- Polarization : End of Each Activation Set
###Code
data = load_data(os.path.join(path,"6.csv"))
polarization_plot2(data,set_index=2)
polarization_plot2(data,set_index=[1,2,3,4,5])
###Output
_____no_output_____ |
understanding-image-classification-main/code/Method 2/CNN_Image_Classification.ipynb | ###Markdown
Functions for loading the dataset
###Code
def load_train(train_path, image_size, classes):
images = []
labels = []
ids = []
cls = []
print('Reading training images')
for fld in classes: # assuming data directory has a separate folder for each class, and that each folder is named after the class
index = classes.index(fld)
print('Loading {} files (Index: {})'.format(fld, index))
path = os.path.join(train_path, fld, '*g')
files = glob.glob(path)
for fl in files:
image = cv2.imread(fl)
image = cv2.resize(image, (image_size, image_size), interpolation = cv2.INTER_LINEAR)
images.append(image)
label = np.zeros(len(classes))
label[index] = 1.0
labels.append(label)
flbase = os.path.basename(fl)
ids.append(flbase)
cls.append(fld)
images = np.array(images)
labels = np.array(labels)
ids = np.array(ids)
cls = np.array(cls)
return images, labels, ids, cls
def load_test(test_path, image_size):
path = os.path.join(test_path, '*g')
files = sorted(glob.glob(path))
X_test = []
X_test_id = []
print("Reading test images")
for fl in files:
flbase = os.path.basename(fl)
img = cv2.imread(fl)
img = cv2.resize(img, (image_size, image_size), interpolation = cv2.INTER_LINEAR)
X_test.append(img)
X_test_id.append(flbase)
### because we're not creating a DataSet object for the test images, normalization happens here
X_test = np.array(X_test, dtype=np.uint8)
X_test = X_test.astype('float32')
X_test = X_test / 255
return X_test, X_test_id
class DataSet(object):
def __init__(self, images, labels, ids, cls):
"""Construct a DataSet. one_hot arg is used only if fake_data is true."""
self._num_examples = images.shape[0]
        # Convert pixel values from integers in [0, 255] to floats in [0.0, 1.0].
images = images.astype(np.float32)
images = np.multiply(images, 1.0 / 255.0)
self._images = images
self._labels = labels
self._ids = ids
self._cls = cls
self._epochs_completed = 0
self._index_in_epoch = 0
@property
def images(self):
return self._images
@property
def labels(self):
return self._labels
@property
def ids(self):
return self._ids
@property
def cls(self):
return self._cls
@property
def num_examples(self):
return self._num_examples
@property
def epochs_completed(self):
return self._epochs_completed
def next_batch(self, batch_size):
"""Return the next `batch_size` examples from this data set."""
start = self._index_in_epoch
self._index_in_epoch += batch_size
if self._index_in_epoch > self._num_examples:
# Finished epoch
self._epochs_completed += 1
# # Shuffle the data (maybe)
# perm = np.arange(self._num_examples)
# np.random.shuffle(perm)
# self._images = self._images[perm]
# self._labels = self._labels[perm]
# Start next epoch
start = 0
self._index_in_epoch = batch_size
assert batch_size <= self._num_examples
end = self._index_in_epoch
return self._images[start:end], self._labels[start:end], self._ids[start:end], self._cls[start:end]
def read_train_sets(train_path, image_size, classes, validation_size=0):
class DataSets(object):
pass
data_sets = DataSets()
images, labels, ids, cls = load_train(train_path, image_size, classes)
images, labels, ids, cls = shuffle(images, labels, ids, cls) # shuffle the data
if isinstance(validation_size, float):
validation_size = int(validation_size * images.shape[0])
validation_images = images[:validation_size]
validation_labels = labels[:validation_size]
validation_ids = ids[:validation_size]
validation_cls = cls[:validation_size]
train_images = images[validation_size:]
train_labels = labels[validation_size:]
train_ids = ids[validation_size:]
train_cls = cls[validation_size:]
data_sets.train = DataSet(train_images, train_labels, train_ids, train_cls)
data_sets.valid = DataSet(validation_images, validation_labels, validation_ids, validation_cls)
return data_sets
def read_test_set(test_path, image_size):
images, ids = load_test(test_path, image_size)
return images, ids
###Output
_____no_output_____
###Markdown
Configuration and Hyperparameters
###Code
# Convolutional Layer 1.
filter_size1 = 5
num_filters1 = 64
# Convolutional Layer 2.
filter_size2 = 3
num_filters2 = 64
# # Convolutional Layer 3.
# filter_size3 = 5
# num_filters3 = 128
# Fully-connected layer 1.
fc1_size = 128 # Number of neurons in fully-connected layer.
# Fully-connected layer 2.
fc2_size = 128 # Number of neurons in fully-connected layer.
# Number of color channels for the images: 1 channel for gray-scale.
num_channels = 3
# image dimensions (only squares for now)
img_size = 64
# Size of image when flattened to a single dimension
img_size_flat = img_size * img_size * num_channels
# Tuple with height and width of images used to reshape arrays.
img_shape = (img_size, img_size)
# class info
classes = ['Sphynx','Siamese','Ragdoll',
'Persian','Maine_Coon','British_shorthair','Bombay','Birman','Bengal','Abyssinian']
# classes = ['Sphynx','Siamese',
# 'Persian','Maine_Coon','British_shorthair']
num_classes = len(classes)
# batch size
batch_size = 32
# validation split
validation_size = .2
# how long to wait after validation loss stops improving before terminating training
early_stopping = None # use None if you don't want to implement early stopping
train_path = 'dataset'
# test_path = 'test'
checkpoint_dir = "ckpoint"
# load training dataset
data = read_train_sets(train_path, img_size, classes, validation_size=validation_size)
# test_images, test_ids = read_test_set(test_path, img_size)
print("Size of:")
print("- Training-set:\t\t{}".format(len(data.train.labels)))
# print("- Test-set:\t\t{}".format(len(test_images)))
print("- Validation:\t{}".format(len(data.valid.labels)))
# print(images)
###Output
Size of:
- Training-set: 1598
- Validation: 399
###Markdown
Helper-function for plotting images
###Code
def plot_images(images, cls_true, cls_pred=None):
if len(images) == 0:
print("no images to show")
return
else:
random_indices = random.sample(range(len(images)), min(len(images), 9))
images, cls_true = zip(*[(images[i], cls_true[i]) for i in random_indices])
# Create figure with 3x3 sub-plots.
fig, axes = plt.subplots(3, 3)
fig.subplots_adjust(hspace=0.3, wspace=0.3)
for i, ax in enumerate(axes.flat):
# Plot image.
ax.imshow(images[i].reshape(img_size, img_size, num_channels))
# Show true and predicted classes.
if cls_pred is None:
xlabel = "True: {0}".format(cls_true[i])
else:
xlabel = "True: {0}, Pred: {1}".format(cls_true[i], cls_pred[i])
# Show the classes as the label on the x-axis.
ax.set_xlabel(xlabel)
# Remove ticks from the plot.
ax.set_xticks([])
ax.set_yticks([])
# Ensure the plot is shown correctly with multiple plots
# in a single Notebook cell.
plt.show()
# Get some random images and their labels from the train set.
images, cls_true = data.train.images, data.train.cls
# Plot the images and labels using our helper-function above.
plot_images(images=images, cls_true=cls_true)
###Output
_____no_output_____
###Markdown
TensorFlow Graph Helper-functions for creating new variables
###Code
def new_weights(shape):
return tf.Variable(tf.truncated_normal(shape, stddev=0.05))
def new_biases(length):
return tf.Variable(tf.constant(0.05, shape=[length]))
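# For illustration only: the first convolutional layer defined below would use
# weights of shape [filter_size1, filter_size1, num_channels, num_filters1],
# i.e. [5, 5, 3, 64], plus one bias per filter:
# w1 = new_weights(shape=[filter_size1, filter_size1, num_channels, num_filters1])
# b1 = new_biases(length=num_filters1)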
###Output
_____no_output_____
###Markdown
Convolutional Layer
###Code
def new_conv_layer(input, # The previous layer.
num_input_channels, # Num. channels in prev. layer.
filter_size, # Width and height of each filter.
num_filters, # Number of filters.
use_pooling=True): # Use 2x2 max-pooling.
# Shape of the filter-weights for the convolution.
# This format is determined by the TensorFlow API.
shape = [filter_size, filter_size, num_input_channels, num_filters]
# Create new weights aka. filters with the given shape.
weights = new_weights(shape=shape)
# Create new biases, one for each filter.
biases = new_biases(length=num_filters)
# Create the TensorFlow operation for convolution.
# Note the strides are set to 1 in all dimensions.
# The first and last stride must always be 1,
# because the first is for the image-number and
# the last is for the input-channel.
# But e.g. strides=[1, 2, 2, 1] would mean that the filter
# is moved 2 pixels across the x- and y-axis of the image.
# The padding is set to 'SAME' which means the input image
# is padded with zeroes so the size of the output is the same.
layer = tf.nn.conv2d(input=input,
filter=weights,
strides=[1, 1, 1, 1],
padding='SAME')
# Add the biases to the results of the convolution.
# A bias-value is added to each filter-channel.
layer += biases
# Use pooling to down-sample the image resolution?
if use_pooling:
# This is 2x2 max-pooling, which means that we
# consider 2x2 windows and select the largest value
# in each window. Then we move 2 pixels to the next window.
layer = tf.nn.max_pool(value=layer,
ksize=[1, 2, 2, 1],
strides=[1, 2, 2, 1],
padding='SAME')
# Rectified Linear Unit (ReLU).
# It calculates max(x, 0) for each input pixel x.
# This adds some non-linearity to the formula and allows us
# to learn more complicated functions.
layer = tf.nn.relu(layer)
# Note that ReLU is normally executed before the pooling,
# but since relu(max_pool(x)) == max_pool(relu(x)) we can
# save 75% of the relu-operations by max-pooling first.
# We return both the resulting layer and the filter-weights
# because we will plot the weights later.
return layer, weights
###Output
_____no_output_____
###Markdown
Flattening a layer
###Code
def flatten_layer(layer):
# Get the shape of the input layer.
layer_shape = layer.get_shape()
# The shape of the input layer is assumed to be:
# layer_shape == [num_images, img_height, img_width, num_channels]
# The number of features is: img_height * img_width * num_channels
# We can use a function from TensorFlow to calculate this.
num_features = layer_shape[1:4].num_elements()
# Reshape the layer to [num_images, num_features].
# Note that we just set the size of the second dimension
# to num_features and the size of the first dimension to -1
# which means the size in that dimension is calculated
# so the total size of the tensor is unchanged from the reshaping.
layer_flat = tf.reshape(layer, [-1, num_features])
# The shape of the flattened layer is now:
# [num_images, img_height * img_width * num_channels]
# Return both the flattened layer and the number of features.
return layer_flat, num_features
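# Worked example (illustrative): with 64x64x3 inputs and two rounds of 2x2
# max-pooling, the second conv layer outputs 16x16 feature maps across 64 filters,
# so the flattened layer has 16 * 16 * 64 = 16384 features -- matching the shape
# printed when flatten_layer(layer_conv2) is called further below.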
###Output
_____no_output_____
###Markdown
Fully-Connected Layer
###Code
def new_fc_layer(input, # The previous layer.
num_inputs, # Num. inputs from prev. layer.
num_outputs, # Num. outputs.
use_relu=True): # Use Rectified Linear Unit (ReLU)?
# Create new weights and biases.
weights = new_weights(shape=[num_inputs, num_outputs])
biases = new_biases(length=num_outputs)
# Calculate the layer as the matrix multiplication of
# the input and weights, and then add the bias-values.
layer = tf.matmul(input, weights) + biases
# Use ReLU?
if use_relu:
layer = tf.nn.relu(layer)
return layer
###Output
_____no_output_____
###Markdown
Placeholder variables
###Code
x = tf.placeholder(tf.float32, shape=[None, img_size_flat], name='x')
x_image = tf.reshape(x, [-1, img_size, img_size, num_channels])
y_true = tf.placeholder(tf.float32, shape=[None, num_classes], name='y_true')
y_true_cls = tf.argmax(y_true, dimension=1)
###Output
_____no_output_____
###Markdown
Convolutional Layer 1
###Code
layer_conv1, weights_conv1 = \
new_conv_layer(input=x_image,
num_input_channels=num_channels,
filter_size=filter_size1,
num_filters=num_filters1,
use_pooling=True)
layer_conv1
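# Shape note (illustrative): with 64x64x3 input images, SAME padding and a single
# 2x2 max-pool, layer_conv1 is expected to have shape (?, 32, 32, 64).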
###Output
_____no_output_____
###Markdown
Convolutional Layers 2 and 3
###Code
layer_conv2, weights_conv2 = \
new_conv_layer(input=layer_conv1,
num_input_channels=num_filters1,
filter_size=filter_size2,
num_filters=num_filters2,
use_pooling=True)
# layer_conv3, weights_conv3 = \
# new_conv_layer(input=layer_conv2,
# num_input_channels=num_filters2,
# filter_size=filter_size3,
# num_filters=num_filters3,
# use_pooling=True)
# print(layer_conv3, layer_conv2)
###Output
_____no_output_____
###Markdown
Flatten Layer
###Code
# layer_flat, num_features = flatten_layer(layer_conv3)
# print(layer_flat, num_features)
layer_flat, num_features = flatten_layer(layer_conv2)
print(layer_flat, num_features)
###Output
(<tf.Tensor 'Reshape_14:0' shape=(?, 16384) dtype=float32>, 16384)
###Markdown
Fully-Connected Layer 1
###Code
layer_fc1 = new_fc_layer(input=layer_flat,
num_inputs=num_features,
num_outputs=fc1_size,
use_relu=True)
layer_fc1
###Output
_____no_output_____
###Markdown
Fully-Connected Layer 2
###Code
layer_fc2 = new_fc_layer(input=layer_fc1,
num_inputs=fc1_size,
num_outputs=num_classes,
use_relu=False)
layer_fc2
###Output
_____no_output_____
###Markdown
Predicted Class
###Code
y_pred = tf.nn.softmax(layer_fc2)
y_pred_cls = tf.argmax(y_pred, dimension=1)
###Output
_____no_output_____
###Markdown
Cost-function to be optimized
###Code
cross_entropy = tf.nn.softmax_cross_entropy_with_logits(logits=layer_fc2,
labels=y_true)
cost = tf.reduce_mean(cross_entropy)
optimizer = tf.train.AdamOptimizer(learning_rate=1e-4).minimize(cost)
###Output
_____no_output_____
###Markdown
Performance Measures
###Code
correct_prediction = tf.equal(y_pred_cls, y_true_cls)
accuracy = tf.reduce_mean(tf.cast(correct_prediction, tf.float32))
###Output
_____no_output_____
###Markdown
TensorFlow Run
###Code
session = tf.Session()
session.run(tf.global_variables_initializer())
train_batch_size = batch_size
# def print_progress(epoch, feed_dict_train, feed_dict_validate, val_loss):
# # Calculate the accuracy on the training-set.
# acc = session.run(accuracy, feed_dict=feed_dict_train)
# val_acc = session.run(accuracy, feed_dict=feed_dict_validate)
# msg = "Epoch {0} --- Training Accuracy: {1:>6.1%}, Validation Accuracy: {2:>6.1%}, Validation Loss: {3:.3f}"
# print(msg.format(epoch + 1, acc, val_acc, val_loss))
def print_progress(epoch, feed_dict_train, feed_dict_validate, val_loss):
# Calculate the accuracy on the training-set.
acc = session.run(accuracy, feed_dict=feed_dict_train)
val_acc = session.run(accuracy, feed_dict=feed_dict_validate)
msg = "Epoch {0} --- Training Accuracy: {1:>6.1%}, Validation Accuracy: {2:>6.1%}, Validation Loss: {3:.3f}"
print(msg.format(epoch + 1, acc, val_acc, val_loss))
# Counter for total number of iterations performed so far.
total_iterations = 0
def optimize(num_iterations):
# Ensure we update the global variable rather than a local copy.
global total_iterations
# Start-time used for printing time-usage below.
start_time = time.time()
best_val_loss = float("inf")
patience = 0
for i in range(total_iterations,
total_iterations + num_iterations):
# Get a batch of training examples.
# x_batch now holds a batch of images and
# y_true_batch are the true labels for those images.
x_batch, y_true_batch, _, cls_batch = data.train.next_batch(train_batch_size)
x_valid_batch, y_valid_batch, _, valid_cls_batch = data.valid.next_batch(train_batch_size)
# Convert shape from [num examples, rows, columns, depth]
# to [num examples, flattened image shape]
x_batch = x_batch.reshape(train_batch_size, img_size_flat)
x_valid_batch = x_valid_batch.reshape(train_batch_size, img_size_flat)
# Put the batch into a dict with the proper names
# for placeholder variables in the TensorFlow graph.
feed_dict_train = {x: x_batch,
y_true: y_true_batch}
feed_dict_validate = {x: x_valid_batch,
y_true: y_valid_batch}
# Run the optimizer using this batch of training data.
# TensorFlow assigns the variables in feed_dict_train
# to the placeholder variables and then runs the optimizer.
session.run(optimizer, feed_dict=feed_dict_train)
# Print status at end of each epoch (defined as full pass through training dataset).
if i % int(data.train.num_examples/batch_size) == 0:
val_loss = session.run(cost, feed_dict=feed_dict_validate)
epoch = int(i / int(data.train.num_examples/batch_size))
print_progress(epoch, feed_dict_train, feed_dict_validate, val_loss)
if early_stopping:
if val_loss < best_val_loss:
best_val_loss = val_loss
patience = 0
else:
patience += 1
if patience == early_stopping:
break
# Update the total number of iterations performed.
total_iterations += num_iterations
# Ending time.
end_time = time.time()
# Difference between start and end-times.
time_dif = end_time - start_time
# Print the time-usage.
print("Time elapsed: " + str(timedelta(seconds=int(round(time_dif)))))
optimize(num_iterations=10000)
x_test = data.valid.images.reshape(399, img_size_flat)
feed_dict_test = {x: x_test,
y_true: data.valid.labels}
val_loss = session.run(cost, feed_dict=feed_dict_test)
val_acc = session.run(accuracy, feed_dict=feed_dict_test)
msg_test = "Test Accuracy: {0:>6.1%}"
print(msg_test.format(val_acc))
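# A minimal checkpointing sketch (assumes the standard TF1.x tf.train.Saver API;
# the notebook defines checkpoint_dir above but never uses it, so this is illustrative).
saver = tf.train.Saver()
if not os.path.exists(checkpoint_dir):
    os.makedirs(checkpoint_dir)
save_path = saver.save(session, os.path.join(checkpoint_dir, 'model.ckpt'))
print("Model checkpoint saved to: {}".format(save_path))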
###Output
_____no_output_____ |
docs/_downloads/3ae57bf46d8232ac1fe3efc7d1a6db7c/text_sentiment_ngrams_tutorial.ipynb | ###Markdown
torchtext 라이브러리로 텍스트 분류하기===============================================**번역**: `김강민 `_ , `김진현 `_이 튜토리얼에서는 torchtext 라이브러리를 사용하여 어떻게 텍스트 분류 분석을 위한 데이터셋을 만드는지를 살펴보겠습니다.다음과 같은 내용들을 알게 됩니다: - 반복자(iterator)로 가공되지 않은 데이터(raw data)에 접근하기 - 가공되지 않은 텍스트 문장들을 모델 학습에 사용할 수 있는 ``torch.Tensor`` 로 변환하는 데이터 처리 파이프라인 만들기 - `torch.utils.data.DataLoader `__ 를 사용하여 데이터를 섞고 반복하기(shuffle and iterate) 기초 데이터셋 반복자(raw data iterator)에 접근하기-------------------------------------------------------------torchtext 라이브러리는 가공되지 않은 텍스트 문장들을 만드는(yield) 몇 가지 기초 데이터셋 반복자(raw dataset iterator)를 제공합니다.예를 들어, ``AG_NEWS`` 데이터셋 반복자는 레이블(label)과 문장의 튜플(tuple) 형태로 가공되지 않은 데이터를 만듭니다.
###Code
import torch
from torchtext.datasets import AG_NEWS
train_iter = AG_NEWS(split='train')
###Output
_____no_output_____
###Markdown
:: next(train_iter) >>> (3, "Wall St. Bears Claw Back Into the Black (Reuters) Reuters - Short-sellers, Wall Street's dwindling\\band of ultra-cynics, are seeing green again.") next(train_iter) >>> (3, 'Carlyle Looks Toward Commercial Aerospace (Reuters) Reuters - Private investment firm Carlyle Group,\\which has a reputation for making well-timed and occasionally\\controversial plays in the defense industry, has quietly placed\\its bets on another part of the market.') next(train_iter) >>> (3, "Oil and Economy Cloud Stocks' Outlook (Reuters) Reuters - Soaring crude prices plus worries\\about the economy and the outlook for earnings are expected to\\hang over the stock market next week during the depth of the\\summer doldrums.") 데이터 처리 파이프라인 준비하기---------------------------------어휘집(vocab), 단어 벡터(word vector), 토크나이저(tokenizer)를 포함하여 torchtext 라이브러리의 가장 기본적인 구성요소를 재검토했습니다.이들은 가공되지 않은 텍스트 문자열에 대한 기본적인 데이터 처리 빌딩 블록(data processing building block)입니다.다음은 토크나이저 및 어휘집을 사용한 일반적인 NLP 데이터 처리의 예입니다.첫번째 단계는 가공되지 않은 학습 데이터셋으로 어휘집을 만드는 것입니다.여기에서는 토큰의 목록 또는 반복자를 받는 내장(built-in) 팩토리 함수(factory function) `build_vocab_from_iterator` 를 사용합니다.사용자는 어휘집에 추가할 특수 기호(special symbol) 같은 것들을 전달할 수도 있습니다.
###Code
from torchtext.data.utils import get_tokenizer
from torchtext.vocab import build_vocab_from_iterator
tokenizer = get_tokenizer('basic_english')
train_iter = AG_NEWS(split='train')
def yield_tokens(data_iter):
for _, text in data_iter:
yield tokenizer(text)
vocab = build_vocab_from_iterator(yield_tokens(train_iter), specials=["<unk>"])
vocab.set_default_index(vocab["<unk>"])
###Output
_____no_output_____
###Markdown
어휘집 블록(vocabulary block)은 토큰 목록을 정수로 변환합니다.:: vocab(['here', 'is', 'an', 'example']) >>> [475, 21, 30, 5286]토크나이저와 어휘집을 갖춘 텍스트 처리 파이프라인을 준비합니다.텍스트 파이프라인과 레이블(label) 파이프라인은 데이터셋 반복자로부터 얻어온 가공되지 않은 문장 데이터를 처리하기 위해 사용됩니다.
###Code
text_pipeline = lambda x: vocab(tokenizer(x))
label_pipeline = lambda x: int(x) - 1
###Output
_____no_output_____
###Markdown
텍스트 파이프라인은 어휘집에 정의된 룩업 테이블(순람표; lookup table)에 기반하여 텍스트 문장을 정수 목록으로 변환합니다.레이블(label) 파이프라인은 레이블을 정수로 변환합니다. 예를 들어,:: text_pipeline('here is the an example') >>> [475, 21, 2, 30, 5286] label_pipeline('10') >>> 9 데이터 배치(batch)와 반복자 생성하기----------------------------------------`torch.utils.data.DataLoader `__ 를권장합니다. (튜토리얼은 `여기 `__ 있습니다.)이는 ``getitem()`` 과 ``len()`` 프로토콜을 구현한 맵 형태(map-style)의 데이터셋으로 동작하며, 맵(map)처럼 인덱스/키로 데이터 샘플을 얻어옵니다.또한, 셔플(shuffle) 인자를 ``False`` 로 설정하면 반복 가능한(iteratable) 데이터셋처럼 동작합니다.모델로 보내기 전, ``collate_fn`` 함수는 ``DataLoader`` 로부터 생성된 샘플 배치로 동작합니다.``collate_fn`` 의 입력은 ``DataLoader`` 에 배치 크기(batch size)가 있는 배치(batch) 데이터이며,``collate_fn`` 은 이를 미리 선언된 데이터 처리 파이프라인에 따라 처리합니다.``collate_fn`` 이 최상위 수준으로 정의(top level def)되었는지 확인합니다. 이렇게 하면 모든 워커에서 이 함수를 사용할 수 있습니다.아래 예제에서, 주어진(original) 데이터 배치의 텍스트 항목들은 리스트(list)에 담긴(pack) 뒤 ``nn.EmbeddingBag`` 의 입력을 위한 하나의 tensor로 합쳐(concatenate)집니다.오프셋(offset)은 텍스트 tensor에서 개별 시퀀스 시작 인덱스를 표현하기 위한 구분자(delimiter) tensor입니다.레이블(label)은 개별 텍스트 항목의 레이블을 저장하는 tensor입니다.
###Code
from torch.utils.data import DataLoader
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
def collate_batch(batch):
label_list, text_list, offsets = [], [], [0]
for (_label, _text) in batch:
label_list.append(label_pipeline(_label))
processed_text = torch.tensor(text_pipeline(_text), dtype=torch.int64)
text_list.append(processed_text)
offsets.append(processed_text.size(0))
label_list = torch.tensor(label_list, dtype=torch.int64)
offsets = torch.tensor(offsets[:-1]).cumsum(dim=0)
text_list = torch.cat(text_list)
return label_list.to(device), text_list.to(device), offsets.to(device)
train_iter = AG_NEWS(split='train')
dataloader = DataLoader(train_iter, batch_size=8, shuffle=False, collate_fn=collate_batch)
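# Illustrative check (not part of the original tutorial): pull one batch from the
# DataLoader to inspect the collated tensors produced by collate_batch.
example_labels, example_text, example_offsets = next(iter(dataloader))
print(example_labels.shape, example_text.shape, example_offsets.shape)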
###Output
_____no_output_____
###Markdown
모델 정의하기---------------모델은`nn.EmbeddingBag `__레이어와 분류(classification) 목적을 위한 선형 레이어로 구성됩니다.기본 모드가 "평균(mean)"인 ``nn.EmbeddingBag`` 은 임베딩들의 "가방(bag)"의 평균 값을 계산합니다.이때 텍스트(text) 항목들은 각기 그 길이가 다를 수 있지만, ``nn.EmbeddingBag`` 모듈은 텍스트의 길이를오프셋(offset)으로 저장하고 있으므로 패딩(padding)이 필요하지는 않습니다.덧붙여서, ``nn.EmbeddingBag`` 은 임베딩의 평균을 즉시 계산하기 때문에,tensor들의 시퀀스를 처리할 때 성능 및 메모리 효율성 측면에서의 장점도갖고 있습니다.
###Code
from torch import nn
class TextClassificationModel(nn.Module):
def __init__(self, vocab_size, embed_dim, num_class):
super(TextClassificationModel, self).__init__()
self.embedding = nn.EmbeddingBag(vocab_size, embed_dim, sparse=True)
self.fc = nn.Linear(embed_dim, num_class)
self.init_weights()
def init_weights(self):
initrange = 0.5
self.embedding.weight.data.uniform_(-initrange, initrange)
self.fc.weight.data.uniform_(-initrange, initrange)
self.fc.bias.data.zero_()
def forward(self, text, offsets):
embedded = self.embedding(text, offsets)
return self.fc(embedded)
###Output
_____no_output_____
###Markdown
인스턴스 생성하기-----------------``AG_NEWS`` 데이터셋에는 4종류의 레이블이 존재하므로 클래스의 개수도 4개입니다.:: 1 : World (세계) 2 : Sports (스포츠) 3 : Business (경제) 4 : Sci/Tec (과학/기술)임베딩 차원이 64인 모델을 만듭니다.어휘집의 크기(Vocab size)는 어휘집(vocab)의 길이와 같습니다.클래스의 개수는 레이블의 개수와 같습니다.
###Code
train_iter = AG_NEWS(split='train')
num_class = len(set([label for (label, text) in train_iter]))
vocab_size = len(vocab)
emsize = 64
model = TextClassificationModel(vocab_size, emsize, num_class).to(device)
###Output
_____no_output_____
###Markdown
모델을 학습하고 결과를 평가하는 함수 정의하기---------------------------------------------
###Code
import time
def train(dataloader):
model.train()
total_acc, total_count = 0, 0
log_interval = 500
start_time = time.time()
for idx, (label, text, offsets) in enumerate(dataloader):
optimizer.zero_grad()
predited_label = model(text, offsets)
loss = criterion(predited_label, label)
loss.backward()
torch.nn.utils.clip_grad_norm_(model.parameters(), 0.1)
optimizer.step()
total_acc += (predited_label.argmax(1) == label).sum().item()
total_count += label.size(0)
if idx % log_interval == 0 and idx > 0:
elapsed = time.time() - start_time
print('| epoch {:3d} | {:5d}/{:5d} batches '
'| accuracy {:8.3f}'.format(epoch, idx, len(dataloader),
total_acc/total_count))
total_acc, total_count = 0, 0
start_time = time.time()
def evaluate(dataloader):
model.eval()
total_acc, total_count = 0, 0
with torch.no_grad():
for idx, (label, text, offsets) in enumerate(dataloader):
predited_label = model(text, offsets)
loss = criterion(predited_label, label)
total_acc += (predited_label.argmax(1) == label).sum().item()
total_count += label.size(0)
return total_acc/total_count
###Output
_____no_output_____
###Markdown
데이터셋을 분할하고 모델 수행하기---------------------------------원본 ``AG_NEWS`` 에는 검증용 데이터가 포함되어 있지 않기 때문에, 우리는 학습데이터를 학습 및 검증 데이터로 분할하려 합니다. 이때 데이터를 분할하는비율은 0.95(학습)와 0.05(검증) 입니다. 우리는 여기서 PyTorch의핵심 라이브러리 중 하나인`torch.utils.data.dataset.random_split `__함수를 사용합니다.`CrossEntropyLoss `__기준(criterion)은 각 클래스에 대해 ``nn.LogSoftmax()`` 와 ``nn.NLLLoss()`` 를합쳐놓은 방식입니다.`SGD `__optimizer는 확률적 경사 하강법를 구현해놓은 것입니다. 처음의 학습률은5.0으로 두었습니다. 매 에폭을 진행하면서 학습률을 조절할 때는`StepLR `__을 사용합니다.
###Code
from torch.utils.data.dataset import random_split
from torchtext.data.functional import to_map_style_dataset
# Hyperparameters
EPOCHS = 10 # epoch
LR = 5 # learning rate
BATCH_SIZE = 64 # batch size for training
criterion = torch.nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=LR)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, 1.0, gamma=0.1)
total_accu = None
train_iter, test_iter = AG_NEWS()
train_dataset = to_map_style_dataset(train_iter)
test_dataset = to_map_style_dataset(test_iter)
num_train = int(len(train_dataset) * 0.95)
split_train_, split_valid_ = \
random_split(train_dataset, [num_train, len(train_dataset) - num_train])
train_dataloader = DataLoader(split_train_, batch_size=BATCH_SIZE,
shuffle=True, collate_fn=collate_batch)
valid_dataloader = DataLoader(split_valid_, batch_size=BATCH_SIZE,
shuffle=True, collate_fn=collate_batch)
test_dataloader = DataLoader(test_dataset, batch_size=BATCH_SIZE,
shuffle=True, collate_fn=collate_batch)
for epoch in range(1, EPOCHS + 1):
epoch_start_time = time.time()
train(train_dataloader)
accu_val = evaluate(valid_dataloader)
if total_accu is not None and total_accu > accu_val:
scheduler.step()
else:
total_accu = accu_val
print('-' * 59)
print('| end of epoch {:3d} | time: {:5.2f}s | '
'valid accuracy {:8.3f} '.format(epoch,
time.time() - epoch_start_time,
accu_val))
print('-' * 59)
###Output
_____no_output_____
###Markdown
평가 데이터로 모델 평가하기------------------------------- 평가 데이터셋을 통한 결과를 확인합니다...
###Code
print('Checking the results of test dataset.')
accu_test = evaluate(test_dataloader)
print('test accuracy {:8.3f}'.format(accu_test))
###Output
_____no_output_____
###Markdown
임의의 뉴스로 평가하기----------------------현재까지 최고의 모델로 골프 뉴스를 테스트해보겠습니다.
###Code
ag_news_label = {1: "World",
2: "Sports",
3: "Business",
4: "Sci/Tec"}
def predict(text, text_pipeline):
with torch.no_grad():
text = torch.tensor(text_pipeline(text))
output = model(text, torch.tensor([0]))
return output.argmax(1).item() + 1
ex_text_str = "MEMPHIS, Tenn. – Four days ago, Jon Rahm was \
enduring the season’s worst weather conditions on Sunday at The \
Open on his way to a closing 75 at Royal Portrush, which \
considering the wind and the rain was a respectable showing. \
Thursday’s first round at the WGC-FedEx St. Jude Invitational \
was another story. With temperatures in the mid-80s and hardly any \
wind, the Spaniard was 13 strokes better in a flawless round. \
Thanks to his best putting performance on the PGA Tour, Rahm \
finished with an 8-under 62 for a three-stroke lead, which \
was even more impressive considering he’d never played the \
front nine at TPC Southwind."
model = model.to("cpu")
print("This is a %s news" %ag_news_label[predict(ex_text_str, text_pipeline)])
###Output
_____no_output_____
###Markdown
torchtext 라이브러리로 텍스트 분류하기===============================================**번역**: `김강민 `_ , `김진현 `_이 튜토리얼에서는 torchtext 라이브러리를 사용하여 어떻게 텍스트 분류 분석을 위한 데이터셋을 만드는지를 살펴보겠습니다.다음과 같은 내용들을 알게 됩니다: - 반복자(iterator)로 가공되지 않은 데이터(raw data)에 접근하기 - 가공되지 않은 텍스트 문장들을 모델 학습에 사용할 수 있는 ``torch.Tensor`` 로 변환하는 데이터 처리 파이프라인 만들기 - `torch.utils.data.DataLoader `__ 를 사용하여 데이터를 섞고 반복하기(shuffle and iterate) 기초 데이터셋 반복자(raw data iterator)에 접근하기-------------------------------------------------------------torchtext 라이브러리는 가공되지 않은 텍스트 문장들을 만드는(yield) 몇 가지 기초 데이터셋 반복자(raw dataset iterator)를 제공합니다.예를 들어, ``AG_NEWS`` 데이터셋 반복자는 레이블(label)과 문장의 튜플(tuple) 형태로 가공되지 않은 데이터를 만듭니다.
###Code
import torch
from torchtext.datasets import AG_NEWS
train_iter = AG_NEWS(split='train')
###Output
_____no_output_____
###Markdown
:: next(train_iter) >>> (3, "Wall St. Bears Claw Back Into the Black (Reuters) Reuters - Short-sellers, Wall Street's dwindling\\band of ultra-cynics, are seeing green again.") next(train_iter) >>> (3, 'Carlyle Looks Toward Commercial Aerospace (Reuters) Reuters - Private investment firm Carlyle Group,\\which has a reputation for making well-timed and occasionally\\controversial plays in the defense industry, has quietly placed\\its bets on another part of the market.') next(train_iter) >>> (3, "Oil and Economy Cloud Stocks' Outlook (Reuters) Reuters - Soaring crude prices plus worries\\about the economy and the outlook for earnings are expected to\\hang over the stock market next week during the depth of the\\summer doldrums.") 데이터 처리 파이프라인 준비하기---------------------------------어휘집(vocab), 단어 벡터(word vector), 토크나이저(tokenizer)를 포함하여 torchtext 라이브러리의 가장 기본적인 구성요소를 재검토했습니다.이들은 가공되지 않은 텍스트 문자열에 대한 기본적인 데이터 처리 빌딩 블록(data processing building block)입니다.다음은 토크나이저 및 어휘집을 사용한 일반적인 NLP 데이터 처리의 예입니다.첫번째 단계는 가공되지 않은 학습 데이터셋으로 어휘집을 만드는 것입니다.사용자는 Vocab 클래스의 생성자에 인자를 설정하여 사용자 정의된 어휘집(customized vocab)을 만들 수 있습니다.토큰(token)들의 최소 빈도 ``min_freq`` 에 대한 예시는 아래와 같습니다.
###Code
from torchtext.data.utils import get_tokenizer
from collections import Counter
from torchtext.vocab import Vocab
tokenizer = get_tokenizer('basic_english')
train_iter = AG_NEWS(split='train')
counter = Counter()
for (label, line) in train_iter:
counter.update(tokenizer(line))
vocab = Vocab(counter, min_freq=1)
###Output
_____no_output_____
###Markdown
어휘집 블록(vocabulary block)은 토큰 목록을 정수로 변환합니다.:: [vocab[token] for token in ['here', 'is', 'an', 'example']] >>> [476, 22, 31, 5298]토크나이저와 어휘집을 갖춘 텍스트 처리 파이프라인을 준비합니다.텍스트 파이프라인과 레이블(label) 파이프라인은 데이터셋 반복자로부터 얻어온 가공되지 않은 문장 데이터를 처리하기 위해 사용됩니다.
###Code
text_pipeline = lambda x: [vocab[token] for token in tokenizer(x)]
label_pipeline = lambda x: int(x) - 1
###Output
_____no_output_____
###Markdown
텍스트 파이프라인은 어휘집에 정의된 룩업 테이블(순람표; lookup table)에 기반하여 텍스트 문장을 정수 목록으로 변환합니다.레이블(label) 파이프라인은 레이블을 정수로 변환합니다. 예를 들어,:: text_pipeline('here is the an example') >>> [475, 21, 2, 30, 5286] label_pipeline('10') >>> 9 데이터 배치(batch)와 반복자 생성하기----------------------------------------`torch.utils.data.DataLoader `__ 를권장합니다. (튜토리얼은 `여기 `__ 있습니다.)이는 ``getitem()`` 과 ``len()`` 프로토콜을 구현한 맵 형태(map-style)의 데이터셋으로 동작하며, 맵(map)처럼 인덱스/키로 데이터 샘플을 얻어옵니다.또한, 셔플(shuffle) 인자를 ``False`` 로 설정하면 반복 가능한(iteratable) 데이터셋처럼 동작합니다.모델로 보내기 전, ``collate_fn`` 함수는 ``DataLoader`` 로부터 생성된 샘플 배치로 동작합니다.``collate_fn`` 의 입력은 ``DataLoader`` 에 배치 크기(batch size)가 있는 배치(batch) 데이터이며,``collate_fn`` 은 이를 미리 선언된 데이터 처리 파이프라인에 따라 처리합니다.``collate_fn`` 이 최상위 수준으로 정의(top level def)되었는지 확인합니다. 이렇게 하면 모든 워커에서 이 함수를 사용할 수 있습니다.아래 예제에서, 주어진(original) 데이터 배치의 텍스트 항목들은 리스트(list)에 담긴(pack) 뒤 ``nn.EmbeddingBag`` 의 입력을 위한 하나의 텐서(tensor)로 합쳐(concatenate)집니다.오프셋(offset)은 텍스트 텐서(text tensor)에서 개별 시퀀스 시작 인덱스를 표현하기 위한 구분자(delimiter) 텐서입니다.레이블(label)은 개별 텍스트 항목의 레이블을 저장하는 텐서입니다.
###Code
from torch.utils.data import DataLoader
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
def collate_batch(batch):
label_list, text_list, offsets = [], [], [0]
for (_label, _text) in batch:
label_list.append(label_pipeline(_label))
processed_text = torch.tensor(text_pipeline(_text), dtype=torch.int64)
text_list.append(processed_text)
offsets.append(processed_text.size(0))
label_list = torch.tensor(label_list, dtype=torch.int64)
offsets = torch.tensor(offsets[:-1]).cumsum(dim=0)
text_list = torch.cat(text_list)
return label_list.to(device), text_list.to(device), offsets.to(device)
train_iter = AG_NEWS(split='train')
dataloader = DataLoader(train_iter, batch_size=8, shuffle=False, collate_fn=collate_batch)
###Output
_____no_output_____
###Markdown
모델 정의하기---------------모델은`nn.EmbeddingBag `__레이어와 분류(classification) 목적을 위한 선형 레이어로 구성됩니다.기본 모드가 "평균(mean)"인 ``nn.EmbeddingBag`` 은 임베딩들의 "가방(bag)"의 평균 값을 계산합니다.이때 텍스트(text) 항목들은 각기 그 길이가 다를 수 있지만, ``nn.EmbeddingBag`` 모듈은 텍스트의 길이를오프셋(offset)으로 저장하고 있으므로 패딩(padding)이 필요하지는 않습니다.덧붙여서, ``nn.EmbeddingBag`` 은 임베딩의 평균을 즉시 계산하기 때문에,텐서들의 시퀀스를 처리할 때 성능 및 메모리 효율성 측면에서의 장점도갖고 있습니다.
###Code
from torch import nn
class TextClassificationModel(nn.Module):
def __init__(self, vocab_size, embed_dim, num_class):
super(TextClassificationModel, self).__init__()
self.embedding = nn.EmbeddingBag(vocab_size, embed_dim, sparse=True)
self.fc = nn.Linear(embed_dim, num_class)
self.init_weights()
def init_weights(self):
initrange = 0.5
self.embedding.weight.data.uniform_(-initrange, initrange)
self.fc.weight.data.uniform_(-initrange, initrange)
self.fc.bias.data.zero_()
def forward(self, text, offsets):
embedded = self.embedding(text, offsets)
return self.fc(embedded)
###Output
_____no_output_____
###Markdown
인스턴스 생성하기-----------------``AG_NEWS`` 데이터셋에는 4종류의 레이블이 존재하므로 클래스의 개수도 4개입니다.:: 1 : World (세계) 2 : Sports (스포츠) 3 : Business (경제) 4 : Sci/Tec (과학/기술)임베딩 차원이 64인 모델을 만듭니다.어휘집의 크기(Vocab size)는 어휘집(vocab)의 길이와 같습니다.클래스의 개수는 레이블의 개수와 같습니다.
###Code
train_iter = AG_NEWS(split='train')
num_class = len(set([label for (label, text) in train_iter]))
vocab_size = len(vocab)
emsize = 64
model = TextClassificationModel(vocab_size, emsize, num_class).to(device)
###Output
_____no_output_____
###Markdown
모델을 학습하고 결과를 평가하는 함수 정의하기---------------------------------------------
###Code
import time
def train(dataloader):
model.train()
total_acc, total_count = 0, 0
log_interval = 500
start_time = time.time()
for idx, (label, text, offsets) in enumerate(dataloader):
optimizer.zero_grad()
predited_label = model(text, offsets)
loss = criterion(predited_label, label)
loss.backward()
torch.nn.utils.clip_grad_norm_(model.parameters(), 0.1)
optimizer.step()
total_acc += (predited_label.argmax(1) == label).sum().item()
total_count += label.size(0)
if idx % log_interval == 0 and idx > 0:
elapsed = time.time() - start_time
print('| epoch {:3d} | {:5d}/{:5d} batches '
'| accuracy {:8.3f}'.format(epoch, idx, len(dataloader),
total_acc/total_count))
total_acc, total_count = 0, 0
start_time = time.time()
def evaluate(dataloader):
model.eval()
total_acc, total_count = 0, 0
with torch.no_grad():
for idx, (label, text, offsets) in enumerate(dataloader):
predited_label = model(text, offsets)
loss = criterion(predited_label, label)
total_acc += (predited_label.argmax(1) == label).sum().item()
total_count += label.size(0)
return total_acc/total_count
###Output
_____no_output_____
###Markdown
데이터셋을 분할하고 모델 수행하기---------------------------------원본 ``AG_NEWS`` 에는 검증용 데이터가 포함되어 있지 않기 때문에, 우리는 학습데이터를 학습 및 검증 데이터로 분할하려 합니다. 이때 데이터를 분할하는비율은 0.95(학습)와 0.05(검증) 입니다. 우리는 여기서 PyTorch의핵심 라이브러리 중 하나인`torch.utils.data.dataset.random_split `__함수를 사용합니다.`CrossEntropyLoss `__기준(criterion)은 각 클래스에 대해 ``nn.LogSoftmax()`` 와 ``nn.NLLLoss()`` 를합쳐놓은 방식입니다.`SGD `__optimizer는 확률적 경사 하강법를 구현해놓은 것입니다. 처음의 학습률은5.0으로 두었습니다. 매 에폭을 진행하면서 학습률을 조절할 때는`StepLR `__을 사용합니다.
###Code
from torch.utils.data.dataset import random_split
# Hyperparameters
EPOCHS = 10 # epoch
LR = 5 # learning rate
BATCH_SIZE = 64 # batch size for training
criterion = torch.nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=LR)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, 1.0, gamma=0.1)
total_accu = None
train_iter, test_iter = AG_NEWS()
train_dataset = list(train_iter)
test_dataset = list(test_iter)
num_train = int(len(train_dataset) * 0.95)
split_train_, split_valid_ = \
random_split(train_dataset, [num_train, len(train_dataset) - num_train])
train_dataloader = DataLoader(split_train_, batch_size=BATCH_SIZE,
shuffle=True, collate_fn=collate_batch)
valid_dataloader = DataLoader(split_valid_, batch_size=BATCH_SIZE,
shuffle=True, collate_fn=collate_batch)
test_dataloader = DataLoader(test_dataset, batch_size=BATCH_SIZE,
shuffle=True, collate_fn=collate_batch)
for epoch in range(1, EPOCHS + 1):
epoch_start_time = time.time()
train(train_dataloader)
accu_val = evaluate(valid_dataloader)
if total_accu is not None and total_accu > accu_val:
scheduler.step()
else:
total_accu = accu_val
print('-' * 59)
print('| end of epoch {:3d} | time: {:5.2f}s | '
'valid accuracy {:8.3f} '.format(epoch,
time.time() - epoch_start_time,
accu_val))
print('-' * 59)
###Output
_____no_output_____
###Markdown
이 모델을 GPU 상에서 수행하고 다음과 같은 결과를 얻었습니다::: | epoch 1 | 500/ 1782 batches | accuracy 0.684 | epoch 1 | 1000/ 1782 batches | accuracy 0.852 | epoch 1 | 1500/ 1782 batches | accuracy 0.877 ----------------------------------------------------------- | end of epoch 1 | time: 8.33s | valid accuracy 0.867 ----------------------------------------------------------- | epoch 2 | 500/ 1782 batches | accuracy 0.895 | epoch 2 | 1000/ 1782 batches | accuracy 0.900 | epoch 2 | 1500/ 1782 batches | accuracy 0.903 ----------------------------------------------------------- | end of epoch 2 | time: 8.18s | valid accuracy 0.890 ----------------------------------------------------------- | epoch 3 | 500/ 1782 batches | accuracy 0.914 | epoch 3 | 1000/ 1782 batches | accuracy 0.914 | epoch 3 | 1500/ 1782 batches | accuracy 0.916 ----------------------------------------------------------- | end of epoch 3 | time: 8.20s | valid accuracy 0.897 ----------------------------------------------------------- | epoch 4 | 500/ 1782 batches | accuracy 0.926 | epoch 4 | 1000/ 1782 batches | accuracy 0.924 | epoch 4 | 1500/ 1782 batches | accuracy 0.921 ----------------------------------------------------------- | end of epoch 4 | time: 8.18s | valid accuracy 0.895 ----------------------------------------------------------- | epoch 5 | 500/ 1782 batches | accuracy 0.938 | epoch 5 | 1000/ 1782 batches | accuracy 0.935 | epoch 5 | 1500/ 1782 batches | accuracy 0.937 ----------------------------------------------------------- | end of epoch 5 | time: 8.16s | valid accuracy 0.902 ----------------------------------------------------------- | epoch 6 | 500/ 1782 batches | accuracy 0.939 | epoch 6 | 1000/ 1782 batches | accuracy 0.939 | epoch 6 | 1500/ 1782 batches | accuracy 0.938 ----------------------------------------------------------- | end of epoch 6 | time: 8.16s | valid accuracy 0.906 ----------------------------------------------------------- | epoch 7 | 500/ 1782 batches | accuracy 0.941 | epoch 7 | 1000/ 1782 batches | accuracy 0.939 | epoch 7 | 1500/ 1782 batches | accuracy 0.939 ----------------------------------------------------------- | end of epoch 7 | time: 8.19s | valid accuracy 0.903 ----------------------------------------------------------- | epoch 8 | 500/ 1782 batches | accuracy 0.942 | epoch 8 | 1000/ 1782 batches | accuracy 0.941 | epoch 8 | 1500/ 1782 batches | accuracy 0.942 ----------------------------------------------------------- | end of epoch 8 | time: 8.16s | valid accuracy 0.904 ----------------------------------------------------------- | epoch 9 | 500/ 1782 batches | accuracy 0.942 | epoch 9 | 1000/ 1782 batches | accuracy 0.941 | epoch 9 | 1500/ 1782 batches | accuracy 0.942 ----------------------------------------------------------- end of epoch 9 | time: 8.16s | valid accuracy 0.904 ----------------------------------------------------------- | epoch 10 | 500/ 1782 batches | accuracy 0.940 | epoch 10 | 1000/ 1782 batches | accuracy 0.942 | epoch 10 | 1500/ 1782 batches | accuracy 0.942 ----------------------------------------------------------- | end of epoch 10 | time: 8.15s | valid accuracy 0.904 ----------------------------------------------------------- 평가 데이터로 모델 평가하기------------------------------- 평가 데이터셋을 통한 결과를 확인합니다...
###Code
print('Checking the results of test dataset.')
accu_test = evaluate(test_dataloader)
print('test accuracy {:8.3f}'.format(accu_test))
###Output
_____no_output_____
###Markdown
:: test accuracy 0.906 임의의 뉴스로 평가하기----------------------현재까지 최고의 모델로 골프 뉴스를 테스트해보겠습니다.
###Code
ag_news_label = {1: "World",
2: "Sports",
3: "Business",
4: "Sci/Tec"}
def predict(text, text_pipeline):
with torch.no_grad():
text = torch.tensor(text_pipeline(text))
output = model(text, torch.tensor([0]))
return output.argmax(1).item() + 1
ex_text_str = "MEMPHIS, Tenn. – Four days ago, Jon Rahm was \
enduring the season’s worst weather conditions on Sunday at The \
Open on his way to a closing 75 at Royal Portrush, which \
considering the wind and the rain was a respectable showing. \
Thursday’s first round at the WGC-FedEx St. Jude Invitational \
was another story. With temperatures in the mid-80s and hardly any \
wind, the Spaniard was 13 strokes better in a flawless round. \
Thanks to his best putting performance on the PGA Tour, Rahm \
finished with an 8-under 62 for a three-stroke lead, which \
was even more impressive considering he’d never played the \
front nine at TPC Southwind."
model = model.to("cpu")
print("This is a %s news" %ag_news_label[predict(ex_text_str, text_pipeline)])
###Output
_____no_output_____
###Markdown
torchtext 라이브러리로 텍스트 분류하기===============================================**번역**: `김강민 `_ , `김진현 `_이 튜토리얼에서는 torchtext 라이브러리를 사용하여 어떻게 텍스트 분류 분석을 위한 데이터셋을 만드는지를 살펴보겠습니다.다음과 같은 내용들을 알게 됩니다: - 반복자(iterator)로 가공되지 않은 데이터(raw data)에 접근하기 - 가공되지 않은 텍스트 문장들을 모델 학습에 사용할 수 있는 ``torch.Tensor`` 로 변환하는 데이터 처리 파이프라인 만들기 - `torch.utils.data.DataLoader `__ 를 사용하여 데이터를 섞고 반복하기(shuffle and iterate) 기초 데이터셋 반복자(raw data iterator)에 접근하기-------------------------------------------------------------torchtext 라이브러리는 가공되지 않은 텍스트 문장들을 만드는(yield) 몇 가지 기초 데이터셋 반복자(raw dataset iterator)를 제공합니다.예를 들어, ``AG_NEWS`` 데이터셋 반복자는 레이블(label)과 문장의 튜플(tuple) 형태로 가공되지 않은 데이터를 만듭니다.
###Code
import torch
from torchtext.datasets import AG_NEWS
train_iter = AG_NEWS(split='train')
###Output
_____no_output_____
###Markdown
:: next(train_iter) >>> (3, "Wall St. Bears Claw Back Into the Black (Reuters) Reuters - Short-sellers, Wall Street's dwindling\\band of ultra-cynics, are seeing green again.") next(train_iter) >>> (3, 'Carlyle Looks Toward Commercial Aerospace (Reuters) Reuters - Private investment firm Carlyle Group,\\which has a reputation for making well-timed and occasionally\\controversial plays in the defense industry, has quietly placed\\its bets on another part of the market.') next(train_iter) >>> (3, "Oil and Economy Cloud Stocks' Outlook (Reuters) Reuters - Soaring crude prices plus worries\\about the economy and the outlook for earnings are expected to\\hang over the stock market next week during the depth of the\\summer doldrums.") 데이터 처리 파이프라인 준비하기---------------------------------어휘집(vocab), 단어 벡터(word vector), 토크나이저(tokenizer)를 포함하여 torchtext 라이브러리의 가장 기본적인 구성요소를 재검토했습니다.이들은 가공되지 않은 텍스트 문자열에 대한 기본적인 데이터 처리 빌딩 블록(data processing building block)입니다.다음은 토크나이저 및 어휘집을 사용한 일반적인 NLP 데이터 처리의 예입니다.첫번째 단계는 가공되지 않은 학습 데이터셋으로 어휘집을 만드는 것입니다.여기에서는 토큰의 목록 또는 반복자를 받는 내장(built-in) 팩토리 함수(factory function) `build_vocab_from_iterator` 를 사용합니다.사용자는 어휘집에 추가할 특수 기호(special symbol) 같은 것들을 전달할 수도 있습니다.
###Code
from torchtext.data.utils import get_tokenizer
from torchtext.vocab import build_vocab_from_iterator
tokenizer = get_tokenizer('basic_english')
train_iter = AG_NEWS(split='train')
def yield_tokens(data_iter):
for _, text in data_iter:
yield tokenizer(text)
vocab = build_vocab_from_iterator(yield_tokens(train_iter), specials=["<unk>"])
vocab.set_default_index(vocab["<unk>"])
###Output
_____no_output_____
###Markdown
어휘집 블록(vocabulary block)은 토큰 목록을 정수로 변환합니다.:: vocab(['here', 'is', 'an', 'example']) >>> [475, 21, 30, 5286]토크나이저와 어휘집을 갖춘 텍스트 처리 파이프라인을 준비합니다.텍스트 파이프라인과 레이블(label) 파이프라인은 데이터셋 반복자로부터 얻어온 가공되지 않은 문장 데이터를 처리하기 위해 사용됩니다.
###Code
text_pipeline = lambda x: vocab(tokenizer(x))
label_pipeline = lambda x: int(x) - 1
###Output
_____no_output_____
###Markdown
텍스트 파이프라인은 어휘집에 정의된 룩업 테이블(순람표; lookup table)에 기반하여 텍스트 문장을 정수 목록으로 변환합니다.레이블(label) 파이프라인은 레이블을 정수로 변환합니다. 예를 들어,:: text_pipeline('here is the an example') >>> [475, 21, 2, 30, 5286] label_pipeline('10') >>> 9 데이터 배치(batch)와 반복자 생성하기----------------------------------------`torch.utils.data.DataLoader `__ 를권장합니다. (튜토리얼은 `여기 `__ 있습니다.)이는 ``getitem()`` 과 ``len()`` 프로토콜을 구현한 맵 형태(map-style)의 데이터셋으로 동작하며, 맵(map)처럼 인덱스/키로 데이터 샘플을 얻어옵니다.또한, 셔플(shuffle) 인자를 ``False`` 로 설정하면 순회 가능한(iterable) 데이터셋처럼 동작합니다.모델로 보내기 전, ``collate_fn`` 함수는 ``DataLoader`` 로부터 생성된 샘플 배치로 동작합니다.``collate_fn`` 의 입력은 ``DataLoader`` 에 배치 크기(batch size)가 있는 배치(batch) 데이터이며,``collate_fn`` 은 이를 미리 선언된 데이터 처리 파이프라인에 따라 처리합니다.``collate_fn`` 이 최상위 수준으로 정의(top level def)되었는지 확인합니다. 이렇게 하면 모든 워커에서 이 함수를 사용할 수 있습니다.아래 예제에서, 주어진(original) 데이터 배치의 텍스트 항목들은 리스트(list)에 담긴(pack) 뒤 ``nn.EmbeddingBag`` 의 입력을 위한 하나의 tensor로 합쳐(concatenate)집니다.오프셋(offset)은 텍스트 tensor에서 개별 시퀀스 시작 인덱스를 표현하기 위한 구분자(delimiter) tensor입니다.레이블(label)은 개별 텍스트 항목의 레이블을 저장하는 tensor입니다.
###Code
from torch.utils.data import DataLoader
device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
def collate_batch(batch):
label_list, text_list, offsets = [], [], [0]
for (_label, _text) in batch:
label_list.append(label_pipeline(_label))
processed_text = torch.tensor(text_pipeline(_text), dtype=torch.int64)
text_list.append(processed_text)
offsets.append(processed_text.size(0))
label_list = torch.tensor(label_list, dtype=torch.int64)
offsets = torch.tensor(offsets[:-1]).cumsum(dim=0)
text_list = torch.cat(text_list)
return label_list.to(device), text_list.to(device), offsets.to(device)
train_iter = AG_NEWS(split='train')
dataloader = DataLoader(train_iter, batch_size=8, shuffle=False, collate_fn=collate_batch)
###Output
_____no_output_____
###Markdown
모델 정의하기---------------모델은`nn.EmbeddingBag `__레이어와 분류(classification) 목적을 위한 선형 레이어로 구성됩니다.기본 모드가 "평균(mean)"인 ``nn.EmbeddingBag`` 은 임베딩들의 "가방(bag)"의 평균 값을 계산합니다.이때 텍스트(text) 항목들은 각기 그 길이가 다를 수 있지만, ``nn.EmbeddingBag`` 모듈은 텍스트의 길이를오프셋(offset)으로 저장하고 있으므로 패딩(padding)이 필요하지는 않습니다.덧붙여서, ``nn.EmbeddingBag`` 은 임베딩의 평균을 즉시 계산하기 때문에,tensor들의 시퀀스를 처리할 때 성능 및 메모리 효율성 측면에서의 장점도갖고 있습니다.
###Code
from torch import nn
class TextClassificationModel(nn.Module):
def __init__(self, vocab_size, embed_dim, num_class):
super(TextClassificationModel, self).__init__()
self.embedding = nn.EmbeddingBag(vocab_size, embed_dim, sparse=True)
self.fc = nn.Linear(embed_dim, num_class)
self.init_weights()
def init_weights(self):
initrange = 0.5
self.embedding.weight.data.uniform_(-initrange, initrange)
self.fc.weight.data.uniform_(-initrange, initrange)
self.fc.bias.data.zero_()
def forward(self, text, offsets):
embedded = self.embedding(text, offsets)
return self.fc(embedded)
###Output
_____no_output_____
###Markdown
인스턴스 생성하기-----------------``AG_NEWS`` 데이터셋에는 4종류의 레이블이 존재하므로 클래스의 개수도 4개입니다.:: 1 : World (세계) 2 : Sports (스포츠) 3 : Business (경제) 4 : Sci/Tec (과학/기술)임베딩 차원이 64인 모델을 만듭니다.어휘집의 크기(Vocab size)는 어휘집(vocab)의 길이와 같습니다.클래스의 개수는 레이블의 개수와 같습니다.
###Code
train_iter = AG_NEWS(split='train')
num_class = len(set([label for (label, text) in train_iter]))
vocab_size = len(vocab)
emsize = 64
model = TextClassificationModel(vocab_size, emsize, num_class).to(device)
###Output
_____no_output_____
###Markdown
모델을 학습하고 결과를 평가하는 함수 정의하기---------------------------------------------
###Code
import time
def train(dataloader):
model.train()
total_acc, total_count = 0, 0
log_interval = 500
start_time = time.time()
for idx, (label, text, offsets) in enumerate(dataloader):
optimizer.zero_grad()
predited_label = model(text, offsets)
loss = criterion(predited_label, label)
loss.backward()
torch.nn.utils.clip_grad_norm_(model.parameters(), 0.1)
optimizer.step()
total_acc += (predited_label.argmax(1) == label).sum().item()
total_count += label.size(0)
if idx % log_interval == 0 and idx > 0:
elapsed = time.time() - start_time
print('| epoch {:3d} | {:5d}/{:5d} batches '
'| accuracy {:8.3f}'.format(epoch, idx, len(dataloader),
total_acc/total_count))
total_acc, total_count = 0, 0
start_time = time.time()
def evaluate(dataloader):
model.eval()
total_acc, total_count = 0, 0
with torch.no_grad():
for idx, (label, text, offsets) in enumerate(dataloader):
predited_label = model(text, offsets)
loss = criterion(predited_label, label)
total_acc += (predited_label.argmax(1) == label).sum().item()
total_count += label.size(0)
return total_acc/total_count
###Output
_____no_output_____
###Markdown
데이터셋을 분할하고 모델 수행하기---------------------------------원본 ``AG_NEWS`` 에는 검증용 데이터가 포함되어 있지 않기 때문에, 우리는 학습데이터를 학습 및 검증 데이터로 분할하려 합니다. 이때 데이터를 분할하는비율은 0.95(학습)와 0.05(검증) 입니다. 우리는 여기서 PyTorch의핵심 라이브러리 중 하나인`torch.utils.data.dataset.random_split `__함수를 사용합니다.`CrossEntropyLoss `__기준(criterion)은 각 클래스에 대해 ``nn.LogSoftmax()`` 와 ``nn.NLLLoss()`` 를합쳐놓은 방식입니다.`SGD `__optimizer는 확률적 경사 하강법를 구현해놓은 것입니다. 처음의 학습률은5.0으로 두었습니다. 매 에폭을 진행하면서 학습률을 조절할 때는`StepLR `__을 사용합니다.
###Code
from torch.utils.data.dataset import random_split
from torchtext.data.functional import to_map_style_dataset
# Hyperparameters
EPOCHS = 10 # epoch
LR = 5 # learning rate
BATCH_SIZE = 64 # batch size for training
criterion = torch.nn.CrossEntropyLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=LR)
scheduler = torch.optim.lr_scheduler.StepLR(optimizer, 1.0, gamma=0.1)
total_accu = None
train_iter, test_iter = AG_NEWS()
train_dataset = to_map_style_dataset(train_iter)
test_dataset = to_map_style_dataset(test_iter)
num_train = int(len(train_dataset) * 0.95)
split_train_, split_valid_ = \
random_split(train_dataset, [num_train, len(train_dataset) - num_train])
train_dataloader = DataLoader(split_train_, batch_size=BATCH_SIZE,
shuffle=True, collate_fn=collate_batch)
valid_dataloader = DataLoader(split_valid_, batch_size=BATCH_SIZE,
shuffle=True, collate_fn=collate_batch)
test_dataloader = DataLoader(test_dataset, batch_size=BATCH_SIZE,
shuffle=True, collate_fn=collate_batch)
for epoch in range(1, EPOCHS + 1):
epoch_start_time = time.time()
train(train_dataloader)
accu_val = evaluate(valid_dataloader)
if total_accu is not None and total_accu > accu_val:
scheduler.step()
else:
total_accu = accu_val
print('-' * 59)
print('| end of epoch {:3d} | time: {:5.2f}s | '
'valid accuracy {:8.3f} '.format(epoch,
time.time() - epoch_start_time,
accu_val))
print('-' * 59)
###Output
_____no_output_____
###Markdown
Evaluate the model with the test dataset------------------------------- Checking the results of the test dataset...
###Code
print('Checking the results of test dataset.')
accu_test = evaluate(test_dataloader)
print('test accuracy {:8.3f}'.format(accu_test))
###Output
_____no_output_____
###Markdown
Test with a random news item----------------------Use the best model so far and test a golf news item.
###Code
ag_news_label = {1: "World",
2: "Sports",
3: "Business",
4: "Sci/Tec"}
def predict(text, text_pipeline):
with torch.no_grad():
text = torch.tensor(text_pipeline(text))
output = model(text, torch.tensor([0]))
return output.argmax(1).item() + 1
ex_text_str = "MEMPHIS, Tenn. – Four days ago, Jon Rahm was \
enduring the season’s worst weather conditions on Sunday at The \
Open on his way to a closing 75 at Royal Portrush, which \
considering the wind and the rain was a respectable showing. \
Thursday’s first round at the WGC-FedEx St. Jude Invitational \
was another story. With temperatures in the mid-80s and hardly any \
wind, the Spaniard was 13 strokes better in a flawless round. \
Thanks to his best putting performance on the PGA Tour, Rahm \
finished with an 8-under 62 for a three-stroke lead, which \
was even more impressive considering he’d never played the \
front nine at TPC Southwind."
model = model.to("cpu")
print("This is a %s news" %ag_news_label[predict(ex_text_str, text_pipeline)])
###Output
_____no_output_____
dev/Getting uncertainty arrays for a matrix.ipynb | ###Markdown
Getting the uncertainty arraysThis notebook shows how to get the uncertainty arrays from a matrix with multiple datapackages.We need to build an LCA to do this:
###Code
import bw2data as bd   # assumed alias: the original cell uses `bd` and `bc` without importing them
import bw2calc as bc   # assumed alias
fu = {bd.Database('ecoinvent 3.8 cutoff').random(): 1}
fu_mapped, packages, _ = bd.prepare_lca_inputs(demand=fu, remapping=False) # Could also add LCIA method
lca = bc.LCA(demand=fu_mapped, data_objs=packages)
lca.lci()
lca.technosphere_mm.input_uncertainties()
###Output
_____no_output_____
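###Markdown
Editorial addition: a hedged sketch for inspecting what ``input_uncertainties()`` returned. The exact return type is an assumption, so the code only probes it generically.
###Code
unc = lca.technosphere_mm.input_uncertainties()
print(type(unc))
# If it is a NumPy structured array, show its field names and a few rows
if hasattr(unc, "dtype") and getattr(unc.dtype, "names", None):
    print(unc.dtype.names)
    print(unc[:5])
###Output
_____no_output_____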
.ipynb_checkpoints/Train_CNN_Analog-Readout_Version-checkpoint.ipynb | ###Markdown
CNN TrainingThe target of this code is to train a CNN network to extract the needle position of an analog needle device. Preparing the training* First, all libraries are loaded * It is assumed that they were installed during the Python setup* matplotlib is set to print the output inline in the Jupyter notebook
###Code
import os
import tensorflow as tf
import matplotlib.pyplot as plt
import glob
import numpy as np
from sklearn.utils import shuffle
from tensorflow.python import keras
from tensorflow.python.keras import Sequential
from tensorflow.python.keras.layers import Dense, InputLayer, Conv2D, MaxPool2D, Flatten, BatchNormalization
from tensorflow.keras.preprocessing.image import ImageDataGenerator
from sklearn.model_selection import train_test_split
import tensorflow.keras.backend as K
from tensorflow.keras.callbacks import History
import math
from PIL import Image
loss_ges = np.array([])
val_loss_ges = np.array([])
%matplotlib inline
np.set_printoptions(precision=4)
np.set_printoptions(suppress=True)
###Output
_____no_output_____
###Markdown
Load training data* The data is expected in the "Input_dir"* Picture size must be 32x32 with 3 color channels (RGB)* The filename contains the information needed for training in the first 3 digits::* Typical filename: * x.y-zzzz.jpg * e.g. "4.6_Lfd-1406_zeiger3_2019-06-02T050011"|Place holder | Meaning | Usage ||------------- |-----------------------------|--------------|| **x.y** | readout value | **to be learned** || zzzz | additional information | not needed |* The images are stored in x_data[]* The expected output for each image is stored in the corresponding y_data[] * The periodic nature is reflected in a **sin/cos coding**, which allows the angle/counter value to be restored with an arctan later on.* The last step is a shuffle (from sklearn.utils), as the filenames are in order due to the encoding of the expected analog readout in the filename
###Code
Input_dir='data_resize_all'
files = glob.glob(Input_dir + '/*.*')
x_data = []
y_data = []
for aktfile in files:
test_image = Image.open(aktfile)
test_image = np.array(test_image, dtype="float32")
test_image = np.reshape(test_image, (32,32,3))
base = os.path.basename(aktfile)
target_number = (float(base[0:3])) / 10
target_sin = math.sin(target_number * math.pi * 2)
target_cos = math.cos(target_number * math.pi * 2)
x_data.append(test_image)
zw = np.array([target_sin, target_cos])
y_data.append(zw)
x_data = np.array(x_data)
y_data = np.array(y_data)
print(x_data.shape)
print(y_data.shape)
x_data, y_data = shuffle(x_data, y_data)
X_train, X_test, y_train, y_test = train_test_split(x_data, y_data, test_size=0.1)
###Output
(4947, 32, 32, 3)
(4947, 2)
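###Markdown
Editorial addition: a minimal sketch of the sin/cos round trip described above, using a hypothetical reading of 7.3 on a 0..10 dial. The encoded pair is restored with arctan2, which handles the periodic wrap-around.
###Code
reading = 7.3 / 10
enc_sin, enc_cos = math.sin(reading * 2 * math.pi), math.cos(reading * 2 * math.pi)
decoded = (math.atan2(enc_sin, enc_cos) / (2 * math.pi)) % 1
print(decoded * 10)   # ~7.3
###Output
_____no_output_____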
###Markdown
Define the modelThe layout of the network is a typical CNN with alternating **Conv2D** and **MaxPool2D** layers, finished after **flattening** with additional **Dense** layers. Important* Shape of the input layer: (32, 32, 3)* Shape of the output layer: (2) - sin and cos
###Code
model = Sequential()
model.add(BatchNormalization(input_shape=(32,32,3)))
model.add(Conv2D(64, (5, 5), input_shape=(32,32,3), padding='same', activation="relu"))
model.add(MaxPool2D(pool_size=(4,4)))
model.add(Conv2D(32, (5, 5), padding='same', activation="relu"))
model.add(MaxPool2D(pool_size=(4,4)))
model.add(Conv2D(32, (3, 3), padding='same', activation="relu"))
model.add(MaxPool2D(pool_size=(2,2)))
model.add(Flatten())
model.add(Dense(128,activation="relu"))
model.add(Dense(64,activation="relu"))
model.add(Dense(2))
model.summary()
model.compile(loss=keras.losses.mean_squared_error, optimizer=tf.keras.optimizers.Adadelta(learning_rate=1.0, rho=0.95), metrics = ["accuracy"])
###Output
Model: "sequential"
_________________________________________________________________
Layer (type) Output Shape Param #
=================================================================
batch_normalization (BatchNo (None, 32, 32, 3) 12
_________________________________________________________________
conv2d (Conv2D) (None, 32, 32, 64) 4864
_________________________________________________________________
max_pooling2d (MaxPooling2D) (None, 8, 8, 64) 0
_________________________________________________________________
conv2d_1 (Conv2D) (None, 8, 8, 32) 51232
_________________________________________________________________
max_pooling2d_1 (MaxPooling2 (None, 2, 2, 32) 0
_________________________________________________________________
conv2d_2 (Conv2D) (None, 2, 2, 32) 9248
_________________________________________________________________
max_pooling2d_2 (MaxPooling2 (None, 1, 1, 32) 0
_________________________________________________________________
flatten (Flatten) (None, 32) 0
_________________________________________________________________
dense (Dense) (None, 128) 4224
_________________________________________________________________
dense_1 (Dense) (None, 64) 8256
_________________________________________________________________
dense_2 (Dense) (None, 2) 130
=================================================================
Total params: 77,966
Trainable params: 77,960
Non-trainable params: 6
_________________________________________________________________
###Markdown
TrainingThe input pictures are randomly scattered for brightness and pixel shift variations. This is implemented with an ImageDataGenerator.The training is split into two steps:1. Variation of the brightness only2. Variation of brightness and pixel shift Step 1: Brightness scattering only
###Code
Batch_Size = 8
Epoch_Anz = 30
Shift_Range = 0
Brightness_Range = 0.3
datagen = ImageDataGenerator(width_shift_range=[-Shift_Range,Shift_Range], height_shift_range=[-Shift_Range,Shift_Range],brightness_range=[1-Brightness_Range,1+Brightness_Range])
train_iterator = datagen.flow(X_train, y_train, batch_size=Batch_Size)
validation_iterator = datagen.flow(X_test, y_test, batch_size=Batch_Size)
history = model.fit_generator(train_iterator, validation_data = validation_iterator, epochs = Epoch_Anz)
###Output
WARNING:tensorflow:From <ipython-input-4-9f5ba6453a73>:11: Model.fit_generator (from tensorflow.python.keras.engine.training) is deprecated and will be removed in a future version.
Instructions for updating:
Please use Model.fit, which supports generators.
WARNING:tensorflow:sample_weight modes were coerced from
...
to
['...']
WARNING:tensorflow:sample_weight modes were coerced from
...
to
['...']
Train for 557 steps, validate for 62 steps
Epoch 1/30
557/557 [==============================] - 11s 20ms/step - loss: 0.1815 - accuracy: 0.8156 - val_loss: 0.0351 - val_accuracy: 0.9152
Epoch 2/30
557/557 [==============================] - 11s 20ms/step - loss: 0.0182 - accuracy: 0.9638 - val_loss: 0.0082 - val_accuracy: 0.9818
Epoch 3/30
557/557 [==============================] - 11s 20ms/step - loss: 0.0097 - accuracy: 0.9670 - val_loss: 0.0113 - val_accuracy: 0.9636
Epoch 4/30
557/557 [==============================] - 12s 21ms/step - loss: 0.0068 - accuracy: 0.9760 - val_loss: 0.0047 - val_accuracy: 0.9919
Epoch 5/30
557/557 [==============================] - 12s 21ms/step - loss: 0.0048 - accuracy: 0.9746 - val_loss: 0.0038 - val_accuracy: 0.9879
Epoch 6/30
557/557 [==============================] - 12s 21ms/step - loss: 0.0038 - accuracy: 0.9805 - val_loss: 0.0023 - val_accuracy: 0.9879
Epoch 7/30
557/557 [==============================] - 13s 23ms/step - loss: 0.0031 - accuracy: 0.9805 - val_loss: 0.0038 - val_accuracy: 0.9677
Epoch 8/30
557/557 [==============================] - 13s 23ms/step - loss: 0.0027 - accuracy: 0.9807 - val_loss: 0.0055 - val_accuracy: 0.9879
Epoch 9/30
557/557 [==============================] - 13s 23ms/step - loss: 0.0024 - accuracy: 0.9856 - val_loss: 0.0049 - val_accuracy: 0.9778
Epoch 10/30
557/557 [==============================] - 13s 24ms/step - loss: 0.0022 - accuracy: 0.9834 - val_loss: 0.0025 - val_accuracy: 0.9697
Epoch 11/30
557/557 [==============================] - 13s 23ms/step - loss: 0.0019 - accuracy: 0.9841 - val_loss: 0.0027 - val_accuracy: 0.9818
Epoch 12/30
557/557 [==============================] - 13s 24ms/step - loss: 0.0018 - accuracy: 0.9836 - val_loss: 0.0015 - val_accuracy: 0.9859
Epoch 13/30
557/557 [==============================] - 13s 24ms/step - loss: 0.0016 - accuracy: 0.9856 - val_loss: 0.0025 - val_accuracy: 0.9879
Epoch 14/30
557/557 [==============================] - 13s 23ms/step - loss: 0.0015 - accuracy: 0.9834 - val_loss: 0.0012 - val_accuracy: 0.9859
Epoch 15/30
557/557 [==============================] - 12s 21ms/step - loss: 0.0014 - accuracy: 0.9854 - val_loss: 0.0015 - val_accuracy: 0.9879
Epoch 16/30
557/557 [==============================] - 13s 24ms/step - loss: 0.0013 - accuracy: 0.9863 - val_loss: 0.0018 - val_accuracy: 0.9960
Epoch 17/30
557/557 [==============================] - 14s 25ms/step - loss: 0.0013 - accuracy: 0.9876 - val_loss: 0.0024 - val_accuracy: 0.9838
Epoch 18/30
557/557 [==============================] - 13s 23ms/step - loss: 0.0011 - accuracy: 0.9852 - val_loss: 0.0012 - val_accuracy: 0.9899
Epoch 19/30
557/557 [==============================] - 11s 20ms/step - loss: 0.0011 - accuracy: 0.9867 - val_loss: 0.0015 - val_accuracy: 0.9899
Epoch 20/30
557/557 [==============================] - 13s 23ms/step - loss: 0.0011 - accuracy: 0.9865 - val_loss: 0.0015 - val_accuracy: 0.9939
Epoch 21/30
557/557 [==============================] - 14s 26ms/step - loss: 0.0010 - accuracy: 0.9874 - val_loss: 0.0017 - val_accuracy: 0.9899
Epoch 22/30
557/557 [==============================] - 13s 24ms/step - loss: 0.0010 - accuracy: 0.9888 - val_loss: 0.0014 - val_accuracy: 0.9838
Epoch 23/30
557/557 [==============================] - 12s 22ms/step - loss: 9.7439e-04 - accuracy: 0.9872 - val_loss: 0.0013 - val_accuracy: 0.9939
Epoch 24/30
557/557 [==============================] - 11s 20ms/step - loss: 9.1174e-04 - accuracy: 0.9881 - val_loss: 0.0013 - val_accuracy: 0.9838
Epoch 25/30
557/557 [==============================] - 12s 22ms/step - loss: 8.5049e-04 - accuracy: 0.9883 - val_loss: 0.0014 - val_accuracy: 0.9859
Epoch 26/30
557/557 [==============================] - 12s 22ms/step - loss: 8.7098e-04 - accuracy: 0.9881 - val_loss: 0.0012 - val_accuracy: 0.9879
Epoch 27/30
557/557 [==============================] - 13s 23ms/step - loss: 8.2587e-04 - accuracy: 0.9870 - val_loss: 0.0010 - val_accuracy: 0.9899
Epoch 28/30
557/557 [==============================] - 12s 22ms/step - loss: 7.8452e-04 - accuracy: 0.9874 - val_loss: 9.1870e-04 - val_accuracy: 0.9939
Epoch 29/30
557/557 [==============================] - 13s 23ms/step - loss: 7.8595e-04 - accuracy: 0.9872 - val_loss: 0.0010 - val_accuracy: 0.9899
Epoch 30/30
557/557 [==============================] - 13s 24ms/step - loss: 7.5799e-04 - accuracy: 0.9899 - val_loss: 0.0012 - val_accuracy: 0.9960
###Markdown
Step 1: Learning result * Visualization of the training and validation results
###Code
loss_ges = np.append(loss_ges, history.history['loss'])
val_loss_ges = np.append(val_loss_ges, history.history['val_loss'])
plt.semilogy(history.history['loss'])
plt.semilogy(history.history['val_loss'])
plt.title('model loss')
plt.ylabel('loss')
plt.xlabel('epoch')
plt.legend(['train','eval'], loc='upper left')
plt.show()
###Output
_____no_output_____
###Markdown
Step 2: Brightness and pixel shift scatteringHere a higher number of epochs is used to reach the minimum of the loss function
###Code
Batch_Size = 8
Epoch_Anz = 80
Shift_Range = 3
Brightness_Range = 0.3
datagen = ImageDataGenerator(width_shift_range=[-Shift_Range,Shift_Range], height_shift_range=[-Shift_Range,Shift_Range],brightness_range=[1-Brightness_Range,1+Brightness_Range])
train_iterator = datagen.flow(X_train, y_train, batch_size=Batch_Size)
validation_iterator = datagen.flow(X_test, y_test, batch_size=Batch_Size)
history = model.fit_generator(train_iterator, validation_data = validation_iterator, epochs = Epoch_Anz)
###Output
WARNING:tensorflow:sample_weight modes were coerced from
...
to
['...']
WARNING:tensorflow:sample_weight modes were coerced from
...
to
['...']
Train for 557 steps, validate for 62 steps
Epoch 1/80
557/557 [==============================] - 14s 26ms/step - loss: 0.0505 - accuracy: 0.9416 - val_loss: 0.0089 - val_accuracy: 0.9596
Epoch 2/80
557/557 [==============================] - 13s 24ms/step - loss: 0.0097 - accuracy: 0.9733 - val_loss: 0.0089 - val_accuracy: 0.9677
Epoch 3/80
557/557 [==============================] - 15s 26ms/step - loss: 0.0072 - accuracy: 0.9746 - val_loss: 0.0098 - val_accuracy: 0.9737
Epoch 4/80
557/557 [==============================] - 15s 27ms/step - loss: 0.0049 - accuracy: 0.9769 - val_loss: 0.0068 - val_accuracy: 0.9899
Epoch 5/80
557/557 [==============================] - 13s 24ms/step - loss: 0.0043 - accuracy: 0.9802 - val_loss: 0.0038 - val_accuracy: 0.9859
Epoch 6/80
557/557 [==============================] - 13s 22ms/step - loss: 0.0036 - accuracy: 0.9775 - val_loss: 0.0031 - val_accuracy: 0.9758
Epoch 7/80
557/557 [==============================] - 13s 23ms/step - loss: 0.0033 - accuracy: 0.9805 - val_loss: 0.0037 - val_accuracy: 0.9778
Epoch 8/80
557/557 [==============================] - 13s 24ms/step - loss: 0.0029 - accuracy: 0.9823 - val_loss: 0.0029 - val_accuracy: 0.9879
Epoch 9/80
557/557 [==============================] - 15s 27ms/step - loss: 0.0025 - accuracy: 0.9841 - val_loss: 0.0029 - val_accuracy: 0.9818
Epoch 10/80
557/557 [==============================] - 13s 23ms/step - loss: 0.0025 - accuracy: 0.9816 - val_loss: 0.0023 - val_accuracy: 0.9859
Epoch 11/80
557/557 [==============================] - 13s 23ms/step - loss: 0.0023 - accuracy: 0.9856 - val_loss: 0.0025 - val_accuracy: 0.9879
Epoch 12/80
557/557 [==============================] - 14s 25ms/step - loss: 0.0021 - accuracy: 0.9858 - val_loss: 0.0022 - val_accuracy: 0.9859
Epoch 13/80
557/557 [==============================] - 18s 32ms/step - loss: 0.0020 - accuracy: 0.9847 - val_loss: 0.0021 - val_accuracy: 0.9899
Epoch 14/80
557/557 [==============================] - 15s 26ms/step - loss: 0.0020 - accuracy: 0.9852 - val_loss: 0.0024 - val_accuracy: 0.9838
Epoch 15/80
557/557 [==============================] - 13s 23ms/step - loss: 0.0018 - accuracy: 0.9867 - val_loss: 0.0024 - val_accuracy: 0.9960
Epoch 16/80
557/557 [==============================] - 13s 24ms/step - loss: 0.0017 - accuracy: 0.9843 - val_loss: 0.0020 - val_accuracy: 0.9919
Epoch 17/80
557/557 [==============================] - 13s 23ms/step - loss: 0.0017 - accuracy: 0.9847 - val_loss: 0.0027 - val_accuracy: 0.9919
Epoch 18/80
557/557 [==============================] - 13s 23ms/step - loss: 0.0016 - accuracy: 0.9854 - val_loss: 0.0027 - val_accuracy: 0.9939
Epoch 19/80
557/557 [==============================] - 13s 24ms/step - loss: 0.0016 - accuracy: 0.9861 - val_loss: 0.0017 - val_accuracy: 0.9879
Epoch 20/80
557/557 [==============================] - 14s 25ms/step - loss: 0.0015 - accuracy: 0.9872 - val_loss: 0.0023 - val_accuracy: 0.9697
Epoch 21/80
557/557 [==============================] - 14s 24ms/step - loss: 0.0014 - accuracy: 0.9865 - val_loss: 0.0015 - val_accuracy: 0.9838
Epoch 22/80
557/557 [==============================] - 13s 24ms/step - loss: 0.0014 - accuracy: 0.9858 - val_loss: 0.0015 - val_accuracy: 0.9919
Epoch 23/80
557/557 [==============================] - 13s 23ms/step - loss: 0.0013 - accuracy: 0.9861 - val_loss: 0.0014 - val_accuracy: 0.9838
Epoch 24/80
557/557 [==============================] - 13s 22ms/step - loss: 0.0013 - accuracy: 0.9854 - val_loss: 0.0016 - val_accuracy: 0.9919
Epoch 25/80
557/557 [==============================] - 12s 22ms/step - loss: 0.0013 - accuracy: 0.9845 - val_loss: 0.0013 - val_accuracy: 0.9939
Epoch 26/80
557/557 [==============================] - 13s 23ms/step - loss: 0.0013 - accuracy: 0.9870 - val_loss: 0.0017 - val_accuracy: 0.9939
Epoch 27/80
557/557 [==============================] - 13s 23ms/step - loss: 0.0012 - accuracy: 0.9843 - val_loss: 0.0014 - val_accuracy: 0.9899
Epoch 28/80
557/557 [==============================] - 12s 22ms/step - loss: 0.0013 - accuracy: 0.9852 - val_loss: 0.0012 - val_accuracy: 0.9778
Epoch 29/80
557/557 [==============================] - 12s 22ms/step - loss: 0.0012 - accuracy: 0.9858 - val_loss: 0.0013 - val_accuracy: 0.9879
Epoch 30/80
557/557 [==============================] - 12s 21ms/step - loss: 0.0012 - accuracy: 0.9858 - val_loss: 0.0016 - val_accuracy: 0.9919
Epoch 31/80
557/557 [==============================] - 13s 23ms/step - loss: 0.0011 - accuracy: 0.9881 - val_loss: 0.0013 - val_accuracy: 0.9919
Epoch 32/80
557/557 [==============================] - 13s 23ms/step - loss: 0.0011 - accuracy: 0.9870 - val_loss: 0.0020 - val_accuracy: 0.9919
Epoch 33/80
557/557 [==============================] - 13s 23ms/step - loss: 0.0011 - accuracy: 0.9850 - val_loss: 0.0019 - val_accuracy: 0.9919
Epoch 34/80
557/557 [==============================] - 14s 24ms/step - loss: 0.0011 - accuracy: 0.9874 - val_loss: 0.0012 - val_accuracy: 0.9859
Epoch 35/80
557/557 [==============================] - 14s 24ms/step - loss: 0.0011 - accuracy: 0.9854 - val_loss: 0.0013 - val_accuracy: 0.9879
Epoch 36/80
557/557 [==============================] - 13s 24ms/step - loss: 0.0011 - accuracy: 0.9890 - val_loss: 0.0012 - val_accuracy: 0.9899
Epoch 37/80
557/557 [==============================] - 14s 25ms/step - loss: 0.0011 - accuracy: 0.9885 - val_loss: 0.0011 - val_accuracy: 0.9939
Epoch 38/80
557/557 [==============================] - 13s 24ms/step - loss: 0.0011 - accuracy: 0.9854 - val_loss: 9.5592e-04 - val_accuracy: 0.9960
Epoch 39/80
557/557 [==============================] - 13s 24ms/step - loss: 9.9714e-04 - accuracy: 0.9879 - val_loss: 0.0011 - val_accuracy: 0.9939
Epoch 40/80
557/557 [==============================] - 14s 25ms/step - loss: 0.0010 - accuracy: 0.9894 - val_loss: 0.0013 - val_accuracy: 0.9899
Epoch 41/80
557/557 [==============================] - 14s 26ms/step - loss: 0.0010 - accuracy: 0.9874 - val_loss: 0.0017 - val_accuracy: 0.9980
Epoch 42/80
557/557 [==============================] - 16s 28ms/step - loss: 0.0010 - accuracy: 0.9883 - val_loss: 0.0012 - val_accuracy: 0.9879
Epoch 43/80
557/557 [==============================] - 17s 30ms/step - loss: 9.9320e-04 - accuracy: 0.9874 - val_loss: 0.0016 - val_accuracy: 0.9838
Epoch 44/80
557/557 [==============================] - 14s 26ms/step - loss: 0.0010 - accuracy: 0.9883 - val_loss: 0.0011 - val_accuracy: 0.9939
Epoch 45/80
557/557 [==============================] - 14s 25ms/step - loss: 9.6059e-04 - accuracy: 0.9865 - val_loss: 9.4440e-04 - val_accuracy: 0.9939683e-04 - accuracy:
Epoch 46/80
557/557 [==============================] - 14s 26ms/step - loss: 9.4878e-04 - accuracy: 0.9890 - val_loss: 0.0013 - val_accuracy: 0.9838
Epoch 47/80
557/557 [==============================] - 14s 25ms/step - loss: 9.3766e-04 - accuracy: 0.9865 - val_loss: 0.0011 - val_accuracy: 0.9838
Epoch 48/80
557/557 [==============================] - 14s 26ms/step - loss: 9.0485e-04 - accuracy: 0.9876 - val_loss: 0.0020 - val_accuracy: 0.9859
Epoch 49/80
557/557 [==============================] - 15s 26ms/step - loss: 9.1148e-04 - accuracy: 0.9874 - val_loss: 0.0010 - val_accuracy: 0.9939
Epoch 50/80
557/557 [==============================] - 14s 25ms/step - loss: 8.8500e-04 - accuracy: 0.9863 - val_loss: 9.7739e-04 - val_accuracy: 0.9939
Epoch 51/80
557/557 [==============================] - 15s 26ms/step - loss: 9.0149e-04 - accuracy: 0.9874 - val_loss: 0.0010 - val_accuracy: 0.9939
Epoch 52/80
557/557 [==============================] - 14s 25ms/step - loss: 8.4779e-04 - accuracy: 0.9892 - val_loss: 0.0012 - val_accuracy: 0.9899
Epoch 53/80
557/557 [==============================] - 14s 25ms/step - loss: 8.7150e-04 - accuracy: 0.9881 - val_loss: 0.0012 - val_accuracy: 0.9980
Epoch 54/80
557/557 [==============================] - 14s 24ms/step - loss: 8.4016e-04 - accuracy: 0.9863 - val_loss: 0.0012 - val_accuracy: 0.9980
Epoch 55/80
###Markdown
Overall learning results (Step 1 & Step 2)
###Code
loss_ges = np.append(loss_ges, history.history['loss'])
val_loss_ges = np.append(val_loss_ges, history.history['val_loss'])
plt.semilogy(loss_ges)
plt.semilogy(val_loss_ges)
plt.title('model loss')
plt.ylabel('loss')
plt.xlabel('epoch')
plt.legend(['train','eval'], loc='upper left')
plt.show()
###Output
_____no_output_____
###Markdown
Check the model by hand* The following code uses the trained model to check the deviation for each picture.* The evaluation takes the periodic character of the results into account (dev1 ... dev2).* Images that have a bigger deviation than the parameter "deviation_max_list" are printed in a list so the picture and its labeling can be checked
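###Markdown
Editorial addition: a minimal sketch of why the wrap-around correction in the cell below matters on a cyclic 0..1 scale; the values are hypothetical.
###Code
target, out = 0.02, 0.98        # a reading right at the rollover point
dev = target - out              # naive deviation: -0.96
if abs(dev + 1) < abs(dev):
    dev = dev + 1
elif abs(dev - 1) < abs(dev):
    dev = dev - 1
print(dev)                      # 0.04 -- the true deviation across the rollover
###Output
_____no_output_____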
###Code
Input_dir='data_resize_all'
#Input_dir='test_result'
files = glob.glob(Input_dir + '/*.*')
res = []
i = 0
deviation_max_list = 0.03
for aktfile in files:
base = os.path.basename(aktfile)
target = (float(base[0:3])) / 10
target_sin = math.sin(target * math.pi * 2)
target_cos = math.cos(target * math.pi * 2)
test_image = Image.open(aktfile)
test_image = np.array(test_image, dtype="float32")
img = np.reshape(test_image,[1,32,32,3])
classes = model.predict(img)
out_sin = classes[0][0]
out_cos = classes[0][1]
out_target = (np.arctan2(out_sin, out_cos)/(2*math.pi)) % 1
dev_sin = target_sin - out_sin
dev_cos = target_cos - out_cos
dev_target = target - out_target
if abs(dev_target + 1) < abs(dev_target):
out_target = out_target - 1
dev_target = target - out_target
else:
if abs(dev_target - 1) < abs(dev_target):
out_target = out_target + 1
dev_target = target - out_target
res.append(np.array([target, out_target, dev_target, out_sin, out_cos, i]))
if abs(dev_target) > deviation_max_list:
print(aktfile + " " + str(target) + " " + str(out_target) + " " + str(dev_target))
i+=1
res = np.asarray(res)
res_step_1 = res
###Output
data_resize_all\0.0_Lfd-0045_zeiger4_2020-04-29_12-43-02.jpg 0.0 -0.03851659169571775 0.03851659169571775
data_resize_all\0.1_Lfd-0054_zeiger2_2019-06-06T115009.jpg 0.01 -0.022695895669021082 0.032695895669021084
data_resize_all\0.1_Lfd-0055_zeiger2_2019-06-06T120009.jpg 0.01 -0.02146491059629485 0.03146491059629485
data_resize_all\0.2_Lfd-0094_zeiger2_2019-11-19_02-07-03.jpg 0.02 0.05053315931752626 -0.030533159317526263
data_resize_all\0.2_Lfd-0095_zeiger2_2019-11-19_10-57-03.jpg 0.02 0.05152062917359393 -0.031520629173593925
data_resize_all\0.2_Lfd-0121_zeiger4_2020-04-29_14-32-02.jpg 0.02 -0.012705196543143615 0.03270519654314362
data_resize_all\0.3_Lfd-0128_zeiger2_2019-06-02T110009.jpg 0.03 -0.0006796862352890232 0.030679686235289022
data_resize_all\0.3_Lfd-0144_zeiger4_2020-04-29_13-50-02.jpg 0.03 -0.010006096696428268 0.04000609669642827
data_resize_all\1.6_Lfd-0650_zeiger1_2020-06-15_17-13-42.jpg 0.16 0.12796782502292475 0.03203217497707525
data_resize_all\1.8_Lfd-0812_zeiger4_2020-04-29_13-08-01.jpg 0.18 0.21033093230361904 -0.03033093230361905
data_resize_all\1.8_Lfd-0813_zeiger4_2020-04-29_14-01-02.jpg 0.18 0.21606571091274332 -0.036065710912743326
data_resize_all\2.2_Lfd-1018_zeiger1_2019-06-01T185013.jpg 0.22000000000000003 0.2503391637946703 -0.030339163794670276
data_resize_all\2.2_Lfd-1019_zeiger1_2019-06-01T190020.jpg 0.22000000000000003 0.25186639409308514 -0.031866394093085115
data_resize_all\2.2_Lfd-1021_zeiger1_2019-06-01T192013.jpg 0.22000000000000003 0.25254595997000606 -0.03254595997000603
data_resize_all\2.2_Lfd-1025_zeiger1_2019-06-01T200012.jpg 0.22000000000000003 0.2569866454043852 -0.03698664540438518
data_resize_all\2.6_Lfd-1257_zeiger4_2019-11-19_18-42-04.jpg 0.26 0.29376321083514034 -0.033763210835140334
data_resize_all\3.3_Lfd-1615_zeiger2_2020-04-29_12-41-02.jpg 0.32999999999999996 0.2995216484606396 0.03047835153936035
data_resize_all\3.5_Lfd-1745_zeiger4_2020-04-29_11-27-02.jpg 0.35 0.3160977207208574 0.03390227927914258
data_resize_all\3.7_Lfd-1788_zeiger4_2020-04-29_14-19-02.jpg 0.37 0.33874547067101035 0.031254529328989644
data_resize_all\4.8_Lfd-2305_zeiger1_2019-06-02T091009.jpg 0.48 0.44884546390693864 0.031154536093061347
data_resize_all\5.1_Lfd-2486_zeiger3_2019-06-06T184009.jpg 0.51 0.5445132775445902 -0.034513277544590215
data_resize_all\5.1_Lfd-2488_zeiger3_2019-06-06T190009.jpg 0.51 0.545346295004864 -0.03534629500486397
data_resize_all\5.2_Lfd-2511_zeiger2_2019-06-02T180009.jpg 0.52 0.5637697816535887 -0.04376978165358869
data_resize_all\5.2_Lfd-2529_zeiger3_2019-06-05T034009.jpg 0.52 0.5547346556375143 -0.03473465563751432
data_resize_all\5.2_Lfd-2531_zeiger3_2019-06-05T040009.jpg 0.52 0.5524997797955907 -0.03249977979559071
data_resize_all\5.2_Lfd-2534_zeiger3_2019-06-05T174009.jpg 0.52 0.5518000648606183 -0.03180006486061826
data_resize_all\5.3_Lfd-2584_zeiger2_2019-06-03T135009.jpg 0.53 0.5622811419239844 -0.03228114192398435
data_resize_all\5.3_Lfd-2585_zeiger2_2019-06-03T140009.jpg 0.53 0.5636359858368473 -0.03363598583684724
data_resize_all\5.3_Lfd-2586_zeiger2_2019-06-03T141009.jpg 0.53 0.5608246420289746 -0.030824642028974614
data_resize_all\5.3_Lfd-2613_zeiger4_2019-11-19_06-22-03.jpg 0.53 0.5642870925922264 -0.0342870925922264
data_resize_all\5.3_Lfd-2614_zeiger4_2019-11-19_09-27-03.jpg 0.53 0.565669029588797 -0.03566902958879692
data_resize_all\5.4_Lfd-2624_zeiger3_2019-06-03T113009.jpg 0.54 0.5710441987381201 -0.031044198738120032
data_resize_all\5.5_Lfd-2644_zeiger4_2019-11-19_08-22-03.jpg 0.55 0.5837266457709813 -0.03372664577098128
data_resize_all\5.6_Lfd-2660_zeiger2_2019-06-03T153009.jpg 0.5599999999999999 0.5934619420679728 -0.033461942067972816
data_resize_all\6.0_Lfd-2934_zeiger2_2019-11-19_06-22-03.jpg 0.6 0.5676578668386185 0.03234213316138146
data_resize_all\6.2_Lfd-2997_zeiger1_2019-09-14_20-20-13.jpg 0.62 0.6522807232334638 -0.03228072323346376
data_resize_all\6.8_Lfd-3315_zeiger1_2019-11-19_02-27-03.jpg 0.6799999999999999 0.6458920678477971 0.03410793215220287
data_resize_all\7.9_Lfd-3932_zeiger4_2019-06-02T105009.jpg 0.79 0.822732628457831 -0.03273262845783098
data_resize_all\7.9_Lfd-3933_zeiger4_2019-06-04T180009.jpg 0.79 0.8264618927719103 -0.036461892771910254
data_resize_all\8.4_Lfd-4117_zeiger4_2019-06-05T044009.jpg 0.8400000000000001 0.8851518015486866 -0.04515180154868648
data_resize_all\8.5_Lfd-4150_zeiger4_2019-06-02T102009.jpg 0.85 0.8915089621731805 -0.041508962173180564
data_resize_all\8.5_Lfd-4151_zeiger4_2019-06-02T191009.jpg 0.85 0.8853406467928635 -0.03534064679286353
data_resize_all\8.5_Lfd-4152_zeiger4_2019-06-02T192009.jpg 0.85 0.8839218931793918 -0.03392189317939187
data_resize_all\8.5_Lfd-4153_zeiger4_2019-06-02T193009.jpg 0.85 0.8887473933688843 -0.03874739336888433
data_resize_all\8.5_Lfd-4154_zeiger4_2019-06-02T194008.jpg 0.85 0.8865641518598113 -0.03656415185981132
data_resize_all\8.5_Lfd-4155_zeiger4_2019-06-02T195009.jpg 0.85 0.8871898066741584 -0.03718980667415839
data_resize_all\8.5_Lfd-4156_zeiger4_2019-06-02T200009.jpg 0.85 0.8857354886451334 -0.03573548864513343
data_resize_all\8.5_Lfd-4162_zeiger4_2019-06-03T152009.jpg 0.85 0.8802426884601044 -0.030242688460104472
data_resize_all\8.5_Lfd-4172_zeiger4_2019-11-19_15-47-04.jpg 0.85 0.8854635332796806 -0.03546353327968066
data_resize_all\8.5_Lfd-4173_zeiger4_2019-11-19_15-52-04.jpg 0.85 0.893254122937988 -0.04325412293798803
data_resize_all\8.5_Lfd-4174_zeiger4_2019-11-19_15-57-04.jpg 0.85 0.8895945265534428 -0.039594526553442866
data_resize_all\8.5_Lfd-4175_zeiger4_2019-11-19_16-02-04.jpg 0.85 0.8962765385639617 -0.046276538563961744
data_resize_all\8.5_Lfd-4176_zeiger4_2019-11-19_16-07-04.jpg 0.85 0.8962828280298224 -0.04628282802982242
data_resize_all\8.5_Lfd-4177_zeiger4_2019-11-19_16-12-04.jpg 0.85 0.8887850163275625 -0.03878501632756248
data_resize_all\8.5_Lfd-4178_zeiger4_2019-11-19_16-17-04.jpg 0.85 0.8875028759838649 -0.037502875983864925
data_resize_all\8.5_Lfd-4179_zeiger4_2019-11-19_16-22-04.jpg 0.85 0.8817189674445858 -0.0317189674445858
data_resize_all\8.5_Lfd-4180_zeiger4_2019-11-19_16-27-04.jpg 0.85 0.891351070966867 -0.041351070966867076
data_resize_all\8.5_Lfd-4181_zeiger4_2019-11-19_16-32-04.jpg 0.85 0.8888170802711661 -0.03881708027116615
data_resize_all\8.5_Lfd-4182_zeiger4_2019-11-19_16-37-04.jpg 0.85 0.8937648029013064 -0.043764802901306465
data_resize_all\8.5_Lfd-4183_zeiger4_2019-11-19_16-42-04.jpg 0.85 0.8873981748757129 -0.03739817487571295
data_resize_all\8.5_Lfd-4186_zeiger4_2019-11-19_16-57-04.jpg 0.85 0.8924669151770225 -0.0424669151770225
data_resize_all\8.5_Lfd-4187_zeiger4_2019-11-19_17-07-04.jpg 0.85 0.8938480553181898 -0.043848055318189805
data_resize_all\8.5_Lfd-4188_zeiger4_2019-11-19_17-17-04.jpg 0.85 0.8919575727924165 -0.04195757279241652
data_resize_all\8.5_Lfd-4189_zeiger4_2019-11-19_17-22-04.jpg 0.85 0.8866375858797628 -0.036637585879762846
data_resize_all\8.6_Lfd-4201_zeiger4_2019-11-19_06-37-03.jpg 0.86 0.9031693757468616 -0.04316937574686164
data_resize_all\8.6_Lfd-4202_zeiger4_2019-11-19_06-42-03.jpg 0.86 0.8946030283808224 -0.03460302838082241
data_resize_all\8.6_Lfd-4203_zeiger4_2019-11-19_06-47-03.jpg 0.86 0.8960496055287881 -0.03604960552878811
data_resize_all\8.6_Lfd-4205_zeiger4_2019-11-19_17-12-04.jpg 0.86 0.8917681868249302 -0.03176818682493021
data_resize_all\9.5_Lfd-4675_zeiger4_2020-04-29_13-10-02.jpg 0.95 0.9107424523392256 0.03925754766077438
data_resize_all\9.8_Lfd-4808_zeiger2_2019-11-19_00-52-03.jpg 0.9800000000000001 1.013725567515307 -0.033725567515306865
data_resize_all\9.9_Lfd-4885_zeiger4_2020-04-29_11-52-02.jpg 0.99 0.9598438142792403 0.03015618572075973
data_resize_all\9.9_Lfd-4887_zeiger4_2020-04-29_14-40-02.jpg 0.99 0.9578679449024678 0.03213205509753214
###Markdown
Results
###Code
plt.plot(res[:,3])
plt.plot(res[:,4])
plt.title('Result')
plt.ylabel('value')
plt.xlabel('#Picture')
plt.legend(['sin', 'cos'], loc='lower left')
plt.show()
plt.plot(res[:,0])
plt.plot(res[:,1])
plt.title('Result')
plt.ylabel('Counter Value')
plt.xlabel('#Picture')
plt.legend(['Orginal', 'Prediction'], loc='upper left')
plt.show()
###Output
_____no_output_____
###Markdown
Deviation from Expected Value
###Code
plt.plot(res[:,2])
plt.title('Deviation')
plt.ylabel('Deviation from expected value')
plt.xlabel('#Picture')
plt.legend(['Deviation'], loc='upper left')
#plt.ylim(-0.3, 0.3)
plt.show()
statistic = np.array([np.mean(res[:,2]), np.std(res[:,2]), np.min(res[:,2]), np.max(res[:,2])])
print(statistic)
###Output
_____no_output_____
###Markdown
Save the model* Save the model to a file: the Keras "h5" format is shown commented out, and the model is exported in the TFLite format
###Code
# model.save("CNN_Analog-Readout_Version-6.2.0.h5")
converter = tf.lite.TFLiteConverter.from_keras_model(model)
tflite_model = converter.convert()
open("CNN_Analog-Readout_Version-6.2.0.tflite", "wb").write(tflite_model)
###Output
_____no_output_____ |
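###Markdown
Editorial addition: a hedged sketch of how the exported TFLite model could be loaded for a single prediction. The file name matches the export above; the zero image is only a shape placeholder, and the standard ``tf.lite.Interpreter`` API is assumed.
###Code
interpreter = tf.lite.Interpreter(model_path="CNN_Analog-Readout_Version-6.2.0.tflite")
interpreter.allocate_tensors()
input_details = interpreter.get_input_details()
output_details = interpreter.get_output_details()
sample = np.zeros((1, 32, 32, 3), dtype=np.float32)   # placeholder image
interpreter.set_tensor(input_details[0]['index'], sample)
interpreter.invoke()
print(interpreter.get_tensor(output_details[0]['index']))   # predicted [sin, cos]
###Output
_____no_output_____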
AI_Class/004/multivariate_regression1.ipynb | ###Markdown
Multiple regression analysisMultiple regression is used when several independent variables influence the dependent variable; that is, when there are two or more independent variables X, the analysis is called multiple regression.It can be expressed as y = b + a1X1 + a2X2 + a3X3 + ... + anXn. Import library
###Code
import pandas as pd
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns
from sklearn.model_selection import train_test_split
from sklearn.linear_model import LinearRegression
###Output
_____no_output_____
###Markdown
Load data
###Code
df = pd.read_csv('../002/auto-mpg.csv', header=None)
###Output
_____no_output_____
###Markdown
EDA (Exploratory Data Analysis)
###Code
df.columns = ['mpg','cylinders','displacement','horsepower','weight',
'acceleration','model year','origin','name']
###Output
_____no_output_____
###Markdown
Check the data Check the data types Check for missing values Check the summary statistics Handle the anomalous "?" values in the horsepower column (a hedged sketch of these steps is added in the next cell) Modeling
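###Code
# Editorial sketch of the checks listed above (assumption: the '?' entries in
# horsepower are treated as missing and the column is then cast to float)
df.info()                      # data types
print(df.isnull().sum())       # missing values
print(df.describe())           # summary statistics
df['horsepower'] = df['horsepower'].replace('?', np.nan)
df = df.dropna(subset=['horsepower'])
df['horsepower'] = df['horsepower'].astype('float')
###Output
_____no_output_____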
###Code
X = df[['weight', 'horsepower', 'cylinders']]  # independent variables X
y = df['mpg']  # dependent variable y
###Output
_____no_output_____ |
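###Markdown
Editorial addition: a minimal sketch of fitting and inspecting the multiple-regression model with the imports already loaded above. The 7:3 split, the random_state, and the assumption that horsepower has been cleaned to a numeric column (see the earlier sketch) are editorial choices.
###Code
X_train, X_test, y_train, y_test = train_test_split(X, y, test_size=0.3, random_state=10)
lr = LinearRegression()
lr.fit(X_train, y_train)
print(lr.coef_, lr.intercept_)      # the a1..an coefficients and the intercept b from the formula above
print(lr.score(X_test, y_test))     # R^2 on the held-out set
###Output
_____no_output_____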
ipynb/Germany-Nordrhein-Westfalen-LK-Düren.ipynb | ###Markdown
Germany: LK Düren (Nordrhein-Westfalen)* Homepage of project: https://oscovida.github.io* Plots are explained at http://oscovida.github.io/plots.html* [Execute this Jupyter Notebook using myBinder](https://mybinder.org/v2/gh/oscovida/binder/master?filepath=ipynb/Germany-Nordrhein-Westfalen-LK-Düren.ipynb)
###Code
import datetime
import time
start = datetime.datetime.now()
print(f"Notebook executed on: {start.strftime('%d/%m/%Y %H:%M:%S%Z')} {time.tzname[time.daylight]}")
%config InlineBackend.figure_formats = ['svg']
from oscovida import *
overview(country="Germany", subregion="LK Düren", weeks=5);
overview(country="Germany", subregion="LK Düren");
compare_plot(country="Germany", subregion="LK Düren", dates="2020-03-15:");
# load the data
cases, deaths = germany_get_region(landkreis="LK Düren")
# get population of the region for future normalisation:
inhabitants = population(country="Germany", subregion="LK Düren")
print(f'Population of country="Germany", subregion="LK Düren": {inhabitants} people')
# compose into one table
table = compose_dataframe_summary(cases, deaths)
# show tables with up to 1000 rows
pd.set_option("max_rows", 1000)
# display the table
table
###Output
_____no_output_____
###Markdown
Explore the data in your web browser- If you want to execute this notebook, [click here to use myBinder](https://mybinder.org/v2/gh/oscovida/binder/master?filepath=ipynb/Germany-Nordrhein-Westfalen-LK-Düren.ipynb)- and wait (~1 to 2 minutes)- Then press SHIFT+RETURN to advance code cell to code cell- See http://jupyter.org for more details on how to use Jupyter Notebook Acknowledgements:- Johns Hopkins University provides data for countries- Robert Koch Institute provides data for within Germany- Atlo Team for gathering and providing data from Hungary (https://atlo.team/koronamonitor/)- Open source and scientific computing community for the data tools- Github for hosting repository and html files- Project Jupyter for the Notebook and binder service- The H2020 project Photon and Neutron Open Science Cloud ([PaNOSC](https://www.panosc.eu/))--------------------
###Code
print(f"Download of data from Johns Hopkins university: cases at {fetch_cases_last_execution()} and "
f"deaths at {fetch_deaths_last_execution()}.")
# to force a fresh download of data, run "clear_cache()"
print(f"Notebook execution took: {datetime.datetime.now()-start}")
###Output
_____no_output_____ |
lessons/3_introduction_to_plotting_data.ipynb | ###Markdown
Lesson Three: Introduction to Plotting DataCongrats, you've made it to week three! This week will be awesome for those of you who are more artistically inclined, as we will be exploring different ways to visualise data.  📲 Section One: Importing PackagesOnce again, we'll be importing pandas. This time around, we'll also import **matplotlib** and its **pyplot** submodule so that we can create some stunning plots today.
###Code
import matplotlib.pyplot as plt
import pandas as pd
florida_data = pd.read_csv('https://raw.githubusercontent.com/Sci-Teens/biology-program/main/data/florida_covid_data_the_atlantic.csv')
georgia_data = pd.read_csv('https://raw.githubusercontent.com/Sci-Teens/biology-program/main/data/georgia_covid_data_healthcare.csv')
# TODO: Examine the head of the Georgia dataset
# TODO: Examine the head of the Florida dataset
###Output
_____no_output_____
###Markdown
📊 Section Two: Plotting Quantitative DataFor starters, we'll start with some of the most common plots for some of the most common data types: quantitative data. This data deals with independent and dependent variables; we recommend you check out the video below on independent and dependent variables to learn more about them.
###Code
%%html
<iframe width="560" height="315" src="https://www.youtube.com/embed/l0jTMDtX4WY" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>
###Output
_____no_output_____
###Markdown
Line Chart A **line chart** plots the relationship between two variables as a collection of lines connecting points. Line charts are very useful when dealing with data collected over a time period, and we want to view how that data changes over time
###Code
plt.plot(florida_data['date'], florida_data['death'])
plt.show()
###Output
_____no_output_____
###Markdown
HistogramA **histogram** tells us how much data falls into a certain range of numbers. Say we wanted to examine the frequency of all names. Think of a histogram as a collection of bar graphs whose heights are determined by how many values fall into certain **bins**. We can use a histogram instead of a line chart this time to view how the data is **distributed**.
###Code
plt.hist(florida_data['deathIncrease'])
plt.show()
###Output
_____no_output_____
###Markdown
Plotting Categorical DataThough you may not see this type of data as much as quantitative data in scientific datasets, it is nonetheless equally important to understand some of the best ways to visualize categorical data. As we mentioned for quantitative data, feel free to check out the previous notebook if you need a refresher for what categorical data is. Bar ChartA bar chart tells us how much of a categorical variable makes up a certain value. Let's try plotting how many COVID cases in Georgia were recorded for each race. Notice how we select the data below.
###Code
georgia_races = georgia_data.groupby('race').sum()
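# group by race and sum the numeric columns, so georgia_races['cases'] holds the total cases per race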
plt.bar(georgia_races.index, georgia_races['cases'])
plt.show()
###Output
_____no_output_____
###Markdown
Pie ChartA **pie chart** is useful for showing us what percentage of a total that a categorical variable makes up. For our pie chart, we'll go ahead and show the breakdown of COVID cases by sex for Hispanic/Latinx people in Georgia.
###Code
georgia_hisp = georgia_data[(georgia_data['ethnicity'] == 'Hispanic/ Latino')].groupby('sex').sum()
plt.pie(georgia_hisp['cases'], labels=georgia_hisp.index)
plt.show()
###Output
_____no_output_____
###Markdown
😍 Section Three: Making Breathtaking Plots There's one thing you may have noticed about the plots above: they all look extremely boring. And awful. There's not a whole lot going on, and the plots themselves don't tell us a lot about the data itself. What's being plotted? What does each **axis** represent? Adding LabelsOne of the most important things to do when plotting data is to label your plot. The plots that we've worked with so far today have had two dimensions: the **x-axis** and the **y-axis**.We can add a title using the ``.title()`` method, an x-axis label using the ``.xlabel()`` method, and a y-label using the ``.ylabel()`` method. Here's the before:
###Code
plt.plot(florida_data['date'], florida_data['death'])
plt.show()
###Output
_____no_output_____
###Markdown
And here's the after:
###Code
plt.plot(florida_data['date'], florida_data['death'])
plt.title('Date versus Number of COVID Deaths in Florida')
plt.xlabel('Date')
plt.ylabel('Cumulative Deaths')
plt.show()
###Output
_____no_output_____
###Markdown
One last thing we'll do is to limit the number of labels on each axis. As you can see above, there are way too many overlapping labels to be able to read each date. Using the plot from above, we can achieve this using the xticks() method.
###Code
plt.plot(florida_data['date'], florida_data['death'])
plt.title('Date versus Number of COVID Deaths in Florida')
plt.xlabel('Date')
plt.ylabel('Cumulative Deaths')
locs, labels = plt.xticks()
plt.xticks(locs[0::80], rotation=20)
plt.show()
###Output
_____no_output_____
###Markdown
Adding Plot Styles As you can tell, our plots are pretty, well, boring. They let us explore our data pretty well, but they're just not that visually appealing. To fix this, we can use plot styles. To achieve this, we must use ``plt.style.context()``. My personal favorites are the fivethirtyeight and seaborn styles. These styles are nods to two different organizations. [FiveThirtyEight](https://fivethirtyeight.com) is a website that discusses statistics for nearly every topic, especially politics, economics, and sports. Their unique style for creating plots can be used in python by calling `plt.style.use('fivethirtyeight')`.  [Seaborn](https://seaborn.pydata.org/) is another Python package that is capable of creating advanced data plots. The kind of plots you can create with the Seaborn library are pretty neat, though pretty tricky if you're just starting out. Luckily, we can use their unique and appealing style without having to use the package itself by calling `plt.style.use('seaborn')`.For a complete list of the available styles, be sure to check out [this website](https://matplotlib.org/3.2.1/gallery/style_sheets/style_sheets_reference.html). We'll start by changing our plot to the seaborn style.
###Code
with plt.style.context('seaborn'):
plt.plot(florida_data['date'], florida_data['death'])
plt.show()
###Output
_____no_output_____
###Markdown
Wow, our plots are already looking much better than before! What if we want to use the fivethirtyeight style?
###Code
with plt.style.context('fivethirtyeight'):
plt.plot(florida_data['date'], florida_data['death'])
plt.show()
###Output
_____no_output_____
###Markdown
Last but not least, let's use the ``.figure()`` method to tell matplotlib we want to make our figure bigger. Let's start by making our plot 20 inches by 5 inches.
###Code
with plt.style.context('seaborn'):
plt.figure(figsize=(20, 5))
plt.bar(georgia_races.index, georgia_races['cases'])
plt.title('COVID Cases in Georgia, by Race')
plt.xlabel('Race')
plt.ylabel('Cases')
plt.show()
###Output
_____no_output_____
###Markdown
✏️ PracticeGreat job today, we definitely threw a lot of information at you. That being said, make sure to practice to perfect your Python plotting skills. For today's practice assignment, we'll give you a lot of lee-way into determining which plot you want to create. Just be sure to justify why you're using that plot in particular. For example, if your data is primarily quantitative, and you want to compare two quantitative variables, you could use a histogram. Let's start by importing our data
###Code
plt.style.use('default')
###Output
_____no_output_____
###Markdown
**Question One** Create a line chart, showing the date on the x-axis and the increase in hospitalizations on the y-axis. Use the Florida dataset.
###Code
# TODO: Plot your data
###Output
_____no_output_____
###Markdown
**Question Two** Recreate the plot from above, but this time, label your axes and provide a title. For the date axis, use "Date". For the hospitalizations axis, use "Hospitalizations". For the plot title, use "Increase in Hospitalizations in Florida".
###Code
# TODO: Plot your data
###Output
_____no_output_____
###Markdown
**Question Three** Let's go ahead and create a bar chart, grouping Georgia COVID-19 cases by ethnicity. This problem involves two steps. First group your data into the variable *georgia_ethnicity*. Then, go ahead and create a bar chart of this data. Be sure to label your axes: for the counts axis, use the label "Cases". For the ethnicities axis, use the label "Ethnicities". For the title, use "COVID-19 Cases per Ethnicity in Georgia".
###Code
# TODO: Group the data into the variable "georgia_ethnicity"
# TODO" Plot the number of cases for each ethnicity in the "georgia_ethnicity" group
###Output
_____no_output_____
###Markdown
**Question Four** Finally, let's go ahead and create a pie chart for our data, showing the spread of cases between Females of each race. This problem involves two steps. First, sort and group your data into the variable *georgia_female_race*. Make sure to sum over each group to obtain the total number of cases. Then, go ahead and create a pie chart of this data. Be sure to add a title to your plot: use "Distribution of Female COVID-19 Cases per Race in Georgia". Also, be sure to include the labels for each race.
###Code
# TODO: Group the data into the variable "georgia_female_race"
# TODO" Plot the number of cases for each race in the "georgia_female_race" group
###Output
_____no_output_____
###Markdown
🏅ChallengeTime for the challenge question! This one will require you to do a bit of investigating the matplotlib package for yourself. For this, we're going to plot Florida COVID data on one plot. We'll plot both Florida daily death increases, as well as Florida hospitalization increases. Make sure to plot the deaths in the color black, and the hospitalizations in the color red. Furthermore, we'll go ahead and include a plot legend using ``plt.legend()``. Be sure to pass in the *label* argument when creating each line plot: for the death line plot, use the label "Deaths", and for the hospitalized line plot, use the label "Hospitalized" Finally, make sure to label your axes. Label the date axis with "Date", the count axis with "Count", and add the title "Deaths versus Hospitalizations in Florida"
###Code
# TODO: Create the unique data plot
###Output
_____no_output_____
###Markdown
Okay that's all we have for this week. Please feel free to reach out to us through email or attend our weekly Office Hours for questions or help on the practice problems. Again, we've attached a useful cheat sheet to show you how to perform some common tricks in matplotlib
###Code
%%html
<iframe id="fred" style="border:1px solid #666CCC" title="PDF in an i-Frame" src="https://s3.amazonaws.com/assets.datacamp.com/blog_assets/Python_Matplotlib_Cheat_Sheet.pdf" frameborder="1" scrolling="auto" height="850" width="1100" ></iframe>
###Output
_____no_output_____
###Markdown
Lesson Three: Introduction to Plotting DataCongrats, you've made it to week three! This week will be awesome for those of you who are more artistically inclined, as we will be exploring different ways to visualise data.  Section One: Importing PackagesLet's jump right back in and import the necessary packages. Once again, we'll use Pandas in addition to **Matplotlib**. Matplotlib's pyplot package will allow us to create all the necessary plots for today.
###Code
import matplotlib.pyplot as plt
import pandas as pd
data = pd.read_csv('https://raw.githubusercontent.com/Sci-Teens/ecology-program/main/data/dsny_data.csv')
# TODO: Examine the head of the dataset
###Output
_____no_output_____
###Markdown
Plotting Quantitative DataFor starters, we'll start with some of the most common plots for some of the most common data types: quantitative data. This data deals with independent and dependent variables; we recommend you check out the video below on independent and dependent variables to learn more about them.
###Code
%%html
<iframe width="560" height="315" src="https://www.youtube.com/embed/l0jTMDtX4WY" frameborder="0" allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture" allowfullscreen></iframe>
###Output
_____no_output_____
###Markdown
Line Chart A **line chart** plots the relationship between two variables as a collection of lines connecting points. Line charts are very useful when dealing with data collected over a time period, and we want to view how that data changes over time. Below, we'll plot the mean relative humidity over the first 200 values from the dataset.
###Code
plt.plot(data['time'][:200], data['relative_humidity_mean'][:200])
plt.show()
###Output
_____no_output_____
###Markdown
HistogramA **histogram** tells us how much data falls into a certain range of numbers. Say we wanted to examine the frequency of all names. Think of a histogram as a collection of bar graphs whose heights are determined by how many values fall into certain **bins**. We can use a histogram instead of a line chart this time to view how the data is **distributed**. Let's plot the mean temperatures over the course of the two years.
###Code
plt.hist(data['temperature_mean'])
plt.show()
###Output
_____no_output_____
###Markdown
ScatterplotA **scatterplot** plots the relationship between two variables as a collection of points. This type of plot is very similar to a line plot, and is used for many of the same purposes. It's mostly a matter of opinion, so we'll show you how to go about plotting the CO2 flux storage data over the last 200 samples taken.
###Code
plt.scatter(data['time'][-200:], data['CO2_flux_storage'][-200:])
plt.show()
###Output
_____no_output_____
###Markdown
Making Breathtaking Plots There's one thing you may have noticed about the plots above: they all look extremely boring. And awful. There's not a whole lot going on, and the plots themselves don't tell us a lot about the data itself. What's being plotted? What does each **axis** represent? Adding LabelsOne of the most important things to do when plotting data is to label your plot. The plots that we've worked with so far today have had two dimensions: the **x-axis** and the **y-axis**.We can add a title using the ``.title()`` method, an x-axis label using the ``.xlabel()`` method, and a y-label using the ``.ylabel()`` method. Here's the before:
###Code
plt.plot(data['time'][:200], data['relative_humidity_mean'][:200])
plt.show()
###Output
_____no_output_____
###Markdown
And here's the after:
###Code
plt.plot(data['time'][:200], data['relative_humidity_mean'][:200])
plt.title('Date versus Mean Relative Humidity at the DSNY NEON Site')
plt.xlabel('Date')
plt.ylabel('Relative Humidity (%)')
plt.show()
###Output
_____no_output_____
###Markdown
One last thing we'll do is to limit the number of labels on each axis. As you can see above, there are way too many overlapping labels to be able to read each date. Using the plot from above, we can achieve this using the `xticks()` method.
###Code
plt.plot(data['time'][:200], data['relative_humidity_mean'][:200])
plt.title('Date versus Mean Relative Humidity at the DSNY NEON Site')
plt.xlabel('Date')
plt.ylabel('Relative Humidity (%)')
locs, labels = plt.xticks()
plt.xticks(locs[0::80], rotation=20)
plt.show()
###Output
_____no_output_____
###Markdown
Adding Plot Styles As you can tell, our plots are pretty, well, boring. They let us explore our data pretty well, but they're just not that visually appealing. To fix this, we can use plot styles. To achieve this, we must use ``plt.style.context()``. My personal favorites are the fivethirtyeight and seaborn styles. These styles are nods to two different organizations. [FiveThirtyEight](https://fivethirtyeight.com) is a website that discusses statistics for nearly every topic, especially politics, economics, and sports. Their unique style for creating plots can be used in python by calling `plt.style.use('fivethirtyeight')`. [Seaborn](https://seaborn.pydata.org/) is another Python package that is capable of creating advanced data plots. The kind of plots you can create with the Seaborn library are pretty neat, though pretty tricky if you're just starting out. Luckily, we can use their unique and appealing style without having to use the package itself by calling `plt.style.use('seaborn')`.For a complete list of the available styles, be sure to check out [this website](https://matplotlib.org/3.2.1/gallery/style_sheets/style_sheets_reference.html). We'll start by changing our plot to the seaborn style.
###Code
with plt.style.context('seaborn'):
plt.plot(data['time'][:200], data['relative_humidity_mean'][:200])
plt.show()
###Output
_____no_output_____
###Markdown
Wow, our plots are already looking much better than before! What if we want to use the fivethirtyeight style?
###Code
with plt.style.context('fivethirtyeight'):
plt.plot(data['time'][:200], data['relative_humidity_mean'][:200])
plt.show()
###Output
_____no_output_____
###Markdown
Last but not least, let's use the ``.figure()`` method to tell matplotlib we want to make our figure bigger. Let's start by making our plot 10 inches by 5 inches.
###Code
with plt.style.context('seaborn'):
plt.figure(figsize=(10, 5))
plt.hist(data['temperature_mean'])
plt.title('Histogram of Mean Temperatures at the DSNY NEON Site')
plt.xlabel('Mean Temperature (°Celsius)')
plt.ylabel('Count')
plt.show()
###Output
_____no_output_____
###Markdown
PracticeGreat job today, we definitely threw a lot of information at you. That being said, make sure to practice to perfect your Python plotting skills. **Question One** Create a line chart, showing the date on the x-axis and the last 1000 CO2 flux storage values from the dataset on the y-axis.
###Code
# TODO: Plot your data
###Output
_____no_output_____
###Markdown
**Question Two** Recreate the plot from above, but this time, label your axes and provide a title. For the date axis, use "Date". For the precipitation axis, use "CO2 Flux in Storage". For the plot title, use "CO2 Flux in Storage for the DSNY NEON Site".
###Code
# TODO: Plot your data
###Output
_____no_output_____
###Markdown
**Question Three** Let's go ahead and create a histogram, showing the amount of precipitation at the DSNY site.
###Code
# TODO: Create a histogram of the DSNY precipitation data
###Output
_____no_output_____
###Markdown
Yikes, only one column in our plot? Let's try making more bins using the `bins` argument. Let's try creating 20 bins. Furthermore, let's limit our range to be between one and three [0,3]. *Remember*: use `shift + tab` to see how to achieve this.
###Code
# TODO: Recreate the histogram with 20 bins and the range [0, 3]
###Output
_____no_output_____
###Markdown
🏅ChallengeTime for the challenge question! This one will require you to do a bit of investigating the matplotlib package for yourself. For this, we're going to plot our DSNY data onto one plot. We'll plot the mean, minimum, and maximum temperature values. Furthermore, we'll be plotting this data **only** for the day of July 11, 2020.Make sure to plot the minimums in the color red, maximums in the color purple, and the mean in the color green. Furthermore, we'll go ahead and include a plot legend using ``plt.legend()``. Be sure to pass in the *label* argument when creating each line plot: for the minimum, maximum, and mean line plots, use "Minimum", "Maximum", and "Mean", respectively.Finally, make sure to label your axes. Label the date axis with "Date", the temperature axis with "Temperature (° Celsius)", and add the title "Minimum, Maximum, and Mean Temperatures at the DSNY NEON Site for July 11, 2020"
###Code
# TODO: Divide the dataset such that only dates from June 2020
# to July 2020 are present
# TODO: Create the unique data plot
###Output
_____no_output_____
###Markdown
Introduction to Plotting DataCongrats, you've made it to week three! This week will be awesome for those of you who are more artistically inclined, as we will be exploring different ways to visualise data.  Importing PackagesLet's go ahead and import the necessary packages. As we did last time, we'll go ahead and import NumPy and Pandas again. However, we'll also be importing **Matplotlib**. Matplotlib is a package that allows us to plot data from either NumPy or Pandas. As we'll soon see, many of the plotting techniques within Matplotlib are dead simple. Even better, we can customize our plots to our liking; we can set the color, title, labels, width, and so on for each of our plots! You may notice a new special command ``%matplotlib inline`` below where we import our packages. This is called a **magic command**, and is used to tell Jupyter Notebooks to do certain things. In this case, we're simply telling Jupyter to show our graphs below our code.
###Code
import matplotlib.pyplot as plt
import pandas as pd
import numpy as np
%matplotlib inline
###Output
_____no_output_____
###Markdown
Getting our DataFor this lesson, we'll be working with some non-scientific data: baby names. The data we'll download below includes information on the State, the Sex, the Year, the Name, and the Count of the total number of babies born with that name. Don't just take my word for it though; try checking out the first five and last five values of the dataset below. **NOTE:** Shoutout to the United States Social Security Administration, as well as the University of California, Berkley, for the data!
###Code
data = pd.read_csv('https://raw.githubusercontent.com/Sci-Teens/course-one/main/data/baby_names.csv')
# TODO: Examine the first five values
# TODO: Examine the last five values
###Output
_____no_output_____
###Markdown
Plotting Quantitative DataFor starters, we'll start with some of the most common plots for some of the most common data types: quantitative data. We discussed quantitative data in the last notebook, so be sure to [check it out](https://colab.research.google.com/github/Sci-Teens/course-one/blob/main/lessons/2_introduction_to_data_processing.ipynb) if you need a refresher. The plots that we'll be creating today will have two **axis**, or dimensions in which our data is arranged. The plots below have two axis, the **x-axis** and the **y-axis**. The x-axis refers to the axis that is horizontal on the plot, whereas the y-axis is vertical on the plot. The most important concept with plot axis is where to place each data. In general, **independent** data lies on the x-axis, whereas **dependent** data lies on the y-axis. We recommend you check out [this tutorial](https://www.khanacademy.org/math/cc-sixth-grade-math/cc-6th-equations-and-inequalities/cc-6th-dependent-independent/a/dependent-and-independent-variables-review) or [this video](https://www.youtube.com/watch?v=l0jTMDtX4WY) on independent and dependent variables to learn more about them. Before we get into the bulk of the different types of plotting, we should consider the benefits of visualizing our data in the first place. Often times, as scientists, we will be working the data that a lot of the public finds boring or difficult to interpret. If we want to effectively communicate our results, we'll need to pick which visualization is the best for your data. While we won't go into the specifics of how to decide which type of plot is the best for your data, we do want to introduce you into some of the common ways you'll see data visualized. Line Chart Arguably the most common plot that you'll encounter or use, a **line chart** simply plots the relationship between two variables as a collection of lines connecting points. Line charts are very useful when dealing with data collected over a time period, and we want to view how that data changes over time. For example, say we wanted to view the popularity of the name "Olivia" from 2015 to 2019 in the state of New York. We could do so using the code below. **Note:** Even though the *Name* column contains categorical data, we are instead plotting the *Count*, or total amount of people named Olivia between2015 and 2019. Thus, we are only plotting quantitative data.
###Code
olivias_ny = data.loc[(data['Name'] == 'Olivia') & (data['State'] == 'NY')]
plt.plot(olivias_ny['Year'], olivias_ny['Count'])
plt.show()
###Output
_____no_output_____
###Markdown
HistogramA **histogram** tells us how much data falls into a certain range of numbers. Say we wanted to examine the frequency of all names. Think of a histogram as a collection of bar graphs whose heights are determined by how many values fall into certain **bins**. We can use a histogram instead of a line chart this time to view how the data is **distributed**. If you want to be able to look at how the data is spread out over many values, histograms are definitely the way to go. We'll go more into depth about why histograms are useful in our lesson covering statistical testing, but just know that histograms are especially useful for telling us about the general shape of our data.
###Code
plt.hist(data['Count'])
plt.show()
###Output
_____no_output_____
###Markdown
Woah, it looks like there's only one bar on the graph. However, this isn't the case; It just so happens the more babies are named unpopular names then there are babies that are named popular names. Sound confusing? Let's check out how many babies were named with names in which less than 100 other babies were named.
###Code
unique_names = data[data['Count'] < 100]
plt.hist(unique_names['Count'])
plt.show()
###Output
_____no_output_____
###Markdown
As you can see, there were over 250,000 babies with names that fewer than 10 other babies were also named! We can confirm this by taking the mean of the counts column for our data
###Code
np.mean(data['Count'])
###Output
_____no_output_____
###Markdown
As you can see, on average, there were only roughly 32 other babies with the same name for any given name. Who would've thought? As we previously mentioned, we'll cover next lesson why histograms are so useful for capturing key statistical insights into our data. Plotting Categorical DataThough you may not see this type of data as much as quantitative data in scientific datasets, it is nonetheless equally important to understand some of the best ways to visualize categorical data. As we mentioned for quantitative data, feel free to check out the previous notebook if you need a refresher for what categorical data is. Bar ChartA bar chart tells us how much of a categorical variable makes up a certain value. Say we wanted to plot the top fifteen girl girl names in the state of Wyoming for 2016. We could do so with the code below
###Code
girls_wyoming_2016 = data.loc[(data['Year'] == 2016) & (data['State'] == 'WY') & (data['Sex'] == 'F')]
plt.bar(girls_wyoming_2016['Name'][:15], girls_wyoming_2016['Count'][:15])
plt.show()
###Output
_____no_output_____
###Markdown
Well, it worked (sorta...) The thing is, we can't read any of the names on the x-axis! No worries, we'll show you how to clean this up in the section "Making Plots Dope." Pie ChartA **pie chart** is useful for showing us what percentage of a total that a categorical variable makes up. We'll use the same data as above (girl names in Wyoming for the year of 2016), but this time we'll only use the top four names.
###Code
plt.pie(girls_wyoming_2016['Count'][0:4], labels=girls_wyoming_2016['Name'][0:4])
plt.show()
###Output
_____no_output_____
###Markdown
If we were to use all of the names, we see that the plot quickly becomes unreadable, and doesn't tell us much about how less common names compare to one another since all the wedges look nearly identical
###Code
plt.pie(girls_wyoming_2016['Count'], labels=girls_wyoming_2016['Name'])
plt.show()
###Output
_____no_output_____
###Markdown
Making Plots Awesome There's one thing you may have noticed about the plots above: they all look extremely boring. And awful. There's not a whole lot going on, and the plots themselves don't tell us a lot about the data itself. What's being plotted? What does each **axis** represent? Luckily, we can easily customize these plots to make them look fire. Adding LabelsOne of the most important things to do when plotting data is to label your plot. The plots that we've worked with so far today have had two dimensions: the **x-axis** and the **y-axis**. Going back to the first line plot that we created, we can see that there's no clear definition of what our data represents. To fix this, we'll add axis labels and a title. We can add a title using the ``.title()`` method, an x-axis label using the ``.xlabel()`` method, and a y-label using the ``.ylabel()`` method. Here's the before:
###Code
olivias_ny = data.loc[(data['Name'] == 'Olivia') & (data['State'] == 'NY')]
plt.plot(olivias_ny['Year'], olivias_ny['Count'])
plt.show()
###Output
_____no_output_____
###Markdown
And here's the after:
###Code
olivias_ny = data.loc[(data['Name'] == 'Olivia') & (data['State'] == 'NY')]
plt.plot(olivias_ny['Year'], olivias_ny['Count'])
plt.title('Olivias Born in New York Between 2015 and 2019')
plt.xlabel('Year')
plt.ylabel('Number of Olivias')
plt.show()
###Output
_____no_output_____
###Markdown
Looks much better already. Remember the Wyoming names data we had before? Let's try cleaning that up. For starters, we can go ahead and add a title and axis labels. We also want to rotate the names on the x-axis so that they don't overlap one another. To do this, we can use the ``.xticks()`` method, which allows us to specify how we want our "ticks to appear." in our case, we want to rotate them 90 degrees so that they appear vertical. The code to do that is provided below. Go ahead and set the title, x-axis, and y-axis yourself.
###Code
girls_wyoming_2016 = data.loc[(data['Year'] == 2016) & (data['State'] == 'WY') & (data['Sex'] == 'F')]
plt.bar(girls_wyoming_2016['Name'][:15], girls_wyoming_2016['Count'][:15])
# TODO: Add a title, as well as axis labels
plt.xticks(rotation=90)
plt.show()
###Output
_____no_output_____
###Markdown
Our plot is looking much better, we can actually read the names now! Adding Plot Styles As you can tell, our plots are pretty, well, boring. They let us explore our data pretty well, but they're just not that visually appealing. To fix this, we can use plot styles. To achieve this, we must use ``plt.style.use``. My personal favorites are the fivethirtyeight and seaborn styles. These styles are nods to two different organizations. [FiveThirtyEight](https://fivethirtyeight.com) is a website that discusses statistics for nearly every topic, especially politics, economics, and sports. Their unique style for creating plots can be used in python by calling `plt.style.use('fivethirtyeight')`. [Seaborn](https://seaborn.pydata.org/) is another Python package that is capable of creating advanced data plots. The kind of plots you can create with the Seaborn library are pretty neat, though pretty tricky if you're just starting out. Luckily, we can use their unique and appealing style without having to use the package itself by calling `plt.style.use('seaborn')`.For a complete list of the available styles, be sure to check out [this website](https://matplotlib.org/3.2.1/gallery/style_sheets/style_sheets_reference.html). We'll start by changing our plot to the seaborn style.**Note:** when you call ``plt.style.use()``, it sets all plots created after that code to the same style as well. To stop this from occuring, just call ``plt.style.use('default')`` in a follow-up cell.
###Code
plt.style.use('seaborn')
olivias_ny = data.loc[(data['Name'] == 'Olivia') & (data['State'] == 'NY')]
plt.plot(olivias_ny['Year'], olivias_ny['Count'])
plt.title('Olivias Born in New York Between 2015 and 2019')
plt.xlabel('Year')
plt.ylabel('Number of Olivias')
plt.show()
###Output
_____no_output_____
###Markdown
Wow, our plots are already looking much better than before! What if we want to use the fivethirtyeight style?
###Code
plt.style.use('fivethirtyeight')
olivias_ny = data.loc[(data['Name'] == 'Olivia') & (data['State'] == 'NY')]
plt.plot(olivias_ny['Year'], olivias_ny['Count'])
plt.title('Olivias Born in New York Between 2015 and 2019')
plt.xlabel('Year')
plt.ylabel('Number of Olivias')
plt.show()
###Output
_____no_output_____
###Markdown
How about we try this out for our girls names in Wyoming plot? Let's see if it helps.
###Code
plt.style.use('seaborn')
girls_wyoming_2016 = data.loc[(data['Year'] == 2016) & (data['State'] == 'WY') & (data['Sex'] == 'F')]
plt.bar(girls_wyoming_2016['Name'][:15], girls_wyoming_2016['Count'][:15])
plt.title('Girl Names in Wyoming for 2016')
plt.xlabel('Name')
plt.ylabel('Count')
plt.xticks(rotation=90)
plt.show()
###Output
_____no_output_____
###Markdown
It looks good, though it seems a bit large. Let's use the ``.figure()`` method to tell matplotlib exactly how big we want our figure to be. Let's set our plot to 5 inches by 5 inches. **Note** you must put the ``.figure()`` method at the top of your code, before you plot a chart, in order for it to work.
###Code
plt.figure(figsize=(5,5))
girls_wyoming_2016 = data.loc[(data['Year'] == 2016) & (data['State'] == 'WY') & (data['Sex'] == 'F')]
plt.bar(girls_wyoming_2016['Name'][:15], girls_wyoming_2016['Count'][:15])
plt.title('Girl Names in Wyoming for 2016')
plt.xlabel('Name')
plt.ylabel('Count')
plt.xticks(rotation=90)
plt.show()
###Output
_____no_output_____
###Markdown
Now, data science wouldn't be all that useful if we weren't able to capture interesting insigts into our data. Say we wanted to examine how girl names have changed between 2016 and 2019 in Wyoming. Try plotting the top fifteen girl names in Wyoming in 2019 below, and compare it to the plot above. Try and make it identical in style to the plot above.
###Code
# TODO: Plot the top fifteen girl names in Wyoming
plt.figure(figsize=(5,5))
girls_wyoming_2019 = data.loc[(data['Year'] == 2019) & (data['State'] == 'WY') & (data['Sex'] == 'F')] # Find the girls names from 2019 in Wyoming
plt.bar(girls_wyoming_2019['Name'][:15], girls_wyoming_2019['Count'][:15])
plt.title('Girl Names in Wyoming for 2019')
plt.xlabel('Name')
plt.ylabel('Count')
plt.xticks(rotation=90)
plt.show()
###Output
_____no_output_____
###Markdown
Great work! There are plenty of other things that we can do to style our plots. In the sake of keeping this lesson relatively simple, we'll avoid discussing these topics. If you're interested in seeing how else you can customize your plots, or what other plots you can create, be sure to check out the [matplotlib plot gallery](https://matplotlib.org/gallery/index.html) for inspiration. PracticeGreat job today, we definitely threw a lot of information at you. That being said, make sure to practice to perfect your Python plotting skills. For today's practice assignment, we'll give you a lot of lee-way into determining which plot you want to create. Just be sure to justify why you're using that plot in particular. For example, if your data is primarily quantitative, and you want to compare two quantitative variables, you could use a histogram. Let's start by importing our data
###Code
variable_name = "Ebola Virus Outbreak" #@param ["Wildfires and Bird Migration", "Yearly Carbon Fluctuations", "Ebola Virus Outbreak"]
lessons = {
"Ebola Virus Outbreak": "https://raw.githubusercontent.com/Sci-Teens/course-one/main/data/ebola.csv",
"Wildfires and Bird Migration": "https://raw.githubusercontent.com/Sci-Teens/course-one/main/data/bird_counts_grsm.csv",
"Mosquito Counts": "https://raw.githubusercontent.com/Sci-Teens/course-one/main/data/mosquito.csv"
}
dataset = lessons[variable_name]
plt.style.use('default')
data = pd.read_csv(dataset)
data.head()
###Output
_____no_output_____
###Markdown
**Question One** What columns of your dataset do you plan on plotting. What values from these columns will you keep in particular (do you plan on sorting the columns, or will you plot the entire column?) What plot (from the ones that we learned today) will you use to examine the data? Why do you think this plot is the best plot to use for this data?
###Code
# Write your answer here
###Output
_____no_output_____
###Markdown
**Question Two** Go ahead and plot your data. For now, there's no need to include any axis labels or titles
###Code
# TODO: Plot your data
###Output
_____no_output_____
###Markdown
**Question Three** Go ahead and plot your data again. This time, set the x-axis and y-axis labels for your data, as well as the title for your plot.
###Code
# TODO: Plot your data, but with x-axis and y-axis labels and a title
###Output
_____no_output_____
###Markdown
**Question Four** Go ahead replot your data with all labels and titles, but this time, using a different style
###Code
# TODO: Choose a different plot style for your data
###Output
_____no_output_____
###Markdown
ChallengeTime for the challenge question! This one will require you to do a bit of investigating the matplotlib package for yourself. For this, we're to create one of the plots from the [Matplotlib plot gallery](https://matplotlib.org/gallery/index.html) that we haven't already explored in today's lesson. Choose a plot from the gallery, and explain why you're using it to explore your data. Then, go ahead and plot the data. Be sure to include axis labels, as well as a title. You don't have to use the same data columns that you used for questions one and four!
###Code
# TODO" Use a unique plot for your data
###Output
_____no_output_____
###Markdown
For practice, take a look at the graph below. Can you tell how many variables we are representing? *Hint: look at the size, color, and location of our data points* 
###Code
#Answer: 4 variables. X, Y, Area, Color
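# (Illustrative sketch.) A scatter plot can encode four variables at once: x position,
# y position, marker area and marker color; the numbers below are made up purely for
# illustration.
x_vals = np.random.rand(30)
y_vals = np.random.rand(30)
sizes = np.random.rand(30) * 300 # third variable, shown as marker area
colors = np.random.rand(30) # fourth variable, shown as marker color
plt.scatter(x_vals, y_vals, s=sizes, c=colors)
plt.colorbar(label='Fourth variable')
plt.show()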
###Output
_____no_output_____ |
notebooks/inhour-equation.ipynb | ###Markdown
The inhour equation
###Code
import numpy as np
from matplotlib import pyplot as plt
step = 0.0001
s = np.arange(-5,2,step)
l = 0.9
#beta = np.array([0.1])
#dc_lambda = np.array([1])
beta = np.array([0.1,0.1,0.1,0.1,0.1,0.1])
dc_lambda = np.array([0.8,0.7,0.6,0.5,0.4,0.3])
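# beta: delayed-neutron fractions and dc_lambda: decay constants for the six delayed groups (illustrative values)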
# Checks
assert len(beta)==len(dc_lambda)
# Drop any s values that sit on a pole of the expressions plotted below:
# s = -1/l from the 1/(s*l + 1) factor and s = -lambda_j from each delayed-group term
keep = np.abs(s * l + 1) > step / 2
keep &= np.all(np.abs(s[:, None] + dc_lambda[None, :]) > step / 2, axis=1)
s = s[keep]
def sum_j(beta,dc_lambda,s):
total = 0
for j in range(len(beta)):
total += beta[j]*s/(s+dc_lambda[j])
return total
A = sum_j(beta,dc_lambda,s)
markersize = np.ones_like(s)/3
xlims = (-1.5,0)
ylims = (-2,2)
fig,ax = plt.subplots(figsize=(10,8))
ax.scatter(s,A,markersize)
ax.set_xlim(xlims[0],xlims[1])
ax.set_ylim(ylims[0],ylims[1])
y_asymptote = np.arange(ylims[0],ylims[1],0.01)
for j in range(len(beta)):
ax.plot(-1*np.ones_like(y_asymptote)*dc_lambda[j],y_asymptote,'--',color='0.5',dashes=(10, 5))
ax.plot(np.arange(ylims[0]))
plt.show()
###Output
_____no_output_____
###Markdown
6 groups$$ \rho_0 = \frac{s\ell}{s\ell + 1} + \frac{1}{s\ell + 1}\sum_{j=1}^6 \frac{\beta_j s}{(s+\lambda_j)} $$
###Code
rho_0 =(s*l+sum_j(beta,dc_lambda,s))/((s*l)+1)
xlims = (-2,0.5)
ylims = (-5,5)
fig,ax = plt.subplots(figsize=(10,8))
ax.scatter(s,rho_0,markersize)
ax.set_xlim(xlims[0],xlims[1])
ax.set_ylim(ylims[0],ylims[1])
y_asymptote = np.arange(ylims[0],ylims[1],0.01)
for j in range(len(beta)):
ax.plot(-1*np.ones_like(y_asymptote)*dc_lambda[j],y_asymptote,'--',color='0.5',dashes=(10, 5))
ax.plot(-1*np.ones_like(y_asymptote)/l,y_asymptote,'--',color='0.5',dashes=(10, 5))
ax.plot(np.arange(ylims[0]))
plt.show()
###Output
_____no_output_____ |
Jef_Ntungila_DS_Sprint_Challenge_8_Regression_2.ipynb | ###Markdown
_Lambda School Data Science, Unit 2_ Regression 2 Sprint Challenge: Predict drugstore sales 🏥For your Sprint Challenge, you'll use real-world sales data from a German drugstore chain, from Jan 2, 2013 — July 31, 2015.You are given three dataframes:- `train`: historical sales data for 100 stores- `test`: historical sales data for 100 different stores- `store`: supplemental information about the storesThe train and test set do _not_ have different date ranges. But they _do_ have different store ids. Your task is _not_ to forecast future sales from past sales. **Your task is to predict sales at unknown stores, from sales at known stores.**
###Code
import pandas as pd
train = pd.read_csv('https://drive.google.com/uc?export=download&id=1E9rgiGf1f_WL2S4-V6gD7ZhB8r8Yb_lE')
test = pd.read_csv('https://drive.google.com/uc?export=download&id=1vkaVptn4TTYC9-YPZvbvmfDNHVR8aUml')
store = pd.read_csv('https://drive.google.com/uc?export=download&id=1rZD-V1mWydeytptQfr-NL7dBqre6lZMo')
assert train.shape == (78400, 7)
assert test.shape == (78400, 7)
assert store.shape == (200, 10)
###Output
_____no_output_____
###Markdown
The dataframes have a variety of columns:- **Store** - a unique Id for each store- **DayOfWeek** - integer, 1-6- **Date** - the date, from Jan 2, 2013 — July 31, 2015.- **Sales** - the units of inventory sold on a given date (this is the target you are predicting)- **Customers** - the number of customers on a given date- **Promo** - indicates whether a store is running a promo on that day- **SchoolHoliday** - indicates the closure of public schools- **StoreType** - differentiates between 4 different store models: a, b, c, d- **Assortment** - describes an assortment level: a = basic, b = extra, c = extended- **CompetitionDistance** - distance in meters to the nearest competitor store- **CompetitionOpenSince[Month/Year]** - gives the approximate year and month of the time the nearest competitor was opened- **Promo2** - Promo2 is a continuing and consecutive promotion for some stores: 0 = store is not participating, 1 = store is participating- **Promo2Since[Year/Week]** - describes the year and calendar week when the store started participating in Promo2- **PromoInterval** - describes the consecutive intervals Promo2 is started, naming the months the promotion is started anew. E.g. "Feb,May,Aug,Nov" means each round starts in February, May, August, November of any given year for that store This Sprint Challenge has three parts. To demonstrate mastery on each part, do all the required instructions. To earn a score of "3" for the part, also do the stretch goals. 1. Wrangle relational data, Log-transform the target- Merge the `store` dataframe with the `train` and `test` dataframes. - Arrange the X matrix and y vector for the train and test sets.- Log-transform the target for the train and test set.- Plot the target's distribution for the train set, before and after the transformation. Stretch goals- Engineer 3+ more features. Exploring Data
###Code
store.head() #exploring data
train['Date'] = pd.to_datetime(train['Date'])
test['Date'] = pd.to_datetime(test['Date'])
#splitting the store data in half
from sklearn.model_selection import train_test_split
train_store, test_store = train_test_split(store, test_size = 0.5, random_state=42)
train_store.shape, test_store.shape
###Output
_____no_output_____
###Markdown
Merging store dataframe with the other dataframes
###Code
train = train.merge(train_store, how='left') #merging original train with new data from store
test = test.merge(test_store, how='left') #merging original test data with new data from store
train.shape, test.shape
###Output
_____no_output_____
###Markdown
Arranging x and y matrix
###Code
train.columns.values
train.dtypes
train.isnull().sum()
target = 'Sales' # Sales is the quantity to predict, per the challenge brief
feature = ['DayOfWeek', 'Customers', 'Promo',
'SchoolHoliday', 'StoreType', 'Assortment', 'CompetitionDistance',
'CompetitionOpenSinceMonth', 'CompetitionOpenSinceYear', 'Promo2',
'Promo2SinceWeek', 'Promo2SinceYear', 'PromoInterval']
X_train = train[feature]
y_train = train[target]
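# note: `val` is created by the train/validation split in a later cell, so run that cell before this one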
X_val = val[feature]
y_val = val[target]
X_test = test[feature]
y_test = test[target]
train.describe()
train.describe(exclude = 'number')
###Output
_____no_output_____
###Markdown
Log-transform the target for the train and test set.
###Code
import numpy as np
y_train_log = np.log1p(y_train)
y_test_log = np.log1p(y_test)
y_val_log = np.log1p(y_val)
###Output
_____no_output_____
###Markdown
Plot the target's distribution for the train set, before and after the transformation.
###Code
%matplotlib inline
import seaborn as sns
sns.distplot(y_train);
sns.distplot(y_train_log);
###Output
_____no_output_____
###Markdown
2. Fit and validate your model- **Use Gradient Boosting** or any type of regression model.- **Beat the baseline:** The estimated baseline Root Mean Squared Logarithmic Error is 0.90, if we guessed the mean sales for every prediction. Remember that RMSE with the log-transformed target is equivalent to RMSLE with the original target. Try to get your error below 0.20.- **To validate your model, choose any one of these options:** - Split the train dataframe into train and validation sets. Put all dates for a given store into the same set. Use xgboost `early_stopping_rounds` with the validation set. - Or, use scikit-learn `cross_val_score`. Put all dates for a given store into the same fold. - Or, use scikit-learn `RandomizedSearchCV` for hyperparameter optimization. Put all dates for a given store into the same fold.- **Get the Validation Error** (multiple times if you try multiple iterations) **and Test Error** (one time, at the end). Stretch goal- Optimize 3+ hyperparameters by searching 10+ "candidates" (possible combinations of hyperparameters). To validate your model, choose any one of these options: Split the train dataframe into train and validation sets.
###Code
#splitting train to get validation set
train, val = train_test_split(train, test_size=0.5, random_state=42) #not ideal, but the test size is so big already
train.shape, val.shape, test.shape
###Output
_____no_output_____
###Markdown
Regression model
###Code
!pip install category_encoders
import category_encoders as ce
from sklearn.ensemble import RandomForestClassifier
from sklearn.pipeline import make_pipeline
from sklearn.metrics import mean_squared_error, mean_squared_log_error
def rmse(y_true, y_pred):
return np.sqrt(mean_squared_error(y_true, y_pred))
###Output
_____no_output_____
###Markdown
Get validation error
###Code
from xgboost import XGBRegressor
pipeline = make_pipeline(ce.OrdinalEncoder(), XGBRegressor(n_estimators=1000, n_jobs=-1))
pipeline.fit(X_train, y_train_log)
y_pred_log = pipeline.predict(X_val)
print('Validation Error', rmse(y_val_log, y_pred_log))
###Output
/usr/local/lib/python3.6/dist-packages/xgboost/core.py:587: FutureWarning: Series.base is deprecated and will be removed in a future version
if getattr(data, 'base', None) is not None and \
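###Markdown
A minimal sketch of the final step: compute the test error once, at the end, by reusing the fitted pipeline on the held-out test stores (this assumes the `X_test` and `y_test_log` objects defined in the cells above).
###Code
y_pred_test_log = pipeline.predict(X_test)
print('Test Error', rmse(y_test_log, y_pred_test_log))
###Output
_____no_output_____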
###Markdown
3. Plot model interpretation visualizations- Choose any one of these options: - Permutation Importances plot - Partial Dependency Plot, 1 feature isolation - Partial Dependency Plot, 2 feature interaction Stretch goals- Plot 2+ visualizations.- Use permutation importances for feature selection.
###Code
model = pipeline.named_steps['xgbregressor'] # make_pipeline lower-cases the step name
importances = pd.Series(model.feature_importances_, feature)
importances.sort_values().plot.barh()
!pip install eli5 pdpbox category_encoders
encoder = ce.OrdinalEncoder()
X_train_encoded = encoder.fit_transform(X_train)
X_val_encoded = encoder.transform(X_val)
import eli5
from eli5.sklearn import PermutationImportance
permuter = PermutationImportance(pipeline.named_steps['xgbregressor'], scoring='neg_mean_squared_error',
cv='prefit', n_iter=2, random_state=42)
permuter.fit(X_val_encoded, y_val_log) # score the permutations on the validation set
feature_names = X_train_encoded.columns.tolist()
eli5.show_weights(permuter, top=None, feature_names=feature_names)
###Output
_____no_output_____ |
nbs/solutions/Past exam mock.ipynb | ###Markdown
Mock exam paper - solutions Question 1 (a) **definition of a Normal form game**:An \\(N\\) player normal form game consists of:- A finite set of $N$ players- Strategy spaces for the players: $\{S_1,S_2,S_3,\dots,S_N\}$;- Payoff functions for the players: $u_i:S_1\times S_2\dots\times S_N\to\mathbb{R}$[2] (b) **utilities of strategies**(i) $$\sigma_r A \sigma_c = 1\qquad \sigma_r B \sigma_c = 1$$ [1](ii) $$\sigma_r A \sigma_c = 3/2\qquad \sigma_r B \sigma_c = 5/3$$ [1](iii) $$\sigma_r A \sigma_c = 7/4\qquad \sigma_r B \sigma_c = 5/2$$ [1]Some code to verify:
###Code
import numpy as np
A, B = np.array([[3, 1], [0, 2]]), np.array([[2, 1], [0, 3]])
strategies = [(np.array([1, 0]), np.array([0, 1])),
(np.array([1/2, 1/2]), np.array([1/3, 2/3])),
(np.array([1/4, 3/4]), np.array([0, 1]))]
for sigma_r, sigma_c in strategies:
print(np.dot(sigma_r, np.dot(A, sigma_c)), np.dot(sigma_r, np.dot(B, sigma_c)))
###Output
1 1
1.5 1.6666666666666665
1.75 2.5
###Markdown
(c) **define LH**For a nondegenerate 2 player game $(A, B)\in{\mathbb{R}^{m\times n}_{>0}}^2$ the following algorithm returns all nash equilibria:1. Start at the artificial equilibrium: $(0, 0)$ [1]2. Choose a label to drop. [1]3. Remove this label from the corresponding vertex by traversing an edge of the corresponding polytope to another vertex. [1]4. The new vertex will now have a duplicate label in the other polytope. Remove this label from the vertex of the other polytope and traverse an edge of that polytope to another vertex. [1]5. Repeat step 4 until the pair of vertices is fully labelled. [1] (d) **obtain best response polytopes**We start by scaling $A, B$:$$A \to A + 1 = \begin{pmatrix}4&2\\1&3\end{pmatrix} \qquad B \to B + 1 =\begin{pmatrix}3&2\\1&4\end{pmatrix}$$The row player best response polytope $\mathcal{P}$ is defined by $x\geq0, xB\leq 1$:$$x_1 \geq 0\\x_2 \geq 0\\3x_1+x_2 \leq 1\\2x_1+4x_2 \leq 1$$which corresponds to:$$x_1 \geq 0\\x_2 \geq 0\\x_2 \leq 1 - 3x_1\\x_2 \leq 1/4-1/2x_1$$[1]The vertices (and their corresponding labels) are then given by:- $a=(0, 0)$ with labels: $\{0, 1\}$- $b=(1/3, 0)$ with labels: $\{1, 2\}$- $c=(0, 1/4)$ with labels: $\{0, 3\}$- $d=(3/10, 1/10)$ with labels: $\{2, 3\}$[1]The column player best response polytope $\mathcal{Q}$ is defined by $Ax\leq 1, x\geq0$:$$4x_1+2x_2 \leq 1\\x_1+3x_2 \leq 1\\x_1 \geq 0\\x_2 \geq 0\\$$which corresponds to:$$x_2 \leq 1/2 - 2x_1\\x_2 \leq 1/3-1/3x_1\\x_1 \geq 0\\x_2 \geq 0$$[1]The vertices (and their corresponding labels) are then given by:- $w=(0, 0)$ with labels: $\{2, 3\}$- $x=(1/4, 0)$ with labels: $\{0, 3\}$- $y=(0, 1/3)$ with labels: $\{1, 2\}$- $z=(1/10, 3/10)$ with labels: $\{0, 1\}$[1] (e) Drawing the best response polytope[1] mark for each polytope.
###Code
%matplotlib inline
import matplotlib.pyplot as plt
import numpy as np
import scipy.spatial
V = [np.array([0, 0]), np.array([1 / 3, 0]), np.array([0, 1 / 4]), np.array([3/10, 1/10])]
P = scipy.spatial.ConvexHull(V)
scipy.spatial.convex_hull_plot_2d(P)
plt.title("$\mathcal{P}$")
plt.text(0.001, .1, "Label: 0")
plt.text(0.15, .005, "Label: 1")
plt.text(0.15, .18, "Label: 3")
plt.text(0.26, .05, "Label: 2")
for v, s in zip(V, "abcd"):
plt.text(v[0] + 0.001, v[1] + 0.001, s);
V = [np.array([0, 0]), np.array([1 / 4, 0]), np.array([0, 1 / 3]), np.array([1/10, 3/10])]
Q = scipy.spatial.ConvexHull(V)
scipy.spatial.convex_hull_plot_2d(Q)
plt.title("$\mathcal{Q}$")
plt.text(0.001, .15, "Label: 2")
plt.text(0.10, .005, "Label: 3")
plt.text(0.05, .32, "Label: 1")
plt.text(0.2, .1, "Label: 0")
for v, s in zip(V, "wxyz"):
plt.text(v[0] + 0.001, v[1] + 0.001, s);
###Output
_____no_output_____
###Markdown
(f) Carrying out the LH algorithmUsing the plot we can carry out the Lemke-Howson algorithm:- Dropping label 0: - $(a, w)$ have labels $\{0, 1\}, \{2, 3\}$. Drop 0. - $\to (b, w)$ have labels $\{1, 2\}, \{2, 3\}$. In $\mathcal{Q}$ drop 2. - $\to (b, x)$ have labels $\{1, 2\}, \{0, 3\}$. Fully labeled vertex pair.- Dropping label 1: - $(a, w)$ have labels $\{0, 1\}, \{2, 3\}$. Drop 1. - $\to (c, w)$ have labels $\{0, 3\}, \{2, 3\}$. In $\mathcal{Q}$ drop 3. - $\to (c, y)$ have labels $\{0, 3\}, \{1, 2\}$. Fully labeled vertex pair. - Dropping label 2: - $(a, w)$ have labels $\{0, 1\}, \{2, 3\}$. Drop 2. - $\to (a, x)$ have labels $\{0, 1\}, \{0, 3\}$. In $\mathcal{P}$ drop 0. - $\to (b, x)$ have labels $\{1, 2\}, \{0, 3\}$. Fully labeled vertex pair.- Dropping label 3: - $(a, w)$ have labels $\{0, 1\}, \{2, 3\}$. Drop 3. - $\to (a, y)$ have labels $\{0, 1\}, \{1, 2\}$. In $\mathcal{P}$ drop 1. - $\to (c, y)$ have labels $\{0, 3\}, \{1, 2\}$. Fully labeled vertex pair. [2] We see that we have obtained two equilibria: $$(b, x) = ((1/3, 0), (1/4, 0))$$$$(c, y) = ((0, 1/4), (0, 1/3))$$which gives the following two Nash equilibria:$$((1, 0), (1, 0))$$$$((0, 1), (0, 1))$$[2]Some code to verify:
###Code
import nashpy as nash
A, B = np.array([[3, 1], [0, 2]]), np.array([[2, 1], [0, 3]])
game = nash.Game(A, B)
for label, eq in enumerate(game.lemke_howson_enumeration()):
print(label, eq)
###Output
0 (array([1., 0.]), array([1., 0.]))
1 (array([0., 1.]), array([0., 1.]))
2 (array([1., 0.]), array([1., 0.]))
3 (array([0., 1.]), array([0., 1.]))
###Markdown
(g) **Using a different initial vertex pair**- $(b, x)$ has labels $\{1, 2\}, \{0, 3\}$. Drop 3.- $\to (b, z)$ have labels $\{1, 2\}, \{0, 1\}$. In $\mathcal{P}$ drop 1.- $\to (d, z)$ have labels $\{2, 3\}, \{0, 1\}$. Fully labeled vertex pair.[1]This gives the Nash equilibrium:$$((3/4, 1/4), (1/4, 3/4))$$some code to verify the result:
###Code
list(game.vertex_enumeration())
###Output
_____no_output_____
###Markdown
(h) Sketch of proof- We know that there is a path between $(0, 0)$ and a fully labelled vertex pair. [1]- Similarly, from a fully labelled vertex pair we can assume that it's possible to drop another fully labelled vertex pairs. [1]- We can construct a graph of pairs of fully labelled vertex pairs. [1]- As we have pairs this corresponds to an even number of fully labelled vertex pairs. Removing $(0, 0)$ this implies there is an odd number of Nash equilibria. [1] Question 2 (a) Definition of a Prisoner's Dilemma (bookwork)$$A =\begin{pmatrix} R & S\\ T & P\end{pmatrix}\qquadB =\begin{pmatrix} R & T\\ S & P\end{pmatrix}$$with the following constraints:$$T > R > P > S$$$$2R > T + S$$[2] (b) Finding valid Prisoner's Dilemmas(i) For $A, B$ to be valid we need:$$\begin{pmatrix} 3 & S\\ 5 & 1\end{pmatrix}=\begin{pmatrix} R & S\\ T & P\end{pmatrix}$$which gives: $R=3, T=5, P=1$further more:$$\begin{pmatrix} 3 & T\\ -1 & 1\end{pmatrix}=\begin{pmatrix} R & T\\ S & P\end{pmatrix}$$[2]which gives: $R=3, S=-1, P=1$Thus we have (R, S, P, T) = (3, -1, 1, 5) which also follows the two required inequalities:$$T > R > P > S \Leftrightarrow 5 > 3 > 1 > -1$$$$2R > T + S \Leftrightarrow 6 > 4$$[2](ii) For $A, B$ to be valid we need:$$\begin{pmatrix} 2 & S\\ -2 & 1\end{pmatrix}=\begin{pmatrix} R & S\\ T & P\end{pmatrix}$$which gives: $R=2, T=-2, P=1$We immediately see that $R > T$ so this cannot be a Prisoner's Dilemma.[4] (c) Markov chain representation of a reactive player match$$M = \begin{pmatrix}3/10 & 3/10 & 1/5 & 1/5\\3/8 & 3/8 & 1/8 & 1/8\\3/20 & 9/20 & 1/10 & 3/10\\3/16 & 9/16 & 1/16 & 3/16\\\end{pmatrix}$$ (d) Expected utility (bookwork)The first player:$$s_1s_2\times R + s1(1-s_2) \times S + (1-s_1)s_2 \times T + (1-s_1)(1-s_2)\times P$$The second player:$$s_1s_2\times R + s1(1-s_2) \times T + (1-s_1)s_2 \times S + (1-s_1)(1-s_2)\times P$$where:$$s_1 = \frac{q_2r_1+p_2}{1-r_1r_2}\qquad s_2 = \frac{p_2r_2+q_2}{1-r_1r_2}$$for:$$r_1=p_1-p_2\qquad r_2=q_1-q_2$$ (e) Expected utility for specific type of playerWe have:$$r_1=x-x/2=x/2\qquad r_2=1/2-1/4=1/4$$thus$$s_1 = \frac{1/4x/2+x/2}{1-x/8}=\frac{5x}{8-x}\qquad s_2 = \frac{1/4x/2+1/4}{1-x/8}=\frac{x+2}{8-x}$$Direct substitution gives:$$\frac{5x(x+2)}{(8-x)^2}\times R + \frac{5x(6-2x)}{(8-x)^2} \times S + \frac{(8-6x)(x+2)}{(8-x)^2} \times T + \frac{(8-6x)(6-2x)}{(8-x)^2}\times P$$ expanding:$$\frac{R\times(5x^2+10x) + S\times (35x-10x^2) + T\times (-6x^2+16-4x) + P\times (48-52x+12x^2)}{(8-x)^2}$$substituting $(R, S, T, P)=(3, 0, 4, 1)$:$$\frac{3x^2 - 38x + 112}{(8-x)^2}$$factorising the $(8-x)$ term gives:$$\frac{(8-x)(14-3x)}{(8-x)^2}=\frac{\left(3 x-14\right)}{\left(x - 8\right)} $$ Some code to verify the calculations:
###Code
import sympy as sym
x, R, S, T, P = sym.symbols("x, R, S, T, P")
expr = R * (5 * x ** 2 + 10 * x) + S * (35 * x - 10 * x ** 2) + T * (-6 * x ** 2 + 16 - 4 * x) + P *(48 - 52 * x + 12 * x ** 2)
expr.subs({R:3, S:0, T:4, P:1}).factor()
import numpy as np
def theoretic_steady_state(p, q):
r_1 = p[0] - p[1]
r_2 = q[0] - q[1]
s_1 = (q[1] * r_1 + p[1]) / (1 - r_1 * r_2)
s_2 = (p[1] * r_2 + q[1]) / (1 - r_1 * r_2)
return np.array([s_1 * s_2, s_1 * (1 - s_2), (1 - s_1) * s_2, (1 - s_1) * (1 - s_2)])
def theoretic_utility(p, q, rstp=np.array([3, 0, 5, 1])):
pi = theoretic_steady_state(p, q)
return np.dot(pi, rstp)
p = np.array([x, x / 2])
q = np.array([sym.S(1) / 2, sym.S(1) / 4])
expr = theoretic_utility(p=p, q=q, rstp=np.array([3, 0, 4, 1]))
sym.factor(expr)
###Output
_____no_output_____
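###Markdown
 Looking back at part (b), a small check of the Prisoner's Dilemma conditions (illustration only; the value of $S$ in the second call is a placeholder, since $T>R$ already fails whatever $S$ is):
###Code
# Illustration: check the Prisoner's Dilemma constraints T > R > P > S and 2R > T + S.
def is_prisoners_dilemma(R, S, T, P):
    return T > R > P > S and 2 * R > T + S
is_prisoners_dilemma(R=3, S=-1, T=5, P=1), is_prisoners_dilemma(R=2, S=0, T=-2, P=1)
###Output
_____no_output_____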
###Markdown
Here is a plot of the utility (calculated both ways)**Note that this is not requested in the question: just here to help understanding.**
###Code
import matplotlib.pyplot as plt  # needed for the plot below
xs = np.linspace(0, 1, 25)
ys = [theoretic_utility(p=[x, x / 2], q=q, rstp=np.array([3, 0, 4, 1])) for x in xs]
simplified_ys = [(3 * x - 14) / (x - 8) for x in xs]
plt.plot(xs, ys, label="$u(x)$")
plt.scatter(xs, simplified_ys, label="$\\frac{3x-14}{x-8}$")
plt.legend();
###Output
_____no_output_____
###Markdown
(f) Identifying the optimal behaviourDirect differentiating gives:$$\frac{3x - 24 - (3x - 14)}{\left(x - 8\right)^2}=\frac{-10}{\left(x - 8\right)^{2}}$$[2]simplifying gives the required result.We see that our function is decreasing for all values of $x$ (it has negative derivative). Thus, the optimal value of $x$ is $0$.[2]
###Code
sym.factor(expr.diff(x))
###Output
_____no_output_____
###Markdown
 Question 3 (a) Defining equations (bookwork)The matrix $A$ corresponds to the utility of a row player in a game where the row player is a given individual and the column player is the population.This gives:$$f_1=ax_1+bx_2\qquad f_2=cx_1+dx_2$$[1]or equivalently:$$f=Ax\qquad \phi=fx$$thus we have the same equation as before but in matrix notation:$$\frac{dx}{dt}=x(f-\phi)$$[1] (b) Defining mutated population (bookwork)Given a strategy vector $x=(x_1, x_2)$, some $\epsilon>0$ and another strategy $y=(y_1, y_2)$, the post entry population $x_{\epsilon}$ is given by:$$x_{\epsilon} = (x_1 + \epsilon(y_1 - x_1), x_2 + \epsilon(y_2 - x_2))$$[2] (c) Defining an evolutionary stable strategy (bookwork)Given a stable population distribution $x$, it represents an **Evolutionary Stable Strategy** (ESS) if and only if there exists $\bar\epsilon>0$:$$u(x, x_{\epsilon})>u(y, x_{\epsilon})\text{ for all }0<\epsilon<\bar\epsilon, y$$[1]where $u(x, y)$ corresponds to the fitness of strategy $x$ in population $y$ which is given by:$$xAy^T$$[1] (d) Proof of result for ESS (bookwork)**Theorem:**If $x$ is an ESS, then for all $y\ne x$, either:1. $u(x,x)>u(y,x)$2. $u(x,x)=u(y,x)$ and $u(x,y)>u(y,y)$---Conversely, if either (1) or (2) holds for all $y\ne x$ then $x$ is an ESS.[2]---**Proof:**---If $x$ is an ESS, then by definition:$$u(x,x_{\epsilon})>u(y,x_{\epsilon})$$which corresponds to:$$(1-\epsilon)u(x,x)+\epsilon u(x,y)>(1-\epsilon)u(y,x)+\epsilon u(y,y)$$- If condition 1 of the theorem holds then the above inequality can be satisfied for \\(\epsilon\\) sufficiently small. If condition 2 holds then the inequality is satisfied.- Conversely: - If $u(x,x) < u(y,x)$ then we can find $\epsilon$ sufficiently small such that the inequality is violated. [1] - If $u(x, x) = u(y,x)$ and $u(x,y) \leq u(y,y)$ then the inequality is violated. [1] (e) Obtain all ESS for a 2 by 2 gameFirst step is to identify the Nash equilibria. Identify best responses for the associated two player game $(A, A^T)$:$$A=\begin{pmatrix}1&\underline{4}\\\underline{2}&1\end{pmatrix}\qquadA^T=\begin{pmatrix}1&\underline{2}\\\underline{4}&1\end{pmatrix}$$[1]This immediately gives 2 pure Nash equilibria, however they are not symmetric so are not ESS candidates:$$(\sigma_r, \sigma_c) \in \{((1, 0), (0, 1)), ((0, 1), (1, 0))\}$$To find the remaining Nash equilibria, we use the support enumeration algorithm which gives:$${\sigma_r}_1 + 4{\sigma_r}_2 = 2{\sigma_r}_1 + {\sigma_r}_2$$$${\sigma_c}_1 + 4{\sigma_c}_2 = 2{\sigma_c}_1 + {\sigma_c}_2$$This gives:$$3{\sigma_r}_2 = {\sigma_r}_1$$which gives a final Nash equilibrium of:$$x = (3/4, 1/4)$$[2]Some code to verify this:
###Code
A = np.array([[1, 4], [2, 1]])
game = nash.Game(A, A.transpose())
list(game.support_enumeration())
###Output
_____no_output_____
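###Markdown
 **Not requested by the question: just here to help understanding.** A minimal sketch that follows the replicator dynamics $\frac{dx}{dt}=x(f-\phi)$ for this $A$ using crude Euler steps; the population should approach the interior equilibrium $(3/4, 1/4)$ found above.
###Code
# Illustration only: crude Euler integration of the replicator dynamics for A.
x_pop = np.array([0.1, 0.9])
dt = 0.01
for _ in range(10000):
    f = A @ x_pop
    phi = x_pop @ f
    x_pop = x_pop + dt * x_pop * (f - phi)
x_pop  # expected to be close to (3/4, 1/4)
###Output
_____no_output_____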
###Markdown
 Considering this final equilibrium, we have:$$u(x, x)=u(y, x)\text{ for all }y$$Applying the theorem we thus consider:$$u(x, y)=(5/4)y_1 + (13/4)y_2=(5/4)y_1 + (13/4)(1-y_1)=-2y_1+13/4$$\begin{align}u(y, y)&=y_1^2+4y_1y_2+2y_1y_2+y_2^2\\ &=y_1^2+6y_1-6y_1^2+1 - 2y_1 + y_1^2\\ &=1+4y_1-4y_1^2\end{align}Thus:$$u(x, y) - u(y, y) = 4y_1^2 - 6y_1+9/4 = 4(y_1 - 3/4)^2$$[2]however $y_1\ne3/4$ thus $x=(3/4, 1/4)$ is an ESS. (f) Discussing the research paper- (i) This paper looks at GT and cancer: it is a proof of concept sitting in literature that already exists on the subject. The main result is confirming intuitive understanding of proliferation/motility. Increased nutrients implies less motility. This is done in two ways: a theoretic game and a simulation. [3]- (ii) The game matrix should in fact be: $\begin{pmatrix}b/2 & b - c\\ b & b - c / 2\end{pmatrix}$ [2]- (iii) The theorem is used in the paper: although this is not done explicitly, it is used to implicitly describe the stability of the equilibria. [2]- (iv) Another approach would be to use a Moran process. The main difference with the approach in the paper is that this would correspond to considering a finite population size as opposed to the infinite population model used. This would correspond more closely to the simulation model used in the paper. [3] Question 4 (a) Definition of a Moran process on a gameConsider a matrix $A\in\mathbb{R}^{2\times 2}$ representing a game with two strategies. $$A=\begin{pmatrix} a & b\\ c & d\end{pmatrix}$$The Moran process is as follows:- At a given time step: all individuals play all other individuals.- Obtain their fitness as given by the game.- Randomly select an individual proportional to their fitness as an individual to be reproduced- Uniformly select an individual to be replaced- Proceed to the next time step.- The process terminates when there is only one type of individual in the population.
(b) Theorem**Theorem: Fixation probabilities for the birth death process**Given a birth death process as defined above, the fixation probability $x_i$ is given by:$$x_i=\frac{1+\sum_{j=1}^{i-1}\prod_{k=1}^j\gamma_k}{1+\sum_{j=1}^{N-1}\prod_{k=1}^j\gamma_k}$$where:$$\gamma_k = \frac{p_{k,k-1}}{p_{k,k+1}}$$**Proof**We have:\begin{align} p_{i,i+1}x_{i+1} & = -p_{i,i-1}x_{i-1} + x_i(1 - p_{ii}) \\ p_{i,i+1}x_{i+1} & = p_{i,i-1}(x_{i} - x_{i-1}) + x_ip_{i,i+1} \\ x_{i+1} - x_i & = \frac{p_{i, i-1}}{p_{i, i+1}}(x_i-x_{i-1})=\gamma_i(x_i-x_{i-1})\end{align}We observe that:\begin{align} x_2 - x_1 &= \gamma_1(x_1-x_{0})=\gamma_1x_1\\ x_3 - x_2 &= \gamma_2(x_2-x_1)=\gamma_2\gamma_1x_1\\ x_4 - x_3 &= \gamma_3(x_3-x_2)=\gamma_3\gamma_2\gamma_1x_1\\ &\; \vdots & \\ x_{i+1} - x_i &= \gamma_i(x_i-x_{i-1})=\prod_{k=1}^i\gamma_kx_1\\ &\; \vdots & \\ x_{N} - x_{N-1} &= \gamma_{N-1}(x_{N-1}-x_{N-2})=\prod_{k=1}^{N-1}\gamma_kx_1\\\end{align}thus we have:$$x_i=\sum_{j=0}^{i-1}x_{j+1}-x_j=\left(1+\sum_{j=1}^{i-1}\prod_{k=1}^j\gamma_k\right)x_1$$we complete the proof by solving the following equation to obtain $x_1$:$$x_N=1=\left(1+\sum_{j=1}^{N-1}\prod_{k=1}^j\gamma_k\right)x_1$$ (c) Moran process for the gameAssuming $i$ individuals of the first type, for this game we have $N=5$ and $(a, b, c, d)=(4, 1, 6, 2)$ the fitness of both types is given respectively by:$$f_{1i}=\frac{a(i-1)+b(N-i)}{N-1}=\frac{4i-4+5-i}{4}=\frac{3i+1}{4}$$$$f_{2i}=\frac{c(i)+d(N-i-1)}{N-1}=\frac{6i+2(4-i)}{4}=\frac{4i+8}{4}=i$$which gives:$$\gamma_i=\frac{f_{2i}}{f_{1i}}=\frac{4i+8}{3i+1}$$thus:$$x_1=\frac{1}{1+\sum_{j=1}^{4}\prod_{k=1}^j\frac{4k+8}{3k+1}}=\frac{1}{1+3+3(16/7)+3(16/7)2+3(16/7)2(24/13)}=\frac{91}{4540}$$$$x_{4}=\frac{1+\sum_{j=1}^{3}\prod_{k=1}^j\frac{4k+8}{3k+1}}{1+\sum_{j=1}^{4}\prod_{k=1}^j\frac{4k+8}{3k+1}}=\frac{1+3+3(16/7)+3(16/7)2}{1+3+3(16/7)+3(16/7)2+3(16/7)2(24/13)}=\frac{559}{1135}$$Thus the two fixation probabilities $x_1$ are:- Cooperation: $\frac{91}{4540}\approx 0.02$- Defection $1-\frac{559}{1135}=\frac{576}{1135}\approx 0.51$Some code to verify the result:
###Code
def theoretic_fixation(N, game, i=1):
"""
Calculate x_i as given by the above formula
"""
f_ones = np.array([(game[0, 0] * (i - 1) + game[0, 1] * (N - i)) / (N - 1) for i in range(1, N)])
f_twos = np.array([(game[1, 0] * i + game[1, 1] * (N - i - 1)) / (N - 1) for i in range(1, N)])
gammas = f_twos / f_ones
return (1 + np.sum(np.cumprod(gammas[:i-1]))) / (1 + np.sum(np.cumprod(gammas)))
import sympy as sym
game = np.array([[sym.S(4), sym.S(1)], [sym.S(6), sym.S(2)]])
theoretic_fixation(N=5, game=game), 1- theoretic_fixation(N=5, game=game, i=4)
[float(x) for x in _]
###Output
_____no_output_____
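###Markdown
 A quick cross-check of the same number by direct simulation (illustration only, not requested): repeatedly run the Moran process for this game starting from one individual of the first type and record how often it fixates; the proportion should be roughly $91/4540\approx 0.02$.
###Code
# Illustration only: estimate the fixation probability x_1 by simulating the Moran process.
def simulate_fixation(game, N=5, i=1, repetitions=5000, seed=0):
    rng = np.random.default_rng(seed)
    fixations = 0
    for _ in range(repetitions):
        k = i  # current number of individuals of the first type
        while 0 < k < N:
            f1 = (game[0, 0] * (k - 1) + game[0, 1] * (N - k)) / (N - 1)
            f2 = (game[1, 0] * k + game[1, 1] * (N - k - 1)) / (N - 1)
            birth_first = rng.random() < k * f1 / (k * f1 + (N - k) * f2)
            death_first = rng.random() < k / N
            k += int(birth_first) - int(death_first)
        fixations += k == N
    return fixations / repetitions
simulate_fixation(np.array([[4, 1], [6, 2]]))
###Output
_____no_output_____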
###Markdown
(d) Game for TfT vs AlternatorThe history of 5 turns would give between TfT and Alternator1. TfT: C Alternator: C with scores: (4, 4)2. TfT: C Alternator: D with scores: (1, 6)3. TfT: D Alternator: C with scores: (6, 1)4. TfT: C Alternator: D with scores: (1, 6)5. TfT: D Alternator: C with scores: (6, 1)The history of 5 turns would give between TfT and TfT1. TfT: C TfT: C with scores: (4, 4)2. TfT: C TfT: C with scores: (4, 4)3. TfT: C TfT: C with scores: (4, 4)4. TfT: C TfT: C with scores: (4, 4)5. TfT: C TfT: C with scores: (4, 4)The history of 5 turns between Alternator and Alternator:1. Alternator: C Alternator: C with scores: (4, 4)2. Alternator: D Alternator: D with scores: (2, 2)3. Alternator: C Alternator: C with scores: (4, 4)4. Alternator: D Alternator: D with scores: (2, 2)5. Alternator: C Alternator: C with scores: (4, 4)Summing these, the payoff matrix is thus given by:$$\begin{pmatrix}20 & 18 \\18 & 16\\\end{pmatrix}$$ (e) Moran calculationsRepeating the previous calculations:$$f_{1i}=\frac{a(i-1)+b(N-i)}{N-1}=\frac{20(i - 1) + 18(5-i)}{4}=\frac{2i+70}{4}$$$$f_{2i}=\frac{c(i)+d(N-i-1)}{N-1}=\frac{18i+16(4-i)}{4}=\frac{2i+64}{4}=i$$which gives:$$\gamma_i=\frac{f_{2i}}{f_{1i}}=\frac{2i+64}{2i+70}$$thus:$$x_1=\frac{1}{1+\sum_{j=1}^{4}\prod_{k=1}^j\frac{2k+64}{2k+70}}=\frac{1}{1+11/12+(11/12)(34/37)+(11/12)(34/37)(35/38)+(11/12)(34/37)(35/38)(12/13)}=\frac{247}{1050}$$$$x_4=\frac{1+\sum_{j=1}^{3}\prod_{k=1}^j\frac{2k+64}{2k+70}}{1+\sum_{j=1}^{4}\prod_{k=1}^j\frac{2k+64}{2k+70}}=\frac{1+11/12+(11/12)(34/37)+(11/12)(34/37)(35/38)}{1+11/12+(11/12)(34/37)+(11/12)(34/37)(35/38)+(11/12)(34/37)(35/38)(12/13)}=\frac{923}{1110}$$Thus the two fixation probabilities $x_1$ are:- TfT: $\frac{247}{1050}\approx 0.24$- Alternator $1-\frac{923}{1110}=\frac{187}{1110}\approx 0.17$Some code to verify the result:
###Code
game = np.array([[sym.S(20), sym.S(18)], [sym.S(18), sym.S(16)]])
theoretic_fixation(N=5, game=game), 1 - theoretic_fixation(N=5, game=game, i=4)
[float(x) for x in _]
###Output
_____no_output_____ |
generating_propositions/filtering_probes.ipynb | ###Markdown
 Filtering the probes generated using VisDial dialogues The script `main.py` manipulates QA pairs from VisDial dialogues and turns them into propositions/probes. The files are saved in `propositions/original/propositions_{set}.json`. However, two post-processing steps are necessary to improve the datasets before they are used for the classification tasks:- the proportion of caption probes is much higher than that of the other turns, so we downsample them to bring the distribution among turns closer to uniform.- to make sure the training set has no bias with respect to the True/False dimension, we also downsample it to enforce that, for every true probe, a false counterpart of the same type is included.
###Code
import json
import csv
import copy
import random
import matplotlib.pyplot as plt
from collections import Counter
from pathlib import Path
from IPython.display import Image
PATH_TO_PROBES = 'propositions/'
random.seed(2204)
###Output
_____no_output_____
###Markdown
 Downsampling captions and removing `how_many` rule When we first run the `analysis_generated_probes` notebook, we notice that many more caption probes are generated, meaning that turn_shared=0 is manipulated too often in comparison to the other turns.
###Code
Image(filename='propositions/original-propositions_turn-distribution.png')
###Output
_____no_output_____
###Markdown
So we begin by downsampling caption probes so that the final distribution is roughly uniform among turns.
###Code
CAPTION_PROPORTION = 0.15 # select a small number
propositions = {}
with open(PATH_TO_PROBES + 'original/propositions_train.json', 'r') as data:
propositions['train'] = json.load(data)
with open(PATH_TO_PROBES + 'original/propositions_val.json', 'r') as data:
propositions['val'] = json.load(data)
with open(PATH_TO_PROBES + 'original/propositions_test.json', 'r') as data:
propositions['test'] = json.load(data)
###Output
_____no_output_____
###Markdown
We generated the human sample with a previous version of the propositions dataset, so we just make sure to include all datapoints in the test set again.
###Code
human_sample = []
eval_props = {}
eval_captions = {}
with open('propositions/human-results.csv', newline='') as csvfile:
reader = csv.reader(csvfile)
for username, _, _, dialog_id, _, prop_id, _, _, _, _, turn_shared, _, truefalse, *_, sentence in reader:
if username == 'username':
continue
a_thinks_true = 1 if 'entailment' in truefalse else 0
human_sample.append((int(dialog_id), int(turn_shared), a_thinks_true))
eval_props[(int(dialog_id), int(turn_shared), a_thinks_true)] = sentence
if turn_shared == '0':
eval_captions[int(dialog_id)] = (prop_id, a_thinks_true, sentence)
assert len(human_sample) == 300
human_sample = set(human_sample)
assert len(human_sample) == 100
assert len(eval_props) == 100
###Output
_____no_output_____
###Markdown
 The datapoints below (dialogues 639, 1477, 2897, 4718, 7683, 4327) derive from rules that can invert order (```what_color``` and ```image_in_color```). So we put them back into the order that was used in the sample.
###Code
need_readjust_order = [(639, 2, 1), (1477, 1, 1), (2897, 1, 1), (4718, 3, 1), (7683, 4, 1), (4327, 1, 0)]
def readjust_order(d_id, proposition):
# some hacking to make sure that the original human datapoints are not excluded
# because the new generation changed the order of the colors or the word picture/image
# in the proposition, we replace them.
turn_shared = proposition['turn_shared']
a_thinks_true = proposition['a_thinks_true']
sentence = proposition['proposition']
if (d_id, turn_shared, a_thinks_true) == (639, 2, 1):
assert set(sentence) == set('the train is green and yellow.')
print(sentence)
return 'the train is green and yellow.'
if (d_id, turn_shared, a_thinks_true) == (1477, 1, 1):
assert set(sentence) == set('the bus is white, green and blue.')
print(sentence)
return 'the bus is white, green and blue.'
if (d_id, turn_shared, a_thinks_true) == (2897, 1, 1):
assert set(sentence) == set('the plane is white and red.')
print(sentence)
return 'the plane is white and red.'
if (d_id, turn_shared, a_thinks_true) == (4718, 3, 1):
assert set(sentence) == set('the man wearing a bathrobe\'s robe is brown and orange.')
print(sentence)
return 'the man wearing a bathrobe\'s robe is brown and orange.'
if (d_id, turn_shared, a_thinks_true) == (7683, 4, 1):
assert set(sentence) == set('the wall is white and green.')
print(sentence)
return 'the wall is white and green.'
if (d_id, turn_shared, a_thinks_true) == (4327, 1, 0):
assert sentence in ['the picture is not in color.', 'the image is not in color.', 'the photo is not in color.']
print(sentence)
return 'the picture is not in color.'
print('!!!')
return None
def check_conditions(d, p, turn_shared, split, sentence):
    # we check the even indexes, because when we exclude a probe we exclude its negated
    # counterpart as well, which is p+1, so that we keep the same pattern as the other
    # manipulation rules (all probes appear with their negation in the sets)
if int(p) % 2 == 0 and turn_shared == 0:
if split != 'test':
return True
# in the captions, we are sure that the order of p (should not have) did not change...
# coref is not used for the captions and the Spacy version should be the same.
# in any case, this will be enforced/checked in the next steps
elif d not in eval_captions:
return True
else:
# do not sample out a caption that occurred in the humal results, or its counterpart.
prop_id = int(eval_captions[d][0])
if p != prop_id and p+1 != prop_id:
return True
return False
downsampled_captions = copy.deepcopy(propositions)
for split in ('train', 'val', 'test'):
for d, dialogue in propositions[split]['dialogues'].items():
for p, proposition in dialogue.items():
turn_shared = int(proposition['turn_shared'])
a_thinks_true = int(proposition['a_thinks_true'])
if split == 'test' and (int(d), turn_shared, a_thinks_true) in need_readjust_order:
downsampled_captions[split]['dialogues'][d][p]['proposition'] = readjust_order(int(d), proposition)
elif check_conditions(int(d), int(p), turn_shared, split, proposition['proposition']):
if random.random() > CAPTION_PROPORTION:
del downsampled_captions[split]['dialogues'][d][p]
del downsampled_captions[split]['dialogues'][d][str(int(p)+1)]
for split in ('train', 'val', 'test'):
with open(PATH_TO_PROBES + '/downsampled-propositions_'+split+'.json', 'w') as f:
json.dump(downsampled_captions[split], f)
###Output
the train is green and yellow.
the bus is white, green and blue.
the plane is white and red.
the photo is not in color.
the man wearing a bathrobe's robe is orange and brown.
the wall is white and green.
###Markdown
 Make sure that all datapoints in the human sample are included. The sample has 100 datapoints, but we'll exclude 3 how_many probes and 1 that was filtered out (it contained the word fat), so it must be 96.
###Code
split = 'test'
check = set()
for (d, turn_shared, a_thinks_true) in human_sample:
all_props = set([prop['proposition'] for p, prop in downsampled_captions[split]['dialogues'][str(d)].items()])
if eval_props[(d, turn_shared, a_thinks_true)].lower() not in all_props:# and d not in ('639', '1477', '4327'):
print(d, turn_shared, eval_props[(d, turn_shared, a_thinks_true)].lower())
check.add(d)
print(f'The missing dialogues are: {check}.')
###Output
The missing dialogues are: {7054, 6447, 14, 7417, 1310}.
###Markdown
In total, 6 datapoints must be ignored in the human results sample. Another problem that showed up later: the `how_many` as is implemented rule breaks our assumptions. We have removed them during generation instead.- (798, 1, false): how_many, was excluded- (6447, 1, false): how_many, was excluded- (1310, 2, true): how_many, was excluded(7347, 2, false): dialogue would be filtered out, but I put it back in the sample Three changed due to the new coref resolution:- (14, 3, 1): 'there is no water coming off from it.' is now 'there is no water coming off from the red fire hydrant.'- (7054, 6, 0): 'the photo is in a house.' is now 'the old fashioned motorcycle is in a house.'- (7417, 3, 1): "One cannot see any what the 3're looking at." is now "one cannot see any what the kids're looking at." For captions, we can (supposedly) rely on the proposition ID. Coref resolution is not used when generating caption propositions and the rules did not change. By using the same Spacy version, nothing should change. For the remaining rules, the prop_id is not reliable because by removing the ```how_many``` rule and with the new coref model, besides other small adjustments in the rules (```look_like```), their order may have changed. What persists is the triplet (dialogue_id, turn_shared, a_thinks_true).
###Code
split = 'test'
check = set()
for (d, turn_shared, a_thinks_true) in human_sample:
if turn_shared == 0:
(prop_id, _, sentence) = eval_captions[d]
assert sentence.lower() == downsampled_captions[split]['dialogues'][str(d)][prop_id]['proposition']
#print(sentence.lower())
#print(downsampled_captions[split]['dialogues'][str(d)][prop_id]['proposition'])
#print('\n')
elif d in (14, 1310, 7054, 7417, 798, 6447):
# these were excluded
continue
else:
sentence = eval_props[(d, turn_shared, a_thinks_true)]
props_list = downsampled_captions[split]['dialogues'][str(d)]
item = [prop for idx, prop in props_list.items() if prop['turn_shared'] == turn_shared and prop['a_thinks_true'] == a_thinks_true]
assert sentence.lower() == item[0]['proposition']
#print(sentence.lower())
#print(item[0]['proposition'])
#print('\n')
###Output
_____no_output_____
###Markdown
 Making the training set balanced Some probes/rules may occur more often as true/false, and that would introduce bias on the main task. In order to avoid this true/false bias in the training probes, we manufacture a training set in which, for each true probe included, a false counterpart with the same surface form is also included (i.e. if 'it is sunny' is included wrt an image+dialogue where it is true, we make sure to include another 'it is sunny' probe paired with an image+dialogue that makes it false).
###Code
random.seed(2204)
###Output
_____no_output_____
###Markdown
Select a maximum to clip and avoid probes that occur too often:
###Code
CLIP = 1000
path_props = Path(PATH_TO_PROBES, 'downsampled-propositions_train.json')
with open(path_props, 'r') as f:
props_data = json.load(f)
probe_sentences = {}
for d_id, dialogue in props_data['dialogues'].items():
for p_id, prop in dialogue.items():
sent = prop['proposition']
if sent not in probe_sentences:
probe_sentences[sent] = {'true':[], 'false':[]}
status = 'true' if prop['a_thinks_true'] == 1 else 'false'
probe_sentences[sent][status].append((d_id, p_id))
usable = [s for s in probe_sentences.keys() if len(probe_sentences[s]['true']) > 0 and len(probe_sentences[s]['false']) > 0]
print(f'Number of probe types that occur both as true and false: {len(usable)}.')
sample = {s: {'true': [], 'false': []} for s in usable}
datapoints = []
freqs = Counter()
for s, occurrences in probe_sentences.items():
n = min(len(occurrences['true']), len(occurrences['false']))
n = min(n, CLIP) # select a maximum to clip
if n == 0:
continue
selected_true = random.sample(probe_sentences[s]['true'], n)
selected_false = random.sample(probe_sentences[s]['false'], n)
sample[s]['true'].append(selected_true)
sample[s]['false'].append(selected_false)
datapoints += selected_true
datapoints += selected_false
freqs.update({s: n*2})
print(f'Size of the new training set will be: {sum(freqs.values())}')
freq_probes = Counter(freqs.values())
plt.bar(freq_probes.values(), freq_probes.keys(), width=1)
#plt.ylim(0, 100)
plt.xlabel('n probe types')
plt.ylabel('n occurrences on balanced train set')
plt.xlim(0,250)
plt.show()
###Output
_____no_output_____
###Markdown
Example:
###Code
sample['there is a dog.']['true']
sample['there is a dog.']['false']
###Output
_____no_output_____
###Markdown
Saving as file:
###Code
props_data.keys()
balanced_props = {'orig_data': props_data['orig_data'], 'set': props_data['set'],
'dialogues':{str(x): {} for x in range(len(props_data['dialogues']))}}
for (d_id, p_id) in set(datapoints):
prop = props_data['dialogues'][d_id][p_id]
if d_id not in balanced_props['dialogues']:
balanced_props['dialogues'][d_id] = {}
balanced_props['dialogues'][d_id][p_id] = prop
n_samples = len([1 for d in balanced_props['dialogues'].values() for p in d])
print(n_samples)
###Output
344988
###Markdown
Number of training datapoints will be:
###Code
n_samples*11 # n_probes * 11 turns
###Output
_____no_output_____
###Markdown
Save the new training file:
###Code
with open(Path(PATH_TO_PROBES, 'downsampled-balanced-propositions_train.json'), 'w') as f:
json.dump(balanced_props, f)
###Output
_____no_output_____ |
curriculum/unit-1-statistics-fundamentals/sprint-1-data-wrangling-and-storytelling/module3-join-and-reshape-data/LS_DS_113_Join_and_Reshape_Data.ipynb | ###Markdown
Lambda School Data Science*Unit 1, Sprint 1, Module 3*--- Join and Reshape datasetsObjectives- concatenate data with pandas- merge data with pandas- understand tidy data formatting- melt and pivot data with pandasLinks- [Pandas Cheat Sheet](https://github.com/pandas-dev/pandas/blob/master/doc/cheatsheet/Pandas_Cheat_Sheet.pdf)- [Tidy Data](https://en.wikipedia.org/wiki/Tidy_data) - Combine Data Sets: Standard Joins - Tidy Data - Reshaping Data- Python Data Science Handbook - [Chapter 3.6](https://jakevdp.github.io/PythonDataScienceHandbook/03.06-concat-and-append.html), Combining Datasets: Concat and Append - [Chapter 3.7](https://jakevdp.github.io/PythonDataScienceHandbook/03.07-merge-and-join.html), Combining Datasets: Merge and Join - [Chapter 3.8](https://jakevdp.github.io/PythonDataScienceHandbook/03.08-aggregation-and-grouping.html), Aggregation and Grouping - [Chapter 3.9](https://jakevdp.github.io/PythonDataScienceHandbook/03.09-pivot-tables.html), Pivot Tables Reference- Pandas Documentation: [Reshaping and Pivot Tables](https://pandas.pydata.org/pandas-docs/stable/reshaping.html)- Modern Pandas, Part 5: [Tidy Data](https://tomaugspurger.github.io/modern-5-tidy.html)- [Hadley Wickham's famous paper](http://vita.had.co.nz/papers/tidy-data.html) on Tidy Data Download dataWe’ll work with a dataset of [3 Million Instacart Orders, Open Sourced](https://tech.instacart.com/3-million-instacart-orders-open-sourced-d40d29ead6f2)!
###Code
!wget https://s3.amazonaws.com/instacart-datasets/instacart_online_grocery_shopping_2017_05_01.tar.gz
!tar --gunzip --extract --verbose --file=instacart_online_grocery_shopping_2017_05_01.tar.gz
%cd instacart_2017_05_01
!ls -lh *.csv
###Output
-rw-r--r-- 1 502 staff 2.6K May 2 2017 aisles.csv
-rw-r--r-- 1 502 staff 270 May 2 2017 departments.csv
-rw-r--r-- 1 502 staff 551M May 2 2017 order_products__prior.csv
-rw-r--r-- 1 502 staff 24M May 2 2017 order_products__train.csv
-rw-r--r-- 1 502 staff 104M May 2 2017 orders.csv
-rw-r--r-- 1 502 staff 2.1M May 2 2017 products.csv
###Markdown
Join Datasets Goal: Reproduce this exampleThe first two orders for user id 1:
###Code
from IPython.display import display, Image
url = 'https://cdn-images-1.medium.com/max/1600/1*vYGFQCafJtGBBX5mbl0xyw.png'
example = Image(url=url, width=600)
display(example)
###Output
_____no_output_____
###Markdown
Load dataHere's a list of all six CSV filenames
###Code
!ls -lh *.csv
###Output
-rw-r--r-- 1 502 staff 2.6K May 2 2017 aisles.csv
-rw-r--r-- 1 502 staff 270 May 2 2017 departments.csv
-rw-r--r-- 1 502 staff 551M May 2 2017 order_products__prior.csv
-rw-r--r-- 1 502 staff 24M May 2 2017 order_products__train.csv
-rw-r--r-- 1 502 staff 104M May 2 2017 orders.csv
-rw-r--r-- 1 502 staff 2.1M May 2 2017 products.csv
###Markdown
For each CSV- Load it with pandas- Look at the dataframe's shape- Look at its head (first rows)- `display(example)`- Which columns does it have in common with the example we want to reproduce? aisles
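For example, a minimal version of this pattern for the first file might look like the cell below (shown for reference; use the empty cells to work through each file).
###Code
# Example of the loading pattern described above, applied to aisles.csv.
import pandas as pd
aisles = pd.read_csv('aisles.csv')
print(aisles.shape)
aisles.head()
###Output
_____no_output_____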
###Code
###Output
_____no_output_____
###Markdown
departments
###Code
###Output
_____no_output_____
###Markdown
order_products__prior
###Code
###Output
_____no_output_____
###Markdown
order_products__train
###Code
###Output
_____no_output_____
###Markdown
orders
###Code
###Output
_____no_output_____
###Markdown
products
###Code
###Output
_____no_output_____
###Markdown
Concatenate order_products__prior and order_products__train
###Code
###Output
_____no_output_____
###Markdown
Get a subset of orders — the first two orders for user id 1 From `orders` dataframe:- user_id- order_id- order_number- order_dow- order_hour_of_day Merge dataframes Merge the subset from `orders` with columns from `order_products`
###Code
###Output
_____no_output_____
###Markdown
Merge with columns from `products`
###Code
###Output
_____no_output_____
###Markdown
Reshape Datasets Why reshape data? Some libraries prefer data in different formatsFor example, the Seaborn data visualization library prefers data in "Tidy" format often (but not always).> "[Seaborn will be most powerful when your datasets have a particular organization.](https://seaborn.pydata.org/introduction.htmlorganizing-datasets) This format ia alternately called “long-form” or “tidy” data and is described in detail by Hadley Wickham. The rules can be simply stated:> - Each variable is a column- Each observation is a row> A helpful mindset for determining whether your data are tidy is to think backwards from the plot you want to draw. From this perspective, a “variable” is something that will be assigned a role in the plot." Data science is often about putting square pegs in round holesHere's an inspiring [video clip from _Apollo 13_](https://www.youtube.com/watch?v=ry55--J4_VQ): “Invent a way to put a square peg in a round hole.” It's a good metaphor for data wrangling! Hadley Wickham's ExamplesFrom his paper, [Tidy Data](http://vita.had.co.nz/papers/tidy-data.html)
###Code
%matplotlib inline
import pandas as pd
import numpy as np
import seaborn as sns
table1 = pd.DataFrame(
[[np.nan, 2],
[16, 11],
[3, 1]],
index=['John Smith', 'Jane Doe', 'Mary Johnson'],
columns=['treatmenta', 'treatmentb'])
table2 = table1.T
###Output
_____no_output_____
###Markdown
"Table 1 provides some data about an imaginary experiment in a format commonly seen in the wild. The table has two columns and three rows, and both rows and columns are labelled."
###Code
table1
###Output
_____no_output_____
###Markdown
"There are many ways to structure the same underlying data. Table 2 shows the same data as Table 1, but the rows and columns have been transposed. The data is the same, but the layout is different."
###Code
table2
###Output
_____no_output_____
###Markdown
"Table 3 reorganises Table 1 to make the values, variables and obserations more clear.Table 3 is the tidy version of Table 1. Each row represents an observation, the result of one treatment on one person, and each column is a variable."| name | trt | result ||--------------|-----|--------|| John Smith | a | - || Jane Doe | a | 16 || Mary Johnson | a | 3 || John Smith | b | 2 || Jane Doe | b | 11 || Mary Johnson | b | 1 | Table 1 --> TidyWe can use the pandas `melt` function to reshape Table 1 into Tidy format.
###Code
###Output
_____no_output_____
###Markdown
Table 2 --> Tidy
###Code
##### LEAVE BLANK --an assignment exercise #####
###Output
_____no_output_____
###Markdown
Tidy --> Table 1The `pivot_table` function is the inverse of `melt`.
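A possible sketch, assuming the `tidy` frame from the melt example above:
###Code
# Going back from tidy (long) format to the wide layout of Table 1.
wide = tidy.pivot_table(index='name', columns='trt', values='result').add_prefix('treatment')
wide
###Output
_____no_output_____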
###Code
###Output
_____no_output_____
###Markdown
Tidy --> Table 2
###Code
##### LEAVE BLANK --an assignment exercise #####
###Output
_____no_output_____
###Markdown
Seaborn exampleThe rules can be simply stated:- Each variable is a column- Each observation is a rowA helpful mindset for determining whether your data are tidy is to think backwards from the plot you want to draw. From this perspective, a “variable” is something that will be assigned a role in the plot."
###Code
sns.catplot(x='trt', y='result', col='name',
kind='bar', data=tidy, height=2);
###Output
_____no_output_____
###Markdown
Now with Instacart data
###Code
products = pd.read_csv('products.csv')
order_products = pd.concat([pd.read_csv('order_products__prior.csv'),
pd.read_csv('order_products__train.csv')])
orders = pd.read_csv('orders.csv')
###Output
_____no_output_____
###Markdown
Goal: Reproduce part of this exampleInstead of a plot with 50 products, we'll just do two — the first products from each list- Half And Half Ultra Pasteurized- Half Baked Frozen Yogurt
###Code
from IPython.display import display, Image
url = 'https://cdn-images-1.medium.com/max/1600/1*wKfV6OV-_1Ipwrl7AjjSuw.png'
example = Image(url=url, width=600)
display(example)
###Output
_____no_output_____
###Markdown
So, given a `product_name` we need to calculate its `order_hour_of_day` pattern. Subset and MergeOne challenge of performing a merge on this data is that the `products` and `orders` datasets do not have any common columns that we can merge on. Due to this we will have to use the `order_products` dataset to provide the columns that we will use to perform the merge.
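A sketch of that bridge-merge (assuming the standard Instacart column names `product_id`, `order_id` and `order_hour_of_day`):
###Code
# Use order_products as the bridge between products and orders.
product_names = ['Half And Half Ultra Pasteurized', 'Half Baked Frozen Yogurt']
merged = (products[products['product_name'].isin(product_names)]
          .merge(order_products, on='product_id')
          .merge(orders, on='order_id'))
merged[['product_name', 'order_hour_of_day']].head()
###Output
_____no_output_____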
###Code
###Output
_____no_output_____
###Markdown
4 ways to reshape and plot 1. value_counts
###Code
###Output
_____no_output_____
###Markdown
2. crosstab
###Code
###Output
_____no_output_____
###Markdown
3. Pivot Table
###Code
###Output
_____no_output_____
###Markdown
4. melt
###Code
###Output
_____no_output_____ |
notebooks/Legacy_MCMC_Demo.ipynb | ###Markdown
 Notebook to illustrate MCMC sampling of Norms from pre-defined PCFGs *There are 5 main sections:* **1. Initialising Environment and True Expression** **2. Create Action Data on environment while performing randomised tasks** **3. Run MCMC Algorithms to learn expressions from the data created earlier** **4. Test the performance of MCMC Algorithm by calculating Precision and Recall of Learned Norms** **5. Test the convergence of MCMC algorithm**
###Code
#Import the different modules required
from mcmc_norm_learning.environment import *
from mcmc_norm_learning.rules_4 import *
from mcmc_norm_learning.robot_task_new import *
from mcmc_norm_learning.algorithm_1_v4 import create_data,algorithm_1,to_tuple
from mcmc_norm_learning.mcmc_performance import performance
from mcmc_norm_learning.mcmc_convergence import prepare_sequences,calculate_R
import matplotlib.pyplot as plt
from collections import Counter
import pickle
import time
import seaborn as sns
import os
import sys
from tqdm import tnrange, tqdm_notebook
###Output
_____no_output_____
###Markdown
 1. Initialising Environment and True Expression* *Environment can be initialised with any number of objects (default=20) and a seed value can also be fed.** *Expression refers to Norms initialised from the PCFG, True Expression means the expression used to create data which is further used to learn norms from MCMC.*
###Code
actionable = []
while len(actionable) < 4:
env=create_env(N=40)
#fig,ax=plt.subplots(figsize=(8,6))
#plot_env(env,ax,legend=True)
#Dump env to file
with open('./demo/demo_env.sv', 'wb') as fp:
pickle.dump(env, fp)
target_area=[position(-0.8,0.7),position(0.25,0.99)]
task1=task(colour_specific=np.nan,shape_specific=np.nan,target_area=target_area)
rob = robot(task1,env)
actionable = rob.all_actionable()
print(actionable)
fig,ax=plt.subplots(figsize=(8,6))
plot_task(env,ax,"Clearing the highlighted area",task1,True)
#fig.savefig("example_task.pdf", bbox_inches='tight')
true_expression=expand("NORMS")
print_expression(true_expression)
#Dump expression
with open('./demo/demo_exp.sv', 'wb') as fp:
pickle.dump(true_expression, fp)
###Output
_____no_output_____
###Markdown
2. Create Action Data on environment while performing randomised tasksAction data is can be created in two ways (parametrised by random_task):1. Either Initialising a task beforehand and performing it ceratin times (*num_repeat*)2. or, by initialisng random tasks for each iteration in *num_repeat*. In such a case relevance of tasks becomes redundant, and though target are of task is randomised, scope of task (i.e. color and shape of objects) is fixed. And in next step rf must be passed as nan.In both the cases the repetition of tasks is on the original state of environment provided to function
###Code
true_expression = ['Rules', ['Obl', ['Moved', ['Colour', 'any'], ['Shape', 'circle'], ['Zone', '2'], ['Moved', ['Colour', 'g'], ['Shape', 'square'], ['Zone', '2'], ['Next-Move', ['Colour', 'b'], ['Shape', 'triangle']]]], ['Zone', '3']], ['Per', ['Action', 'putdown'], ['Colour', 'g'], ['Shape', 'triangle'], ['PerZone', '1']]]
with open('./demo/demo_exp.sv', 'wb') as fp:
pickle.dump(true_expression, fp)
"""
with open('./demo/demo_env.sv', 'rb') as fp:
env = pickle.load(fp)
with open('./demo/demo_exp.sv', 'rb') as fp:
true_expression = pickle.load(fp)
"""
import mcmc_norm_learning.verify_action_4
import mcmc_norm_learning.rules_4
import mcmc_norm_learning.robot_task_new
import mcmc_norm_learning.working
from importlib import reload
from copy import deepcopy
reload(mcmc_norm_learning.verify_action_4)
reload(mcmc_norm_learning.rules_4)
reload(mcmc_norm_learning.robot_task_new)
reload(mcmc_norm_learning.working)
from mcmc_norm_learning.working import *
from mcmc_norm_learning.robot_task_new import *
from mcmc_norm_learning.verify_action_4 import *
rules = [true_expression[i] for i in range(1, len(true_expression))]
if rules[0][0]=="Pro":
the_pro = rules[0]
print(">>> Prohibition match check for", the_pro, '\n')
for oid in actionable:
get_obj(oid,env).describe()
print(check_pro_or_per(get_obj(oid,env),pro_or_per_action(the_pro),{1:the_pro}))
print()
if rules[0][0]=="Obl":
rule = rules[0]
print(">>> Obligation match check for", rule, '\n')
cond = rule[1]
conds_list = separate_conds(cond)
next_move = conds_list[-1]
for oid in actionable:
get_obj(oid,env).describe()
print(check_obl(get_obj(oid,env),{1:rule}))
print()
if len(rules) > 1:
the_per = rules[1]
print(">>> Permission match check for", the_per,'\n')
for oid in actionable:
get_obj(oid,env).describe()
print(check_pro_or_per(get_obj(oid,env),pro_or_per_action(the_per),{1:the_per}))
print()
rob = robot(task1,deepcopy(env))
ac = rob.all_compliant(rules,"foo")
print(ac)
from working import unless_moves
print("List of unless moves")
print(list(unless_moves(ac)))
order_constrained_subseqs = []
for um in unless_moves(ac):
obl_obj_id, obl_obj_zones, moved_conds, pairs_dict = um
mss = list(matching_subseqs(moved_conds, pairs_dict, obl_obj_id, obl_obj_zones, env))
order_constrained_subseqs.extend(mss)
print("order_constrained_subseqs:")
print(order_constrained_subseqs)
for x in violating_sub_permutations(order_constrained_subseqs):
print(set(x))
s=time.time()
action_profile_with_norms=create_data(true_expression,env,name="demo",task1=task1,random_task=False,
num_actionable=np.nan,num_repeat=250,verbose=False)
print ("Time Taken to complete job={:.2f}s\n".format(time.time()-s))
data=[]
for itr,ap in action_profile_with_norms.items():
for i in range(0,int(len(ap)/2)):
data.append(tuple([ap[2*i],ap[2*i+1]]))
print ("Data Generated:")
for i in range(5):
print(data[i])
with open('./demo/demo_data.sv', 'wb') as fp:
pickle.dump(data, fp)
###Output
_____no_output_____
###Markdown
3. Run MCMC Algorithm to learn expressions from the data created earlier1. rf is the relevance discounting factor for irrelevant expressions.2. sim_t is the similarity threshold: cos(E1,E2) above which p_accept is penalised.3. sim_pen is the penalty imposed if the above threshold is crossed.
###Code
# Different parameters for MCMC sequence
rf=0.5 #To negate relevance logic use np.nan
sim_t=0.8
sim_pen=0.7 #To negate similarity logic use 1
s=time.time()
print ("Generating sequence")
exp_seq,lik_list=algorithm_1(data,env,task1,true_expression,q_dict,rule_dict,
filename="demo/demo_mcmc_report",
sim_threshold=sim_t,similarity_penalty=sim_pen,
relevance_factor=rf,max_iterations=50000,verbose=False)
print ("\nTime Taken to complete job={:.2f}s\n".format(time.time()-s))
learned_expressions=Counter(map(to_tuple,exp_seq[int(len(exp_seq)/2)+1:]))#Discarding 1 half as warmup
print ("Number of unique Norms in sequence={}".format(len(learned_expressions)))
# Write top norms to file
filename="demo_top_norms"
top=learned_expressions.most_common()
t=sum(learned_expressions.values())
exists = os.path.isfile('./demo/{}.txt'.format(filename))
if exists==True:
os.remove('./demo/{}.txt'.format(filename))
original = sys.stdout
for i in range(len(top)):
exp=top[i]
if (i%10==0):
print("Rank:{} Norm has relative frequency={:.3f}%".format(i+1,exp[1]*100/t))
sys.stdout = open('./demo/{}.txt'.format(filename), 'a+')
print("\n\n\n************Rank:{}, %-Frequency={:.3f}%**********".format(i+1,exp[1]*100/t))
print_expression(exp[0])
print("*************************************************")
sys.stdout=original
# Visualise the frequency of top plots
sns.set_style("darkgrid")
fig,ax=plt.subplots(figsize=(8,6),dpi=100)
fig.suptitle('Frequency of Norms in the generated sequence for relevance_factor={}'.format(rf))
t_l=sum(learned_expressions.values())
ax.plot([x*100/t for x in sorted(learned_expressions.values())],"o-",c=(250/255,93/255,130/255,0.7),markerfacecolor=(250/255,18/255,72/255,0.77))
ax.set_ylabel("%-Frequency in sample of size={}".format(t))
ax.set_xlabel("Descending Frequency Rank of Norms from a total of {} Norms".format(len(learned_expressions)))
ax.title.set_text("Weak Inequality check for E_0:\nlog_Likelihood(expression)>=log_Likelihood(No-Norm)")
obl_rank=[] #Ascending order Rank
for rank,x in enumerate(learned_expressions.most_common(),1):
if x[0][1][0] =="Obl":
obl_rank.append(rank)
for rank in obl_rank:
ax.scatter(x=len(learned_expressions)-rank,
y=sorted(learned_expressions.values())[len(learned_expressions)-rank]*100/t,
c='green',s=151,marker='p',alpha=0.88,label='Obligatory,Rank={}'.format(rank))
ax.legend();
###Output
_____no_output_____
###Markdown
 4. Calculate Precision and Recall for the Learned Norms * Precision = $\frac{|\ true-data\ \cap\ predicted-data\ |}{|\ predicted-data\ |}$ * Recall = $\frac{|\ true-data\ \cap\ predicted-data\ |}{|\ true-data\ |}$ * F_beta = $\frac{(1+\beta^2)\ \cdot\ (precision\ \cdot\ recall)}{(\beta^2\cdot precision\ +\ recall)}$ where, * True Data = All Possible Action Profiles that can be produced by the true/trace expression * Predicted Data = All Possible Action Profiles that can be produced by the learned expression
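A tiny worked example of these formulas (illustration only, with made-up sets):
###Code
# Toy illustration of the definitions above, treating the data as plain Python sets.
true_data = {"ap1", "ap2", "ap3", "ap4"}
predicted_data = {"ap2", "ap3", "ap5"}
overlap = len(true_data & predicted_data)
precision = overlap / len(predicted_data)   # 2/3
recall = overlap / len(true_data)           # 2/4
beta = 1
f_beta = (1 + beta**2) * precision * recall / (beta**2 * precision + recall)
precision, recall, f_beta
###Output
_____no_output_____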
###Code
# Calculate precision and recall of top_n norms from learned expressions
pr_result=performance(task1,env,true_expression,learned_expressions,
folder_name="demo",file_name="top_norm",
top_n=np.nan,beta=1,verbose=False)
pr_result.head()
###Output
_____no_output_____
###Markdown
5. Test the convergence of MCMC Chain
###Code
n=10000 #Length of sequence after discarding warm-up part and splitting in half
m=10 #Number of sequences after splitting in half
sequence_list=[]
for i in tnrange(1,int(m/2+1),desc="Loop for Individual Chains"):
print ("\n:::::::::::::::::::: FOR SEQUENCE {} ::::::::::::::::::::".format(i))
exp_seq,lik_list=algorithm_1(data,env,task1,true_expression,q_dict,rule_dict,
filename="demo/convergence/report_for_chain_{}".format(i),
sim_threshold=sim_t,similarity_penalty=sim_pen,
relevance_factor=rf,max_iterations=4*n,verbose=False)
sequence_list.append(exp_seq)
convergence_result=calculate_R(prepare_sequences(sequence_list,warmup=True),50)
convergence_result
###Output
_____no_output_____ |
Week4-2-2-PeerAssign-v5-py.ipynb | ###Markdown
Assignment: Notebook for Peer Assignment IntroductionUsing this Python notebook you will:1. Understand 3 Chicago datasets 1. Load the 3 datasets into 3 tables in a Db2 database1. Execute SQL queries to answer assignment questions Understand the datasets To complete the assignment problems in this notebook you will be using three datasets that are available on the city of Chicago's Data Portal:1. Socioeconomic Indicators in Chicago1. Chicago Public Schools1. Chicago Crime Data 1. Socioeconomic Indicators in ChicagoThis dataset contains a selection of six socioeconomic indicators of public health significance and a “hardship index,” for each Chicago community area, for the years 2008 – 2012.For this assignment you will use a snapshot of this dataset which can be downloaded from:https://ibm.box.com/shared/static/05c3415cbfbtfnr2fx4atenb2sd361ze.csvA detailed description of this dataset and the original dataset can be obtained from the Chicago Data Portal at:https://data.cityofchicago.org/Health-Human-Services/Census-Data-Selected-socioeconomic-indicators-in-C/kn9c-c2s2 2. Chicago Public SchoolsThis dataset shows all school level performance data used to create CPS School Report Cards for the 2011-2012 school year. This dataset is provided by the city of Chicago's Data Portal.For this assignment you will use a snapshot of this dataset which can be downloaded from:https://ibm.box.com/shared/static/f9gjvj1gjmxxzycdhplzt01qtz0s7ew7.csvA detailed description of this dataset and the original dataset can be obtained from the Chicago Data Portal at:https://data.cityofchicago.org/Education/Chicago-Public-Schools-Progress-Report-Cards-2011-/9xs2-f89t 3. Chicago Crime Data This dataset reflects reported incidents of crime (with the exception of murders where data exists for each victim) that occurred in the City of Chicago from 2001 to present, minus the most recent seven days. This dataset is quite large - over 1.5GB in size with over 6.5 million rows. For the purposes of this assignment we will use a much smaller sample of this dataset which can be downloaded from:https://ibm.box.com/shared/static/svflyugsr9zbqy5bmowgswqemfpm1x7f.csvA detailed description of this dataset and the original dataset can be obtained from the Chicago Data Portal at:https://data.cityofchicago.org/Public-Safety/Crimes-2001-to-present/ijzp-q8t2 Download the datasetsIn many cases the dataset to be analyzed is available as a .CSV (comma separated values) file, perhaps on the internet. Click on the links below to download and save the datasets (.CSV files):1. __CENSUS_DATA:__ https://ibm.box.com/shared/static/05c3415cbfbtfnr2fx4atenb2sd361ze.csv1. __CHICAGO_PUBLIC_SCHOOLS__ https://ibm.box.com/shared/static/f9gjvj1gjmxxzycdhplzt01qtz0s7ew7.csv1. __CHICAGO_CRIME_DATA:__ https://ibm.box.com/shared/static/svflyugsr9zbqy5bmowgswqemfpm1x7f.csv__NOTE:__ Ensure you have downloaded the datasets using the links above instead of directly from the Chicago Data Portal. The versions linked here are subsets of the original datasets and have some of the column names modified to be more database friendly which will make it easier to complete this assignment. Store the datasets in database tablesTo analyze the data using SQL, it first needs to be stored in the database.While it is easier to read the dataset into a Pandas dataframe and then PERSIST it into the database as we saw in Week 3 Lab 3, it results in mapping to default datatypes which may not be optimal for SQL querying. 
For example a long textual field may map to a CLOB instead of a VARCHAR. Therefore, __it is highly recommended to manually load the table using the database console LOAD tool, as indicated in Week 2 Lab 1 Part II__. The only difference with that lab is that in Step 5 of the instructions you will need to click on create "(+) New Table" and specify the name of the table you want to create and then click "Next". Now open the Db2 console, open the LOAD tool, Select / Drag the .CSV file for the first dataset, Next create a New Table, and then follow the steps on-screen instructions to load the data. Name the new tables as folows:1. __CENSUS_DATA__1. __CHICAGO_PUBLIC_SCHOOLS__1. __CHICAGO_CRIME_DATA__ Connect to the database Let us first load the SQL extension and establish a connection with the database
###Code
%load_ext sql
###Output
_____no_output_____
###Markdown
In the next cell enter your db2 connection string. Recall you created Service Credentials for your Db2 instance in first lab in Week 3. From the __uri__ field of your Db2 service credentials copy everything after db2:// (except the double quote at the end) and paste it in the cell below after ibm_db_sa://
###Code
# Remember the connection string is of the format:
# %sql ibm_db_sa://my-username:my-password@my-hostname:my-port/my-db-name
# Enter the connection string for your Db2 on Cloud database instance below
%sql ibm_db_sa://wgh83952:7hkv9gzxq%[email protected]:50000/BLUDB
###Output
_____no_output_____
###Markdown
ProblemsNow write and execute SQL queries to solve assignment problems Problem 1 Find the total number of crimes recorded in the CRIME table
###Code
%sql select count(ID) from CRIMEDATACHICAGO;
###Output
* ibm_db_sa://wgh83952:***@dashdb-txn-sbox-yp-lon02-01.services.eu-gb.bluemix.net:50000/BLUDB
Done.
###Markdown
Problem 2 Retrieve first 10 rows from the CRIME table
###Code
%sql select * from CRIMEDATACHICAGO limit 10;
###Output
* ibm_db_sa://wgh83952:***@dashdb-txn-sbox-yp-lon02-01.services.eu-gb.bluemix.net:50000/BLUDB
Done.
###Markdown
Problem 3 How many crimes involve an arrest?
###Code
%sql select count(arrest) from CRIMEDATACHICAGO where arrest='TRUE';
###Output
* ibm_db_sa://wgh83952:***@dashdb-txn-sbox-yp-lon02-01.services.eu-gb.bluemix.net:50000/BLUDB
Done.
###Markdown
 Problem 4 Which unique types of crimes have been recorded at GAS STATION locations? Hint: Which column lists types of crimes e.g. THEFT? Problem 5 In the CENSUS_DATA table list all Community Areas whose names start with the letter ‘B’.
###Code
%sql select distinct(PRIMARY_TYPE), LOCATION_DESCRIPTION from CRIMEDATACHICAGO where LOCATION_DESCRIPTION='GAS STATION';
%sql select COMMUNITY_AREA_NAME from SOCIOECONOICDATA where COMMUNITY_AREA_NAME like 'B%';
###Output
* ibm_db_sa://wgh83952:***@dashdb-txn-sbox-yp-lon02-01.services.eu-gb.bluemix.net:50000/BLUDB
Done.
###Markdown
Problem 6 Which schools in Community Areas 10 to 15 are healthy school certified?
###Code
%sql SELECT NAME_OF_SCHOOL FROM PUBLICSCHOOLSCHICAGO WHERE HEALTHY_SCHOOL_CERTIFIED = 'Yes' AND COMMUNITY_AREA_NUMBER BETWEEN 10 and 15
###Output
* ibm_db_sa://wgh83952:***@dashdb-txn-sbox-yp-lon02-01.services.eu-gb.bluemix.net:50000/BLUDB
Done.
###Markdown
Problem 7 What is the average school Safety Score?
###Code
%sql select avg(SAFETY_SCORE) as avg_school_safety from PUBLICSCHOOLSCHICAGO;
###Output
* ibm_db_sa://wgh83952:***@dashdb-txn-sbox-yp-lon02-01.services.eu-gb.bluemix.net:50000/BLUDB
Done.
###Markdown
Problem 8 List the top 5 Community Areas by average College Enrollment [number of students]
###Code
%sql select COMMUNITY_AREA_NAME , avg(COLLEGE_ENROLLMENT) as AVG_ENROLLMENT from PUBLICSCHOOLSCHICAGO \
group by COMMUNITY_AREA_NAME order by AVG_ENROLLMENT desc limit 5
###Output
* ibm_db_sa://wgh83952:***@dashdb-txn-sbox-yp-lon02-01.services.eu-gb.bluemix.net:50000/BLUDB
Done.
###Markdown
Problem 9 Use a sub-query to determine which Community Area has the least value for school Safety Score?
###Code
%sql select COMMUNITY_AREA_NAME from PUBLICSCHOOLSCHICAGO \
where SAFETY_SCORE = (select min(SAFETY_SCORE) from PUBLICSCHOOLSCHICAGO)
###Output
* ibm_db_sa://wgh83952:***@dashdb-txn-sbox-yp-lon02-01.services.eu-gb.bluemix.net:50000/BLUDB
Done.
###Markdown
Problem 10 [Without using an explicit JOIN operator] Find the Per Capita Income of the Community Area which has a school Safety Score of 1.
###Code
%sql select A.PER_CAPITA_INCOME from SOCIOECONOICDATA A, PUBLICSCHOOLSCHICAGO B \
where A.COMMUNITY_AREA_NUMBER = B.COMMUNITY_AREA_NUMBER and B.SAFETY_SCORE = 1;
###Output
* ibm_db_sa://wgh83952:***@dashdb-txn-sbox-yp-lon02-01.services.eu-gb.bluemix.net:50000/BLUDB
Done.
|
source/lesson07/script.ipynb | ###Markdown
Predicting the Sunshine Hours TomorrowA regression task on the weather prediction dataset
###Code
import pandas as pd
df = pd.read_csv("https://zenodo.org/record/5071376/files/weather_prediction_dataset.csv")
df.describe()
df.shape
df.columns
print(set({ "_".join(x.split("_")[1:]) for x in df.columns if x not in ["MONTH", "DATE"] }))
nr_rows = 365*3
X_data = df.loc[:nr_rows].drop(columns=["DATE", "MONTH"])
X_data.shape
print([ item for item in df.columns if "BASEL" in item])
y_data = df.loc[1:(nr_rows+1)]["BASEL_sunshine"]
y_data.dtype
from sklearn.model_selection import train_test_split
X_train, X_not_train, y_train, y_not_train = train_test_split(X_data, y_data, test_size=.3, random_state=20211013)
X_val, X_test, y_val, y_test = train_test_split(X_not_train, y_not_train, test_size=.5, random_state=20211014)
print(X_train.shape)
print(X_test.shape, X_val.shape)
from tensorflow import keras
def create_nn():
inputs = keras.Input(shape=(X_data.shape[1],), name="input")
dense1 = keras.layers.Dense(100, 'relu', name="dense1")(inputs)
dense2 = keras.layers.Dense(50, 'relu', name="dense2")(dense1)
outputs = keras.layers.Dense(1)(dense2)
return keras.Model(inputs=inputs, outputs=outputs, name="weather_prediction_model")
model = create_nn()
model.summary()
model.compile(optimizer="adam",
loss="mse", #MEAN SQUARED ERROR
metrics=[keras.metrics.RootMeanSquaredError()])
history = model.fit(X_train, y_train,
batch_size = 32,
epochs=200,
verbose=2
)
import seaborn as sns
import matplotlib.pyplot as plt
history_df = pd.DataFrame.from_dict(history.history)
history_df.columns
sns.lineplot(data=history_df['root_mean_squared_error'])
plt.xlabel("epochs")
plt.ylabel("RMSE")
y_train_predict = model.predict(X_train)
y_test_predict = model.predict(X_test)
fig, axes = plt.subplots(1, 2, figsize=(12, 6))
axes[0].scatter(y_train_predict, y_train, s=10, alpha=0.5, color="teal")
axes[0].set_title("training set")
axes[0].set_xlabel("predicted sunshine hours")
axes[0].set_ylabel("true sunshine hours")
axes[1].scatter(y_test_predict, y_test, s=10, alpha=0.5, color="teal")
axes[1].set_title("test set")
axes[1].set_xlabel("predicted sunshine hours")
axes[1].set_ylabel("true sunshine hours")
loss_train, rmse_train = model.evaluate(X_train, y_train)
loss_test, rmse_test = model.evaluate(X_test, y_test)
print("training set",rmse_train)
print(" test set",rmse_test)
###Output
training set 0.5583330392837524
test set 4.1030802726745605
###Markdown
 Yikes, overfitting! Set expectations: How difficult is the defined problem?
###Code
y_baseline_prediction = X_test["BASEL_sunshine"]
plt.figure(figsize=(5,5), dpi=100)
plt.scatter(y_baseline_prediction, y_test, s=10, alpha=.5)
plt.xlabel("sunshine hours yesterday (baseline)")
plt.ylabel("true sunshine hours")
from sklearn.metrics import mean_squared_error
rmse_nn = mean_squared_error(y_test, y_test_predict, squared=False)
rmse_baseline = mean_squared_error(y_test, y_baseline_prediction , squared=False)
print("training set",rmse_nn)
print("baseline set",rmse_baseline)
model = create_nn()
model.compile(optimizer='adam',
loss='mse',
metrics=[keras.metrics.RootMeanSquaredError()])
history = model.fit(X_train, y_train,
batch_size=32,
epochs=200,
validation_data=(X_val, y_val),
verbose=2)
history_df = pd.DataFrame.from_dict(history.history)
sns.lineplot(data=history_df[['root_mean_squared_error','val_root_mean_squared_error']])
plt.xlabel("epochs")
plt.ylabel("RMSE")
###Output
_____no_output_____
###Markdown
Counteract model overfittingreduce the number of parameters of our model
###Code
def create_nn(nodes1, nodes2):
inputs = keras.Input(shape=(X_data.shape[1],), name="input")
dense1 = keras.layers.Dense(nodes1, 'relu', name="dense1")(inputs)
dense2 = keras.layers.Dense(nodes2, 'relu', name="dense2")(dense1)
outputs = keras.layers.Dense(1)(dense2)
return keras.Model(inputs=inputs, outputs=outputs, name="weather_prediction_model")
model = create_nn(10,5)
model.summary()
model.compile(optimizer='adam',
loss='mse',
metrics=[keras.metrics.RootMeanSquaredError()])
history = model.fit(X_train, y_train,
batch_size = 32,
epochs = 200,
validation_data=(X_val, y_val),
verbose = 2)
history_df = pd.DataFrame.from_dict(history.history)
sns.lineplot(data=history_df[['root_mean_squared_error', 'val_root_mean_squared_error']])
plt.xlabel("epochs")
plt.ylabel("RMSE")
model = create_nn(100, 50)
model.compile(optimizer='adam',
loss='mse',
metrics=[keras.metrics.RootMeanSquaredError()])
from tensorflow.keras.callbacks import EarlyStopping
earlystop = EarlyStopping(monitor='val_loss',
patience=10,verbose=1)
history = model.fit(X_train, y_train,
batch_size = 32,
epochs=200,
validation_data=(X_val, y_val),
callbacks=[earlystop],
verbose=2)
history_df = pd.DataFrame.from_dict(history.history)
sns.lineplot(data=history_df[['root_mean_squared_error', 'val_root_mean_squared_error']])
plt.xlabel("epochs")
plt.ylabel("RMSE")
###Output
_____no_output_____
###Markdown
Further techniques of interest:- batchnormalisation- dropout layers
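Below is a minimal sketch of how these could be added to the network defined above; the layer placement and the dropout rate are assumptions to experiment with, not a prescribed solution.
###Code
# Sketch: add BatchNormalization and Dropout to the small network from above.
def create_nn_regularised(nodes1=100, nodes2=50, dropout_rate=0.2):
    inputs = keras.Input(shape=(X_data.shape[1],), name="input")
    x = keras.layers.Dense(nodes1, 'relu', name="dense1")(inputs)
    x = keras.layers.BatchNormalization()(x)
    x = keras.layers.Dropout(dropout_rate)(x)
    x = keras.layers.Dense(nodes2, 'relu', name="dense2")(x)
    x = keras.layers.BatchNormalization()(x)
    x = keras.layers.Dropout(dropout_rate)(x)
    outputs = keras.layers.Dense(1)(x)
    return keras.Model(inputs=inputs, outputs=outputs, name="weather_prediction_model_regularised")
create_nn_regularised().summary()
###Output
_____no_output_____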
###Code
###Output
_____no_output_____ |
notebooks/train_uncertainty_with_existing.ipynb | ###Markdown
Fine tune the prediction part on 2017 data (the validation set).
###Code
# Load trained model.
file_model = './models/model_45.h5'
model_base = load_model(file_model)
# Fine tune.
opt = Adam(lr=0.00001)
model_base.compile(loss='mae', optimizer=opt, metrics=['mae', 'mse'])
history = model_base.fit(valid, epochs=5, verbose= 1 if INTERACTIVE else 2)
# Loss initially at mse 0.15, more than 0.13 because of dropout.
model_base.save('model_45_tuned.h5')
# Add back output that predicts uncertainty.
base_inputs = model_base.input
base_penultimate = model_base.get_layer('dense_7').output
base_output = model_base.output
model_std = load_model('./model_45_std.h5', custom_objects={'Gaussian_NLL':Gaussian_NLL, 'Gaussian_MSE': Gaussian_MSE})
x = model_std.get_layer('std_hidden')(base_penultimate)
std_output = model_std.get_layer('std_output')(x)
output = concatenate([base_output, std_output], axis=-1)
model = Model(inputs=base_inputs, outputs=output)
# Compile and save.
opt = Adam(lr=0.0001)
from keras_extras.losses.dirichlet import Gaussian_NLL, Gaussian_MSE
model.compile(loss=Gaussian_NLL, optimizer=opt, metrics=[Gaussian_MSE])
model.save('model_45_std_tuned.h5')
# Make predictions with uncertainty estimates.
from tqdm import tqdm
def predict(model, dataset):
ys, yhats = [], []
for batch in dataset:
inputs, y = batch
yhat = model.predict_on_batch(inputs)
if y is not None:
y = y.reshape(-1,2)
else:
y = np.zeros((yhat.shape[0], 1))
ys.append(y)
yhats.append(yhat)
yhat = np.vstack(yhats)
y = np.vstack(ys)
return y, yhat
def define_groups():
groups = {}
for isat in range(2):
for year in [2019]:
for imonth in range(12):
sat = 'A' if isat==1 else 'B'
month = imonth+1
name = f'S1{sat}_{year}{month:02d}S'
groups[name] = (isat, year, month)
return groups
# Dataset
filename = '/home/psadow/lts/preserve/stopa/sar_hs/data/alt/sar_hs_2019.h5' # Contains all processed 2019 data.
for group, (isat, year, month) in tqdm(define_groups().items()):
# Make predictions for this group.
test = sarhs.generator.SARGenerator(filename, subgroups=[group], batch_size=200)
#print(test._num_examples())
_, yhat = predict(model,test)
# The predictions should be in order.
# Include longitude, latitude, time, and file name.
df = pd.DataFrame()
df['hsNN'] = yhat[:,0]
df['hsNN_std'] = yhat[:,1]
df['timeSAR'] = test.h5file[group]['timeSAR'][:].flatten()
df['latSAR'] = test.h5file[group]['latlonSAR'][:, 0]
df['lonSAR'] = test.h5file[group]['latlonSAR'][:, 1]
Path("./predictions_std_tuned/").mkdir(parents=True, exist_ok=True)
df.to_csv(f'predictions_std_tuned/{group}.csv', index=False, )
print('Done')
print(df.columns)
###Output
100%|██████████| 24/24 [09:18<00:00, 23.27s/it] |
geoaidsvm/04_Apply_trained_model_in_ArcGIS_Pro.ipynb | ###Markdown
Apply a trained land classifier model in ArcGIS Pro This tutorial will assume that you have already provisioned a [Geo AI Data Science Virtual Machine](http://aka.ms/dsvm/GeoAI) and are using this Jupyter notebook while connected via remote desktop on that VM. If not, please see our guide to [provisioning and connecting to a Geo AI DSVM](https://github.com/Azure/pixel_level_land_classification/blob/master/geoaidsvm/setup.md).By default, this tutorial will make use of a model we have pre-trained for 250 epochs. If you have completed the associated notebook on [training a land classifier from scratch](./02_Train_a_land_classification_model_from_scratch.ipynb), you will have the option of using your own model file. Setup instructions Log into ArcGIS Pro[ArcGIS Pro](https://pro.arcgis.com) 2.1.1 is pre-installed on the Geo AI DSVM. If you are running this tutorial on another machine, you may need to perform these additional steps: install ArcGIS Pro, [install CNTK](https://docs.microsoft.com/cognitive-toolkit/setup-windows-python) in the Python environment ArcGIS Pro creates, and ensure that [ArcGIS Pro's Python environment](http://pro.arcgis.com/en/pro-app/arcpy/get-started/installing-python-for-arcgis-pro.htm) is on your system path.To log into ArcGIS Pro, follow these steps:1. Search for and launch the ArcGIS Pro program.1. When prompted, enter your username and password. - If you don't have an ArcGIS Pro license, see the instructions for getting a trial license in the [intro notebook](./01_Intro_to_pixel-level_land_classification.ipynb). Install the supporting filesIf you have not already completed the associated notebook on [training a land classifier from scratch](./02_Train_a_land_classification_model_from_scratch.ipynb), execute the following cell to download supporting files to your Geo AI DSVM's D: drive.
###Code
!AzCopy /Source:https://aiforearthcollateral.blob.core.windows.net/imagesegmentationtutorial /SourceSAS:"?st=2018-01-16T10%3A40%3A00Z&se=2028-01-17T10%3A40%3A00Z&sp=rl&sv=2017-04-17&sr=c&sig=KeEzmTaFvVo2ptu2GZQqv5mJ8saaPpeNRNPoasRS0RE%3D" /Dest:D:\pixellevellandclassification /S
print('Done.')
###Output
_____no_output_____
###Markdown
Apply a trained land classifier model in ArcGIS Pro This tutorial will assume that you have already provisioned a [Geo AI Data Science Virtual Machine](http://aka.ms/dsvm/GeoAI) and are using this Jupyter notebook while connected via remote desktop on that VM. If not, please see our guide to [provisioning and connecting to a Geo AI DSVM](https://github.com/Azure/pixel_level_land_classification/blob/master/geoaidsvm/setup.md).By default, this tutorial will make use of a model we have pre-trained for 250 epochs. If you have completed the associated notebook on [training a land classifier from scratch](./02_Train_a_land_classification_model_from_scratch.ipynb), you will have the option of using your own model file. Setup instructions Log into ArcGIS Pro[ArcGIS Pro](https://pro.arcgis.com) 2.1.1 is pre-installed on the Geo AI DSVM. If you are running this tutorial on another machine, you may need to perform these additional steps: install ArcGIS Pro, [install CNTK](https://docs.microsoft.com/cognitive-toolkit/setup-windows-python) in the Python environment ArcGIS Pro creates, and ensure that [ArcGIS Pro's Python environment](http://pro.arcgis.com/en/pro-app/arcpy/get-started/installing-python-for-arcgis-pro.htm) is on your system path.To log into ArcGIS Pro, follow these steps:1. Search for and launch the ArcGIS Pro program.1. When prompted, enter your username and password. - If you don't have an ArcGIS Pro license, see the instructions for getting a trial license in the [intro notebook](./01_Intro_to_pixel-level_land_classification.ipynb). Install the supporting filesIf you have not already completed the associated notebook on [training a land classifier from scratch](./02_Train_a_land_classification_model_from_scratch.ipynb), execute the following cell to download supporting files to your Geo AI DSVM's D: drive.
###Code
!AzCopy /Source:https://ai4ehackathons.blob.core.windows.net/landcovertutorial /SourceSAS:"?se=2020-04-06T06%3A59%3A00Z&sp=rl&sv=2018-03-28&sr=c&sig=YD6mbqnmYTW%2Bs6guVndjQSQ8NUcV8F9HY%2BhPNWiulIo%3D" /Dest:D:\pixellevellandclassification /S
print('Done.')
###Output
_____no_output_____ |
Capella-API-Open-Data-AME-search-order-and-download.ipynb | ###Markdown
Capella API: Search, Order, and Download Africa - Middle East Region
###Code
# Required libraries:
# requests
# json
# urllib
###Output
_____no_output_____
###Markdown
Your username and password must be saved in a .json file named 'credentials.json' and formatted as follows.{"username": "yourusername","password": "xxxxxxxxx"} Set up Project Variables
###Code
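# Illustrative sketch only (not part of the original notebook): create
# credentials.json in the format described above if it does not already exist.
# Replace the placeholder values with your own account details.
import json
import os
if not os.path.exists('credentials.json'):
    with open('credentials.json', 'w') as f:
        json.dump({"username": "yourusername", "password": "xxxxxxxxx"}, f)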
data_collection = ["capella-open-data"]
# Africa - Middle East AOI
aoi = {
"type": "Polygon",
"coordinates": [
[
[
-25.6640625,
28.304380682962783
],
[
-21.796875,
0.7031073524364909
],
[
2.109375,
-6.315298538330033
],
[
15.1171875,
-38.8225909761771
],
[
41.484375,
-38.54816542304657
],
[
89.296875,
18.646245142670608
],
[
114.2578125,
17.644022027872726
],
[
127.61718749999999,
30.44867367928756
],
[
138.515625,
50.736455137010665
],
[
120.234375,
56.75272287205736
],
[
69.60937499999999,
57.70414723434193
],
[
40.42968749999999,
50.28933925329178
],
[
2.8125,
39.90973623453719
],
[
-25.6640625,
28.304380682962783
]
]
]
}
###Output
_____no_output_____
###Markdown
Import required libraries, build a print utility function, assign API endpoints and load Credentials
###Code
import requests
import json
# JSON utility function
def p(data):
print(json.dumps(data, indent=2))
# Capella API endpoints
URL = 'https://api.capellaspace.com'
token = '/token'
collections = '/catalog/collections'
catsearch = '/catalog/search'
orders = '/orders/'
#Load username and password
with open('credentials.json') as f:
data = json.load(f)
username = data['username']
password = data['password']
###Output
_____no_output_____
###Markdown
Get and Print Access Token
###Code
#Get the token
r = requests.post(URL + token,
headers = {'Content-Type': 'application/x-www-form-urlencoded'}, auth=(username,password))
accesstoken = r.json()["accessToken"]
# Print the token
#print("Access Token: " + accesstoken)
headers = {'Authorization':'Bearer ' + accesstoken}
###Output
_____no_output_____
###Markdown
Print Available Collections
###Code
# See what collections are available
r = requests.get(URL + collections, headers=headers)
# Print the results
#p(r.json())
###Output
_____no_output_____
###Markdown
Post Search Filters, Print the Results
###Code
# Post search filters
filters = {
#"bbox": [-180,-90,180,90], # lower left coodinate and upper right coordinate, in decimal degrees
"intersects": aoi,
"limit": 1000, # overwrite the default pagination limit of 10, adjust as necessary
"collections": data_collection, #["capella-open-data"], # specify the desired collection "sentinel-s1-l2"
"sortby": "properties.datetime"
}
headers = {'Content-Type': 'application/json',
'Accept': 'application/geo+json', 'Authorization':'Bearer ' + accesstoken}
r = requests.post(URL + catsearch, json=filters, headers=headers)
# Inspect the results
#p(r.json())
###Output
_____no_output_____
###Markdown
Make and Post an Order
###Code
# Make an Order
features = r.json()["features"]
granulelist = []
# Loop over all the features from the response and add to an array for an order
for f in features:
item = {"CollectionId": f["collection"], "GranuleId": f["id"]}
granulelist.append(item)
cnt = len(features)
print(cnt)
myorder = {"Items": granulelist}
# Post the order and inspect the result
r = requests.post(URL + orders, json=myorder, headers=headers)
#p(r.json())
###Output
_____no_output_____
###Markdown
Get the STAC records with the signed URLs using the /download endpoint, Print the Result
###Code
myorderid = r.json()["orderId"]
r = requests.get(URL + orders + myorderid + '/download', headers=headers)
#p(r.json())
###Output
_____no_output_____
###Markdown
Download the Results
###Code
features = r.json()
basefp = 'C:/data/open_data/' # Local directory to save data
for feature in features:
filepath = feature["assets"]["HH"]["href"] # the second nested dictionary ("HH" here) must be changed for different assets
# e.g. filepath = feature["assets"]["metadata"]["href"] will return the url for the metadata file
filename = filepath[filepath.rfind("/")+1:]
sep = "?"
truncname = filename.split(sep, 1)[0]
outfp = basefp + truncname
    # Stream each file to disk in chunks with requests
    with requests.get(filepath, stream=True) as result:
result.raise_for_status()
with open(outfp, 'wb') as f:
for chunk in result.iter_content(chunk_size=10000000):
f.write(chunk)
###Output
_____no_output_____ |
lab 3/knn_using_sklearn.ipynb | ###Markdown
k-Nearest Neighbour Classifier using sklearn=================================
###Code
import pandas as pd
import numpy as np
from matplotlib import pyplot as plt
from sklearn.neighbors import KNeighborsClassifier
from sklearn.metrics import accuracy_score, confusion_matrix
#Loading data and preprocessing
url='http://archive.ics.uci.edu/ml/machine-learning-databases/iris/iris.data'
df=pd.read_csv(url, header=None)  # the data file has no header row
df.columns=['sepal_length','sepal_width','petal_length','petal_width','flower_type']
df['flower_type'] = df['flower_type'].astype('category')
df.flower_type = df.flower_type.cat.rename_categories([0,1,2])
D=df.values
# Get the labelled set
c1=D[:20,:]; c2=D[50:70,:]; c3=D[100:120,:]
trainSet = np.concatenate((c1,c2,c3),axis=0)
# Get the testing set
c1 = D[21:50,:]; c2=D[71:100,:]; c3=D[121:,:]
testSet = np.concatenate((c1,c2,c3),axis=0)
xTrain=trainSet[:,:-1]; yTrain=trainSet[:,-1]
xTest=testSet[:,:-1]; yTest=testSet[:,-1]
# create a knn classifier with K=3
clf = KNeighborsClassifier(n_neighbors=3)
clf.fit(xTrain, yTrain.astype(int))
# Make predictions
# the accuracy_score function has to be implemented by yourself
yPred=clf.predict(xTest)
acc=accuracy_score(yTest.astype(int), yPred.astype(int))
print('Accuracy with 3 neighbours: ',acc)
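# A minimal hand-written version of the accuracy computation referred to above
# (illustrative sketch; sklearn's accuracy_score is still what this notebook uses)
def my_accuracy(y_true, y_pred):
    y_true = np.asarray(y_true, dtype=int)
    y_pred = np.asarray(y_pred, dtype=int)
    return np.mean(y_true == y_pred)
print('Hand-written accuracy: ', my_accuracy(yTest, yPred))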
# the confusion matrix function has to be implemented by yourself
def plot_conf_mat(lTrue, lPred, title):
""" A function for plotting the confusion matrix given true and predicted labels."""
cm = confusion_matrix(lTrue.astype(int), lPred.astype(int))
print(cm)
fig = plt.figure()
ax = fig.add_subplot(111)
cax = ax.matshow(cm)
plt.title(title)
plt.xlabel('Predicted')
plt.ylabel('True')
plt.show()
plot_conf_mat(yTest, yPred, 'K=3')
###Output
[[28 0 0]
[ 0 28 1]
[ 0 5 24]]
|
Movie_Recommendation_System (1).ipynb | ###Markdown
Movie Recommendation system **Importing Libraries**
###Code
import pandas as pd
import numpy as np
from scipy.sparse import csr_matrix
from sklearn.neighbors import NearestNeighbors
import matplotlib.pyplot as plt
import seaborn as sns
movies = pd.read_csv("/content/drive/MyDrive/Colab Notebooks/Mini Project 6th Sem/movies.csv")
ratings = pd.read_csv("/content/drive/MyDrive/Colab Notebooks/Mini Project 6th Sem/ratings.csv")
ratings.head()
ratings.shape
movies.head()
movies.shape
final_dataset = ratings.pivot(index='movieId',columns='userId',values='rating')
final_dataset.head()
final_dataset.fillna(0,inplace=True)
final_dataset.head()
###Output
_____no_output_____
###Markdown
**Filtering The Dataset** **1. To qualify a movie, a minimum of 10 users should have voted a movie. 2.To qualify a user, a minimum of 50 movies should have voted by the user**
###Code
no_user_voted = ratings.groupby('movieId')['rating'].agg('count')
no_movies_voted = ratings.groupby('userId')['rating'].agg('count')
f,ax = plt.subplots(1,1,figsize=(16,4))
# ratings['rating'].plot(kind='hist')
plt.scatter(no_user_voted.index,no_user_voted,color='purple')
plt.axhline(y=10,color='b')
plt.xlabel('MovieId')
plt.ylabel('No. of users voted')
plt.show()
###Output
_____no_output_____
###Markdown
**Making the necessary modifications as per the threshold set.**
###Code
final_dataset = final_dataset.loc[no_user_voted[no_user_voted > 10].index,:]
###Output
_____no_output_____
###Markdown
**Let’s visualize the number of votes by each user with our threshold of 50.**
###Code
f,ax = plt.subplots(1,1,figsize=(16,4))
plt.scatter(no_movies_voted.index,no_movies_voted,color='blue')
plt.axhline(y=50,color='r')
plt.xlabel('UserId')
plt.ylabel('No. of votes by user')
plt.show()
###Output
_____no_output_____
###Markdown
**Making the necessary modifications as per the threshold set.**
###Code
final_dataset=final_dataset.loc[:,no_movies_voted[no_movies_voted > 50].index]
final_dataset
###Output
_____no_output_____
###Markdown
**Removing sparsity**
###Code
csr_data = csr_matrix(final_dataset.values)
final_dataset.reset_index(inplace=True)
###Output
_____no_output_____
###Markdown
**Making the movie recommendation system model** **We will use the KNN algorithm with the cosine distance metric to compute similarity, which is fast and generally preferable to the Pearson correlation coefficient.**
###Code
knn = NearestNeighbors(metric='cosine', algorithm='brute', n_neighbors=21, n_jobs=-1)
knn.fit(csr_data)
###Output
_____no_output_____
###Markdown
**Recommendation Function**
###Code
def get_movie_recommendation(movie_name):
n_movies_to_reccomend = 10
movie_list = movies[movies['title'].str.contains(movie_name)]
if len(movie_list):
movie_idx= movie_list.iloc[0]['movieId']
movie_idx = final_dataset[final_dataset['movieId'] == movie_idx].index[0]
distances , indices = knn.kneighbors(csr_data[movie_idx],n_neighbors=n_movies_to_reccomend+1)
rec_movie_indices = sorted(list(zip(indices.squeeze().tolist(),distances.squeeze().tolist())),key=lambda x: x[1])[:0:-1]
recommend_frame = []
for val in rec_movie_indices:
movie_idx = final_dataset.iloc[val[0]]['movieId']
idx = movies[movies['movieId'] == movie_idx].index
recommend_frame.append({'Title':movies.iloc[idx]['title'].values[0],'Distance':val[1]})
df = pd.DataFrame(recommend_frame,index=range(1,n_movies_to_reccomend+1))
return df
else:
return "Movie not found."
get_movie_recommendation('Avatar')
###Output
_____no_output_____
###Markdown
**Adding Dataset**
###Code
dataset_1 = pd.read_csv('/content/drive/MyDrive/Colab Notebooks/Mini Project 6th Sem/tmdb_5000_credits.csv')
d_2 = pd.read_csv('/content/drive/MyDrive/Colab Notebooks/Mini Project 6th Sem/tmdb_5000_movies.csv')
dataset_1.head()
dataset_1.columns = ['id','tittle','cast','crew']
# In the above line we are changing the column name of Movie_id to id so that we can merge
d_2 = d_2.merge(dataset_1,on='id')
d_2.head(2)
# Taking a copy of the file for cleaning the data
dataset_2 = d_2.copy()
dataset_2.shape
# replace zero values in the numerical columns with the column mean
dataset_2.budget = dataset_2.budget.replace(0,dataset_2.budget.mean())
dataset_2.popularity = dataset_2.popularity.replace('0', dataset_2.popularity.mean())
dataset_2.revenue = dataset_2.revenue.replace(0, dataset_2.revenue.mean())
dataset_2.runtime = dataset_2.runtime.replace('0', dataset_2.runtime.mean())
dataset_2.vote_average = dataset_2.vote_average.replace('0', dataset_2.vote_average.mean())
dataset_2.vote_count = dataset_2.vote_count.replace(0, dataset_2.vote_count.mean())
###Output
_____no_output_____
###Markdown
Popularity Based Recommendation System -
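The `weighted_rating` function defined in the next cell implements the IMDB weighted-rating formula $WR = \frac{v}{v+m}R + \frac{m}{v+m}C$, where $v$ is a movie's vote count, $m$ is the minimum vote count required to qualify (the 70th percentile computed below), $R$ is the movie's average rating and $C$ is the mean rating over all movies.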
###Code
C= dataset_2['vote_average'].mean()
C
m= dataset_2['vote_count'].quantile(0.7)
m
#Creating qualified movies list
qualified_movies = dataset_2.copy().loc[dataset_2['vote_count'] >= m]
qualified_movies.shape
def weighted_rating(x, m=m, C=C):
v = x['vote_count']
R = x['vote_average']
# Calculation based on the IMDB formula
return (v/(v+m) * R) + (m/(m+v) * C)
# Define a new feature 'score' and calculate its value with `weighted_rating()`
qualified_movies['score'] = qualified_movies.apply(weighted_rating, axis=1)
#Sort movies based on score calculated above
qualified_movies = qualified_movies.sort_values('score', ascending=False)
#Print the top 10 movies
qualified_movies[['title', 'vote_count', 'vote_average', 'score']].head(10)
popular = dataset_2.sort_values('popularity', ascending=False)
import matplotlib.pyplot as plt
import seaborn as sns
plt.figure(figsize=(12,4))
plt.figure(figsize=(16,6))
ax = sns.barplot(x=popular['popularity'].head(10), y=popular['original_title'].head(10), data=popular, palette='deep')
plt.title('"Most Popular" Movies ', weight='bold')
plt.xlabel('Popularity Score', weight='bold')
plt.ylabel('Movie Title', weight='bold')
plt.savefig('popular_movies.png')
###Output
_____no_output_____
###Markdown
Content Based Filtering
###Code
#Import TfIdfVectorizer from scikit-learn
from sklearn.feature_extraction.text import TfidfVectorizer
#Define a TF-IDF Vectorizer Object. Remove all english stop words such as 'the', 'a'
tfidf = TfidfVectorizer(stop_words='english')
#Replace NaN with an empty string
dataset_2['overview'] = dataset_2['overview'].fillna('')
#Construct the required TF-IDF matrix by fitting and transforming the data
tfidf_matrix = tfidf.fit_transform(dataset_2['overview'])
#The shape of tfidf_matrix
tfidf_matrix.shape
###Output
_____no_output_____
###Markdown
Cosine Similarity **Using Sigmoid kernel**
###Code
# Import sigmoid_kernel
from sklearn.metrics.pairwise import sigmoid_kernel
# Compute the pairwise similarity matrix using the sigmoid kernel
cosine_sim = sigmoid_kernel(tfidf_matrix, tfidf_matrix)
#Construct a reverse map of indices and movie titles
indices = pd.Series(dataset_2.index, index=dataset_2['title']).drop_duplicates()
# Function that takes in movie title as input and outputs most similar movies
def get_recommendations(title, cosine_sim=cosine_sim):
# Get the index of the movie that matches the title
idx = indices[title]
    # Get the pairwise similarity scores of all movies with that movie
sim_scores = list(enumerate(cosine_sim[idx]))
# Sort the movies based on the similarity scores
sim_scores = sorted(sim_scores, key=lambda x: x[1], reverse=True)
# Get the scores of the 10 most similar movies
sim_scores = sim_scores[1:11]
# Get the movie indices
movie_indices = [i[0] for i in sim_scores]
# Return the top 10 most similar movies
return dataset_2['title'].iloc[movie_indices]
get_recommendations('The Matrix')
###Output
_____no_output_____ |
lijin-THU:notes-python/02-python-essentials/02.18-modules-and-packages.ipynb | ###Markdown
Modules and Packages Modules Python treats every file ending in `.py` as a Python code file. Consider the following script, `ex1.py`:
###Code
%%writefile ex1.py
PI = 3.1416
def sum(lst):
tot = lst[0]
for value in lst[1:]:
tot = tot + value
return tot
w = [0, 1, 2, 3]
print sum(w), PI
###Output
Overwriting ex1.py
###Markdown
We can run it:
###Code
%run ex1.py
###Output
6 3.1416
###Markdown
This script can also be treated as a module. We can load and execute it with the `import` keyword (this requires `ex1.py` to be in the current working directory):
###Code
import ex1
ex1
###Output
_____no_output_____
###Markdown
On import, **Python** executes everything in the module once. All of the variables defined in `ex1.py` are loaded into the current environment, but they must be viewed or modified as ex1.variable_name:
###Code
print ex1.PI
ex1.PI = 3.141592653
print ex1.PI
###Output
3.141592653
###Markdown
We can also call functions inside the module as ex1.function_name:
###Code
print ex1.sum([2, 3, 4])
###Output
9
###Markdown
For efficiency, **Python** only loads a module once. When an already-loaded module is imported again, Python does not actually reload it, even if the module's contents have changed. For example, re-importing `ex1` here does not execute the `print` statement in `ex1.py`:
###Code
import ex1
###Output
_____no_output_____
###Markdown
When a module needs to be re-imported, `reload` can be used to force it to be reloaded, for example:
###Code
reload(ex1)
###Output
6 3.1416
###Markdown
Delete the file generated earlier:
###Code
import os
os.remove('ex1.py')
###Output
_____no_output_____
###Markdown
The `__name__` attribute Sometimes we want a `.py` file to be usable both as a script and as a module. In that case we can use the `__name__` attribute: its value is `'__main__'` only when the file is executed as a script, so we can write:
###Code
%%writefile ex2.py
PI = 3.1416
def sum(lst):
""" Sum the values in a list
"""
tot = 0
for value in lst:
tot = tot + value
return tot
def add(x, y):
" Add two values."
a = x + y
return a
def test():
w = [0,1,2,3]
assert(sum(w) == 6)
print 'test passed.'
if __name__ == '__main__':
test()
###Output
Writing ex2.py
###Markdown
Run the file:
###Code
%run ex2.py
###Output
test passed.
###Markdown
When imported as a module, `test()` is not executed:
###Code
import ex2
###Output
_____no_output_____
###Markdown
But we can still use the variables it defines:
###Code
ex2.PI
###Output
_____no_output_____
###Markdown
Using an alias:
###Code
import ex2 as e2
e2.PI
###Output
_____no_output_____
###Markdown
Other ways to import We can import variables from a module:
###Code
from ex2 import add, PI
###Output
_____no_output_____
###Markdown
After using `from`, we can use `add` and `PI` directly:
###Code
add(2, 3)
###Output
_____no_output_____
###Markdown
Or use `*` to import all variables:
###Code
from ex2 import *
add(3, 4.5)
###Output
_____no_output_____
###Markdown
This style of import is not really recommended, because if you are not sure exactly what is being imported, you may overwrite some existing functions. Delete the file:
###Code
import os
os.remove('ex2.py')
###Output
_____no_output_____ |
Python Data Science Toolbox -Part 1/Writing your own functions/06.Functions that return multiple values.ipynb | ###Markdown
In the previous exercise, you constructed tuples, assigned tuples to variables, and unpacked tuples. Here you will return multiple values from a function using tuples. Let's now update our shout() function to return multiple values. Instead of returning just one string, we will return two strings with the string !!! concatenated to each.Note that the return statement return x, y has the same result as return (x, y): the former actually packs x and y into a tuple under the hood! Modify the function header such that the function name is now shout_all, and it accepts two parameters, word1 and word2, in that order. Concatenate the string '!!!' to each of word1 and word2 and assign to shout1 and shout2, respectively. Construct a tuple shout_words, composed of shout1 and shout2. Call shout_all() with the strings 'congratulations' and 'you' and assign the result to yell1 and yell2 (remember, shout_all() returns 2 variables!).
###Code
# Define shout_all with parameters word1 and word2
def shout_all(word1, word2):
# Concatenate word1 with '!!!': shout1
shout1 = word1+ "!!!"
# Concatenate word2 with '!!!': shout2
shout2 = word2+ "!!!"
# Construct a tuple with shout1 and shout2: shout_words
shout_words= (shout1, shout2)
# Return shout_words
return shout_words
# Pass 'congratulations' and 'you' to shout_all(): yell1, yell2
yell1, yell2 = shout_all('congratulations', 'you')
# Print yell1 and yell2
print(yell1)
print(yell2)
###Output
congratulations!!!
you!!!
|
nbs/dl2/12b_lm_pretrain.ipynb | ###Markdown
Pretraining on WT103
###Code
%load_ext autoreload
%autoreload 2
%matplotlib inline
#export
from exp.nb_12a import *
###Output
_____no_output_____
###Markdown
Data One time download [Jump_to lesson 12 video](https://course19.fast.ai/videos/?lesson=12&t=7410)
###Code
#path = datasets.Config().data_path()
#version = '103' #2
#! wget https://s3.amazonaws.com/research.metamind.io/wikitext/wikitext-{version}-v1.zip -P {path}
#! unzip -q -n {path}/wikitext-{version}-v1.zip -d {path}
#! mv {path}/wikitext-{version}/wiki.train.tokens {path}/wikitext-{version}/train.txt
#! mv {path}/wikitext-{version}/wiki.valid.tokens {path}/wikitext-{version}/valid.txt
#! mv {path}/wikitext-{version}/wiki.test.tokens {path}/wikitext-{version}/test.txt
###Output
_____no_output_____
###Markdown
Split the articles: WT103 is given as one big text file and we need to chunk it in different articles if we want to be able to shuffle them at the beginning of each epoch.
###Code
path = datasets.Config().data_path()/'wikitext-103'
def istitle(line):
return len(re.findall(r'^ = [^=]* = $', line)) != 0
def read_wiki(filename):
articles = []
with open(filename, encoding='utf8') as f:
lines = f.readlines()
current_article = ''
for i,line in enumerate(lines):
current_article += line
if i < len(lines)-2 and lines[i+1] == ' \n' and istitle(lines[i+2]):
current_article = current_article.replace('<unk>', UNK)
articles.append(current_article)
current_article = ''
current_article = current_article.replace('<unk>', UNK)
articles.append(current_article)
return articles
train = TextList(read_wiki(path/'train.txt'), path=path) #+read_file(path/'test.txt')
valid = TextList(read_wiki(path/'valid.txt'), path=path)
len(train), len(valid)
sd = SplitData(train, valid)
proc_tok,proc_num = TokenizeProcessor(),NumericalizeProcessor()
ll = label_by_func(sd, lambda x: 0, proc_x = [proc_tok,proc_num])
pickle.dump(ll, open(path/'ld.pkl', 'wb'))
ll = pickle.load( open(path/'ld.pkl', 'rb'))
bs,bptt = 128,70
data = lm_databunchify(ll, bs, bptt)
vocab = ll.train.proc_x[-1].vocab
len(vocab)
###Output
_____no_output_____
###Markdown
Model
###Code
dps = np.array([0.1, 0.15, 0.25, 0.02, 0.2]) * 0.2
tok_pad = vocab.index(PAD)
emb_sz, nh, nl = 300, 300, 2
model = get_language_model(len(vocab), emb_sz, nh, nl, tok_pad, *dps)
cbs = [partial(AvgStatsCallback,accuracy_flat),
CudaCallback, Recorder,
partial(GradientClipping, clip=0.1),
partial(RNNTrainer, α=2., β=1.),
ProgressCallback]
learn = Learner(model, data, cross_entropy_flat, lr=5e-3, cb_funcs=cbs, opt_func=adam_opt())
lr = 5e-3
sched_lr = combine_scheds([0.3,0.7], cos_1cycle_anneal(lr/10., lr, lr/1e5))
sched_mom = combine_scheds([0.3,0.7], cos_1cycle_anneal(0.8, 0.7, 0.8))
cbsched = [ParamScheduler('lr', sched_lr), ParamScheduler('mom', sched_mom)]
learn.fit(10, cbs=cbsched)
torch.save(learn.model.state_dict(), path/'pretrained.pth')
pickle.dump(vocab, open(path/'vocab.pkl', 'wb'))
###Output
_____no_output_____
###Markdown
Pretraining on WT103
###Code
%load_ext autoreload
%autoreload 2
%matplotlib inline
#export
from exp.nb_12a import *
###Output
_____no_output_____
###Markdown
Data One time download
###Code
#path = datasets.Config().data_path()
#version = '103' #2
#! wget https://s3.amazonaws.com/research.metamind.io/wikitext/wikitext-{version}-v1.zip -P {path}
#! unzip -q -n {path}/wikitext-{version}-v1.zip -d {path}
#! mv {path}/wikitext-{version}/wiki.train.tokens {path}/wikitext-{version}/train.txt
#! mv {path}/wikitext-{version}/wiki.valid.tokens {path}/wikitext-{version}/valid.txt
#! mv {path}/wikitext-{version}/wiki.test.tokens {path}/wikitext-{version}/test.txt
###Output
_____no_output_____
###Markdown
Split the articles: WT103 is given as one big text file and we need to chunk it in different articles if we want to be able to shuffle them at the beginning of each epoch.
###Code
path = datasets.Config().data_path()/'wikitext-103'
def istitle(line):
return len(re.findall(r'^ = [^=]* = $', line)) != 0
def read_wiki(filename):
articles = []
with open(filename, encoding='utf8') as f:
lines = f.readlines()
current_article = ''
for i,line in enumerate(lines):
current_article += line
if i < len(lines)-2 and lines[i+1] == ' \n' and istitle(lines[i+2]):
current_article = current_article.replace('<unk>', UNK)
articles.append(current_article)
current_article = ''
current_article = current_article.replace('<unk>', UNK)
articles.append(current_article)
return articles
train = TextList(read_wiki(path/'train.txt'), path=path) #+read_file(path/'test.txt')
valid = TextList(read_wiki(path/'valid.txt'), path=path)
len(train), len(valid)
sd = SplitData(train, valid)
proc_tok,proc_num = TokenizeProcessor(),NumericalizeProcessor()
ll = label_by_func(sd, lambda x: 0, proc_x = [proc_tok,proc_num])
pickle.dump(ll, open(path/'ld.pkl', 'wb'))
ll = pickle.load( open(path/'ld.pkl', 'rb'))
bs,bptt = 128,70
data = lm_databunchify(ll, bs, bptt)
vocab = ll.train.proc_x[-1].vocab
len(vocab)
###Output
_____no_output_____
###Markdown
Model
###Code
dps = np.array([0.1, 0.15, 0.25, 0.02, 0.2]) * 0.2
tok_pad = vocab.index(PAD)
emb_sz, nh, nl = 300, 300, 2
model = get_language_model(len(vocab), emb_sz, nh, nl, tok_pad, *dps)
cbs = [partial(AvgStatsCallback,accuracy_flat),
CudaCallback, Recorder,
partial(GradientClipping, clip=0.1),
partial(RNNTrainer, α=2., β=1.),
ProgressCallback]
learn = Learner(model, data, cross_entropy_flat, lr=5e-3, cb_funcs=cbs, opt_func=adam_opt())
lr = 5e-3
sched_lr = combine_scheds([0.3,0.7], cos_1cycle_anneal(lr/10., lr, lr/1e5))
sched_mom = combine_scheds([0.3,0.7], cos_1cycle_anneal(0.8, 0.7, 0.8))
cbsched = [ParamScheduler('lr', sched_lr), ParamScheduler('mom', sched_mom)]
learn.fit(10, cbs=cbsched)
torch.save(learn.model.state_dict(), path/'pretrained.pth')
pickle.dump(vocab, open(path/'vocab.pkl', 'wb'))
###Output
_____no_output_____
###Markdown
Pretraining on WT103
###Code
%load_ext autoreload
%autoreload 2
%matplotlib inline
#export
from exp.nb_12a import *
###Output
_____no_output_____
###Markdown
Data One time download [Jump_to lesson 12 video](https://course.fast.ai/videos/?lesson=12&t=7410)
###Code
#path = datasets.Config().data_path()
#version = '103' #2
#! wget https://s3.amazonaws.com/research.metamind.io/wikitext/wikitext-{version}-v1.zip -P {path}
#! unzip -q -n {path}/wikitext-{version}-v1.zip -d {path}
#! mv {path}/wikitext-{version}/wiki.train.tokens {path}/wikitext-{version}/train.txt
#! mv {path}/wikitext-{version}/wiki.valid.tokens {path}/wikitext-{version}/valid.txt
#! mv {path}/wikitext-{version}/wiki.test.tokens {path}/wikitext-{version}/test.txt
###Output
_____no_output_____
###Markdown
Split the articles: WT103 is given as one big text file and we need to chunk it in different articles if we want to be able to shuffle them at the beginning of each epoch.
###Code
path = datasets.Config().data_path()/'wikitext-103'
def istitle(line):
return len(re.findall(r'^ = [^=]* = $', line)) != 0
def read_wiki(filename):
articles = []
with open(filename, encoding='utf8') as f:
lines = f.readlines()
current_article = ''
for i,line in enumerate(lines):
current_article += line
if i < len(lines)-2 and lines[i+1] == ' \n' and istitle(lines[i+2]):
current_article = current_article.replace('<unk>', UNK)
articles.append(current_article)
current_article = ''
current_article = current_article.replace('<unk>', UNK)
articles.append(current_article)
return articles
train = TextList(read_wiki(path/'train.txt'), path=path) #+read_file(path/'test.txt')
valid = TextList(read_wiki(path/'valid.txt'), path=path)
len(train), len(valid)
sd = SplitData(train, valid)
proc_tok,proc_num = TokenizeProcessor(),NumericalizeProcessor()
ll = label_by_func(sd, lambda x: 0, proc_x = [proc_tok,proc_num])
pickle.dump(ll, open(path/'ld.pkl', 'wb'))
ll = pickle.load( open(path/'ld.pkl', 'rb'))
bs,bptt = 128,70
data = lm_databunchify(ll, bs, bptt)
vocab = ll.train.proc_x[-1].vocab
len(vocab)
###Output
_____no_output_____
###Markdown
Model
###Code
dps = np.array([0.1, 0.15, 0.25, 0.02, 0.2]) * 0.2
tok_pad = vocab.index(PAD)
emb_sz, nh, nl = 300, 300, 2
model = get_language_model(len(vocab), emb_sz, nh, nl, tok_pad, *dps)
cbs = [partial(AvgStatsCallback,accuracy_flat),
CudaCallback, Recorder,
partial(GradientClipping, clip=0.1),
partial(RNNTrainer, α=2., β=1.),
ProgressCallback]
learn = Learner(model, data, cross_entropy_flat, lr=5e-3, cb_funcs=cbs, opt_func=adam_opt())
lr = 5e-3
sched_lr = combine_scheds([0.3,0.7], cos_1cycle_anneal(lr/10., lr, lr/1e5))
sched_mom = combine_scheds([0.3,0.7], cos_1cycle_anneal(0.8, 0.7, 0.8))
cbsched = [ParamScheduler('lr', sched_lr), ParamScheduler('mom', sched_mom)]
learn.fit(10, cbs=cbsched)
torch.save(learn.model.state_dict(), path/'pretrained.pth')
pickle.dump(vocab, open(path/'vocab.pkl', 'wb'))
###Output
_____no_output_____ |
examples/statebuilder_examples.ipynb | ###Markdown
--- 1. The BasicsThe Statebuilder is a submodule that helps produce custom Neuroglancer states based on data in the form of Pandas dataframes. Neuroglancer organizes data sources into layers. Layers come in three types, image, segmentation, and annotation. Each has different properties and functions. The state builder lets the user define a set of rules for initializing layers and how to map data columns to selections and annotations. Let's see the simplest example in use now.
###Code
from nglui.statebuilder import *
###Output
_____no_output_____
###Markdown
Image LayersImage layers in Neuroglancer are pretty simple. An image has a name (by default 'img' here) and a source, which is typically a path to a cloud hosted 3d image volume. We define an image layer with an ImageLayerConfig that lets the user set these key parameters. We are going to use the public Layer 2/3 EM dataset at [Microns Explorer](https://layer23.microns-explorer.org) as an example.
###Code
img_source = 'precomputed://gs://neuroglancer/pinky100_v0/son_of_alignment_v15_rechunked'
img_layer = ImageLayerConfig(name='layer23',
source=img_source,
)
###Output
_____no_output_____
###Markdown
Segmentation LayersSegmentation layers in Neuroglancer are more complex, since each object has a unique id and can be selected, loading the object and its mesh into the neuroglancer state. Like images, a segmentation layer has a name and a source. Neuroglancer supports two types of segmentation volume, 'precomputed' and 'graphene'. Precomputed is what we call a "flat" segmentation, in that the segmentation is frozen into an unchanging state. Flat segmentations are fast, but cannot be edited to fix mistakes. Graphene segmentations are dynamic, using an efficient graph data representation to allow edits to the segmentation state. For the most part, users should not need to care about the difference.Segmentation layers are configured by a SegmentationLayerConfig. At a minimum, a SegmentationLayerConfig has the same setup as an ImageLayerConfig.
###Code
seg_source = 'precomputed://gs://microns_public_datasets/pinky100_v185/seg'
seg_layer = SegmentationLayerConfig(name = 'seg',
source = seg_source)
###Output
_____no_output_____
###Markdown
Annotation LayersAnnotation layers let a user define various spatial data annotations. We can define annotation layers with AnnotationLayerConfig objects. While annotation layers have a name, they don't have a source. We'll discuss how to map data to annotations later, but for now we will add a blank annotation layer.
###Code
anno_layer = AnnotationLayerConfig(name='annos')
###Output
_____no_output_____
###Markdown
StateBuilders and state renderingAt it's most basic, the StateBuilder is initialized with layer configs for each layer the user wants. A state isn't actually generated until the user calls `render_state`. While the function can also take a dataframe, it works with no argument to generate a default state. The default output is a url string that specifies the neuroglancer state, however other options are available with the optional `return_as` parameter.
###Code
sb = StateBuilder(layers=[img_layer, seg_layer, anno_layer])
sb.render_state()
###Output
_____no_output_____
###Markdown
Using `return_as="html"` provides a link, useful for interactive notebooks.
###Code
sb.render_state(return_as='html')
###Output
_____no_output_____
###Markdown
Using `sb.render_state(return_as='json')` returns the JSON state as a string. This can be pasted directly into the neuroglancer JSON state.
###Code
sb.render_state(return_as='json')
###Output
_____no_output_____
###Markdown
Using `sb.render_state(return_as='dict')` returns the state as a dictionary, useful for inspection and debugging.
###Code
sb.render_state(return_as='dict')
###Output
_____no_output_____
###Markdown
Finally, using `sb.render_state(return_as='viewer')` returns an EasyViewer object, which can be further manipulated (see documentation).
###Code
vwr = sb.render_state(return_as='viewer')
vwr.state
###Output
_____no_output_____
###Markdown
2. Data-responsive state generationThe key feature of the StateBuilder is to use data to add selected objects or annotations to a Neuroglancer state. Each layer has rules for how to map dataframe columns to its state. This is designed to be useful for consistent mapping of queries or analysis directly into data exploration. Selected segmentationsSegmentation layers principally control what what objects are selected. Objects are specified by root id and `SegmentationLayerConfig` can hold rules for how to select data.The `soma_df` example dataframe describes all excitatory neurons in the layer23 dataset. Each row is a different cell and it is described by a number of columns, of which 'pt_root_id' has the root id for each excitatory neuron.
###Code
import pandas as pd
soma_df = pd.read_hdf('example_data.h5', 'soma')
soma_df.head()
###Output
_____no_output_____
###Markdown
One common task is to all the ids in a column of the dataframe, perhaps as a result of a query or some bit of analysis. We can tell the `SegmentationLayerConfig` which layer (or list of layers) to use with the `selected_ids_column` argument.
###Code
seg_layer = SegmentationLayerConfig(name = 'seg',
source = seg_source,
selected_ids_column='pt_root_id')
sb = StateBuilder(layers=[img_layer, seg_layer])
###Output
_____no_output_____
###Markdown
Now our statebuilder object also needs data. The `render_data` method can always take a dataframe. To avoid selecting hundreds of neurons, let's just take the first five rows of the dataframe.
###Code
sb.render_state(soma_df.head(), return_as='html')
###Output
_____no_output_____
###Markdown
For some purposes, it can also be useful to force certain objects to be selected no matter the data, for example if you want to show various points in space relative to a specific neuron. For that, we can use the `fixed_ids` argument. Here, we use it to load neuron `648518346349537042` no matter the data. Both data-driven and fixed ids can be used together.
###Code
seg_layer = SegmentationLayerConfig(name = 'seg',
source = seg_source,
selected_ids_column='pt_root_id',
fixed_ids=[648518346349537042])
sb = StateBuilder(layers=[img_layer, seg_layer, anno_layer])
###Output
_____no_output_____
###Markdown
Compare the output with and without data:
###Code
sb.render_state(return_as='html', link_text='State without data')
sb.render_state(soma_df.head(2), return_as='html', link_text='State with selection data')
###Output
_____no_output_____
###Markdown
Assigning colors to cellsYou can also provide a column that specifies the color for each root id with the `color_column` argument. Colors in Neuroglancer are specified as an RGB hex string, for example a pure red would be `ff0000`.In the example below, we specify the color list directly. Data visualization packages typically have methods to convert numeric RGB vectors to hex, for instance `matplotlib.colors.to_hex` for `matplotlib`.
###Code
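# Optional illustration (not in the original notebook): matplotlib can convert an
# RGB tuple into the hex string format Neuroglancer expects.
from matplotlib.colors import to_hex
to_hex((1.0, 0.0, 0.0))  # returns '#ff0000'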
# First we have to add a column with legitimate colors
reds = ['#fdd4c2', '#fca082', '#fb694a', '#e32f27', '#b11218']
soma_colored_df = soma_df.head(5).copy()
soma_colored_df['color'] = reds
# Next we specify the color column when defining the segmentation layer.
seg_layer = SegmentationLayerConfig(name = 'seg', source=seg_source, selected_ids_column='pt_root_id', color_column='color')
sb = StateBuilder(layers=[img_layer, seg_layer])
sb.render_state(soma_colored_df, return_as='html', link_text='State with color')
###Output
_____no_output_____
###Markdown
Data-driven annotationsNeuroglancer offers Annotation layers, which can put different kinds of markers in the volume. There are three main marker types:* Points* Lines* SpheresEach annotation layer can hold any collection of types, however colors and other organizational properties are shared among all annotations in one layer. To make a new annotation layer, we use an `AnnotationLayerConfig`. Unlike segmentation and image layers, there is no data source, but the data mapping options are more rich. Each annotation type has a mapper class (PointMapper, LineMapper, SphereMapper) to designate the rules. Each AnnotationLayerConfig works for a single annotation layer, but can take an arbitrary number of mappers. Point annotationsPoint annotations are a simple point in 3d space. Let's use the positions from the soma_df above. The only thing you need to set a point mapper is the column name, which should reference a column of 3-element locations in units of voxels.
###Code
points = PointMapper(point_column='pt_position')
anno_layer = AnnotationLayerConfig(name='annos',
mapping_rules=points )
# Make a basic segmentation source
seg_layer = SegmentationLayerConfig(seg_source)
sb = StateBuilder([img_layer, seg_layer, anno_layer])
sb.render_state(soma_df, return_as='html')
###Output
_____no_output_____
###Markdown
Line annotationsLine annotations differ from point annotations only by requiring two columns rather than one. The two columns set the beginning and end points of the line, so both columns should contain three-element points. Let's use the example synapse dataframe as an example. We're going to use the `pre_pt_root_id` field to select the neuron ids to show and use the lines to indicate their outgoing synapses, which go from `ctr_pt_position`, a point on the synaptic cleft itself, to `post_pt_position`, a point somewhat inside the target neuron.
###Code
lines = LineMapper(point_column_a='ctr_pt_position', point_column_b='post_pt_position')
anno_layer = AnnotationLayerConfig(name='synapses',
mapping_rules=lines)
# Make a segmentation source with a selected ids column
seg_layer = SegmentationLayerConfig(seg_source, selected_ids_column='pre_pt_root_id')
sb = StateBuilder([img_layer, seg_layer, anno_layer])
sb.render_state(pre_syn_df, return_as='html')
###Output
_____no_output_____
###Markdown
Sphere annotationsLike line annotations, sphere annotations take two columns. The first is the center point (and thus a point in space) while the second is the radius in voxels (x/y only), and thus numeric. We're going to use the soma positions for the example, using the `pt_position` for the centers. Since we don't have radius data in the frame, we will add a column with a random radius.
###Code
import numpy as np
soma_df['radius'] = np.random.normal(1500, 250, len(soma_df)).astype(int)
spheres = SphereMapper(center_column='pt_position', radius_column='radius')
anno_layer = AnnotationLayerConfig(name='soma', mapping_rules=spheres)
# Make a basic segmentation layer
seg_layer = SegmentationLayerConfig(seg_source)
sb = StateBuilder([img_layer, seg_layer, anno_layer])
sb.render_state(soma_df, return_as='html')
###Output
_____no_output_____
###Markdown
Enriching annotationsAnnotations can be enriched with metadata. There are three main types:1. Descriptions — Each annotation has a free text field.2. Linked Segmentations — Each annotation can have one or more linked segmentation ids. These can be made to automatically load on annotation selection.3. Tags — Neuroglancer states can have discrete tags that are useful for quickly categorizing annotations. Each annotation layer has a list of tags, and each annotation can have any number of tags. DescriptionsDescriptions need a simple free text field (or None). Any annotation mapper can take a description column with strings that is then displayed with each annotation in Neuroglancer. After running the following code, right click on the annotation layer and you can see that each of the point annotations in the list has an 'e' or 'i' letter underneath it.
###Code
points = PointMapper(point_column='pt_position', description_column='cell_type')
anno_layer = AnnotationLayerConfig(name='annos',
mapping_rules=points)
# Basic segmentation layer
seg_layer = SegmentationLayerConfig(seg_source)
sb = StateBuilder([img_layer, seg_layer, anno_layer])
sb.render_state(soma_df.head(), return_as='html')
###Output
_____no_output_____
###Markdown
Linked SegmentationsAn annotation can also be linked to an underlying segmentation, for example a synapse can be linked to its neurons. On the Neuroglancer side, the annotation layer has to know the name of the segmentation layer to use, while the annotation needs to know what root id or ids to look up. To make data-driven annotations with linked segmentations, we both1. Add a linked segmentation column name to the annotation Mapper class that will be one or more column names in the dataframe2. Pass a segmentation layer name to the AnnotationLayerConfig. This defaults to `None`. Note that while the default segmentation layer name is `seg`, no segmentation layer name is set by default. Thus, if you plan to use a segmentation layer that was not given an explicit name, use `seg` as the argument here.
###Code
points = PointMapper('pt_position', linked_segmentation_column='pt_root_id')
anno_layer = AnnotationLayerConfig(mapping_rules=points, linked_segmentation_layer='seg')
# Basic segmentation layer
seg_layer = SegmentationLayerConfig(seg_source)
sb = StateBuilder([img_layer, seg_layer, anno_layer])
sb.render_state(soma_df.head(), return_as='html')
###Output
_____no_output_____
###Markdown
Tags Tags are categorical labels on annotations. Each annotation layer can have a defined set of up to ten tags (seen under the "shortcuts" tab). Each annotation can have any number of tags, toggled on and off with the key command from the shortcut tab. To pre-assign tags to annotations based on the data, you can assign a `tag_column`. For each row in the data, if the element in the tag column is in the annotation layer's tag list, it will assign it to the resulting annotation. Elements of the tag column can also be collections of values, if you want multiple tags assigned. Note that any values that are not in the layer's tag list are ignored.As before, there are two steps:1. If you want to pre-assign tags, add a `tag_column` argument to the annotation Mapper. This isn't needed if you just want to set tags for the layer.2. Pass a list of tags to the AnnotationLayerConfig. The order of the list matters for determining the exact shortcuts that are used for each tag; the first tag has the shortcut `shift-q`, the second tag `shift-w`, etc.
###Code
points = PointMapper('pt_position', tag_column='cell_type')
anno_layer = AnnotationLayerConfig(mapping_rules=points, tags=['e'])
sb = StateBuilder([img_layer, seg_layer, anno_layer])
sb.render_state(soma_df, return_as='html')
###Output
_____no_output_____
###Markdown
--- 3. Setting the ViewThe StateBuilder offers some control over the initial position of the view and how data is visualized, while offering some fairly sensible defaults that look okay in most situations. Position, layout, and zoom optionsView options that do not affect individual layers can be set with a dict passed to the `view_kws` argument in StateBuilder, which are passed to `viewer.set_view_options`.* *show_slices* : Boolean, sets if slices are shown in the 3d view. Defaults to False.* *layout* : `xy-3d`/`xz-3d`/`yz-3d` (sections plus 3d pane), `xy`/`yz`/`xz`/`3d` (only one pane), or `4panel` (all panes). Default is `xy-3d`.* *show_axis_lines* : Boolean, determines if the axis lines are shown in the middle of each view.* *show_scale_bar* : Boolean, toggles showing the scale bar.* *orthographic* : Boolean, toggles orthographic view in the 3d pane.* *position* : 3-element vector, determines the centered location.* *zoom_image* : Zoom level for the imagery in units of nm per voxel. Defaults to 8.* *zoom_3d* : Zoom level for the 3d pane. Defaults to 2000. Smaller numbers are more zoomed in.Here's an example of setting some of these rules. Note that only providing default values for some parameters does not override the default values of others.
###Code
view_options = {'layout': '4panel',
'show_slices': True,
'zoom_3d': 500,
'position': [71832, 54120, 1089]}
seg_layer = SegmentationLayerConfig(seg_source)
sb = StateBuilder([img_layer, seg_layer], view_kws=view_options)
sb.render_state(return_as='html')
###Output
_____no_output_____
###Markdown
Data-driven centeringIt can also be convenient to center the user on an annotation by default.Because this is tied to a specific spatial point column, this option is associated with a particular AnnotationMapper.Data-driven view centering takes precedence over the global view set in `view_kws`.For any of the mappers, setting `set_position` to `True` will center the view on the first annotation in the list.If multiple Mapper objects have `set_position=True`, then the view will follow the end-most one in the end-most annotation layer.
###Code
points = PointMapper('pt_position', set_position=True)
anno_layer = AnnotationLayerConfig(mapping_rules=points)
sb = StateBuilder([img_layer, seg_layer, anno_layer])
sb.render_state(soma_df, return_as='html')
###Output
_____no_output_____
###Markdown
Segmentation transparency optionsEach segmentation layer can control the transparency of selected, unselected, and 3d meshes. As with the global view, these can be passed as keyword arguments to `view_kws` in the SegmentationLayerConfig.* *alpha_selected* : Transparency (0–1) of selected segmentations in the imagery pane. Defaults to 0.3. * *alpha_3d* : Transparency (0–1) of selected meshes in the 3d view. Defaults to 1.* *alpha_unselected* : Transparency (0–1) of unselected segmentations in the imagery pane. Defaults to 0.
###Code
segmentation_view_options = {'alpha_selected': 0.6,
'alpha_3d': 0.2}
seg_layer = SegmentationLayerConfig(seg_source, fixed_ids=[648518346349539896], view_kws=segmentation_view_options)
sb = StateBuilder([img_layer, seg_layer])
sb.render_state(return_as='html')
###Output
_____no_output_____
###Markdown
--- 4. Chaining StateBuilders to apply multiple rulesOne limitation of a StateBuilder object is that it applies the one rule to one dataframe. However, one might want to apply multiple rules in succession, for instance viewing both pre- and post-synaptic synapses for a neuron using two dataframes. Because the end result of a StateBuilder is just a Neuroglancer state, we can chain StateBuilders together in a row using the state of the previous as a base state for the next.To make this even easier, there is a ChainedStateBuilder object to help. It takes an ordered collection of StateBuilders and has its own `render_state` method. The key difference of the chained `render_state` is that it takes an ordered collection of dataframes, applying the nth dataframe to the nth StateBuilder.Here, we're going to make a ChainedStateBuilder that will display presynaptic and postsynaptic points for a single neuron, which live in different dataframes. The first statebuilder will handle both setting up the imagery and segmentation as well as rendering the postsynaptic point annotations, while the second one only has to render the presynaptic points, since all the other layers will be set up.
###Code
#First state builder
seg_layer = SegmentationLayerConfig(seg_source, selected_ids_column='post_pt_root_id')
postsyn_mapper = LineMapper(point_column_a='pre_pt_position', point_column_b='ctr_pt_position')
postsyn_annos = AnnotationLayerConfig('post', color='#00CCCC', mapping_rules=postsyn_mapper)
postsyn_sb = StateBuilder(layers=[img_layer, seg_layer, postsyn_annos])
#Second state builder
presyn_mapper = LineMapper(point_column_a='ctr_pt_position', point_column_b='post_pt_position')
presyn_annos = AnnotationLayerConfig('pre', color='#CC1111', mapping_rules=presyn_mapper)
presyn_sb = StateBuilder(layers=[presyn_annos])
# Chained state builder
chained_sb = ChainedStateBuilder([postsyn_sb, presyn_sb])
chained_sb.render_state([post_syn_df, pre_syn_df], return_as='html')
###Output
_____no_output_____ |
notebooks/Intertidal mapping/Intertidal Mapping.ipynb | ###Markdown
Intertidal Mapping * Jupyter Notebook running on Cloud9* Using Google Earth Engine for the image analysis* Author: Andrew Cottam, Joint Research Centre* Date: July 2017 Table of Contents CodeInitialisationConstantsStudy areasGlobal functionsSentinel 1 Synthetic Aperture Radar (SAR)Sentinel 2 Multi-Spectral Instrument (MSI)BandsIntertidal ExamplesFactors affecting spectral propertiesWater depthSolar illuminationCloudsWater Vapor band (B9)Cirrus band (B10)QA band (QA60)Cloud shadowHabitat typesHabitat conditionTurbidityClassificationUnsupervisedSupervised Code Initialisation
###Code
from IPython.display import Image, display, HTML
import ee
ee.Initialize()
###Output
_____no_output_____
###Markdown
Constants
###Code
SENTINEL1 = ee.ImageCollection('COPERNICUS/S1_GRD')
SENTINEL2 = ee.ImageCollection('COPERNICUS/S2')
GSW = ee.Image("JRC/GSW1_0/GlobalSurfaceWater")
BATHYMETRY = ee.Image("NOAA/NGDC/ETOPO1")
SCALING_FACTOR = 5000
MONTHS = ["Jan","Feb","Mar","Apr","May","Jun","Jul","Aug","Sep","Oct","Nov","Dec"]
BANDS = ["B1","B2","B3","B4","B5","B6","B7","B8","B8A","B9","B10","B11","B12"]
UNSUPERVISED_BANDS = ["B2","B3","B4","B5","B6","B7","B8","B8A","B11","B12"] # we don't want to include all bands in the unsupervised classification
BAND_NAMES = ["Coastal aerosol","Blue","Green","Red","Vegetation Red Edge 1","Vegetation Red Edge 2","Vegetation Red Edge 3","NIR","Narrow NIR","Water vapour","SWIR Cirrus","SWIR 1","SWIR 2"]
BAND_RESOLUTIONS = [60,10,10,10,20,20,20,10,20,60,60,20,20]
###Output
_____no_output_____
###Markdown
Study areas
###Code
# scale is the distance across the image in Kilometers
AREA_CHALK_SOUND = {'centroid': ee.Geometry.Point(-72.29, 21.77), 'scale': 10}
AREA_TURKS_AND_CAICOS = {'centroid': ee.Geometry.Point(-72.1, 21.7), 'scale': 60}
BOTTLE_CREEK = {'centroid': ee.Geometry.Point(-71.8923, 21.88444), 'scale': 10}
BIG_POND = {'centroid': ee.Geometry.Point(-71.70021, 21.7595), 'scale': 20}
BIG_POND2 = {'centroid': ee.Geometry.Point(-71.70021, 21.7595), 'scale': 5}
###Output
_____no_output_____
###Markdown
Global functions
###Code
#returns an extent object using the passed centroid coordinate and the desired width in kilometers
def getExtent(centroid, width):
width = ee.Number(width).multiply(1000)
scale = centroid.projection().nominalScale()
widthLL = ee.Number(width).divide(scale).divide(2)
heightLL = ee.Number(width).divide(scale).divide(2)
llx = ee.Number(centroid.coordinates().get(0)).subtract(widthLL)
lly = ee.Number(centroid.coordinates().get(1)).subtract(heightLL)
urx = ee.Number(centroid.coordinates().get(0)).add(widthLL)
ury = ee.Number(centroid.coordinates().get(1)).add(heightLL)
return ee.Geometry.Rectangle([llx,lly,urx,ury], None, False)
# converts the image to hsv
def getHsv(image):
return image.select(["B4", "B3", "B2"]).divide(SCALING_FACTOR).rgbToHsv()
# gets the scenes for a specific area for a specific month for sentinel 2
def getMonthImage(month):
#filter the data by month and study area
monthIndex = ee.Number(ee.List(MONTHS).indexOf(month)).add(1);
scenes = SENTINEL2.filterBounds(study_area).filter(ee.Filter.calendarRange(monthIndex,monthIndex,"month")).sort("CLOUDY_PIXEL_PERCENTAGE", False) # filter for the study area and the month and order with the least cloudy images on top
#mosaic the data
mosaic = scenes.mosaic()
# mosaic = scenes.reduce(ee.Reducer.percentile([30])).rename(["B1","B2","B3","B4","B5","B6","B7","B8","B8A","B9","B10","B11","B12","QA10","QA20","QA60"])
# mosaic = scenes.reduce(ee.Reducer.mean()).rename(["B1","B2","B3","B4","B5","B6","B7","B8","B8A","B9","B10","B11","B12","QA10","QA20","QA60"])
# mosaic = ee.Algorithms.If(ee.String(_bands).index("hue").eq(0).Or(ee.String(_bands).index("saturation").eq(0)).Or(ee.String(_bands).index("value").eq(0)), getHsv(mosaic), mosaic)
return ee.Image(mosaic).copyProperties(ee.Image(scenes.first())) #copy the metadata properties of the first (arbritrary) image to the mosaic
# gets the clouds from the sentinel qa60 band
def getCloudsQA60(image):
    image = image.select('QA60').divide(ee.Number(2).pow(10)) # bits 10 and 11 denote cloud and cirrus respectively, so the output is 0=no cloud, 1=cloud, 2=cirrus, 3=both
image = image.mask(image)
return image
# gets the scenes for a specific area for a specific month for sentinel 1
def getMonthImageS1(month):
#filter the data by month and study area
monthIndex = ee.Number(ee.List(MONTHS).indexOf(month)).add(1);
scenes = SENTINEL1.filterBounds(study_area).filter(ee.Filter.calendarRange(monthIndex,monthIndex,"month")) # filter for the study area and the month
#mosaic the data
mosaic = scenes.mean()
return ee.Image(mosaic).copyProperties(ee.Image(scenes.first())) #copy the metadata properties of the first (arbritrary) image to the mosaic
# gets the study area
def getStudyArea(area):
#get the extent of the passed area
global study_area
study_area = getExtent(area["centroid"], area["scale"])
return study_area
# shows a single image
def getThumbUrl(image, area, bands, size=300, min=0, max=5000):
study_area = getStudyArea(area)
url = image.getThumbUrl({'region': study_area.getInfo(), 'bands': bands,'min': min, 'max': max, 'dimensions': size})
return url
# gets urls for sentinel images for the area for the specific month
def showImages(area, months, bands, size=300):
#get the study area
study_area = getStudyArea(area)
#convert the months from a client side to server side object and get the images
images = ee.List(months).map(getMonthImage)
urls = []
#get the values for the max depending on what bands we want
if (bands in ["hue","saturation","value"]):
max = 1
else:
max = SCALING_FACTOR
#get the thumbnail urls for each monthly image
for i in range(images.size().getInfo()):
# get the ith monthlyimage
img = ee.Image(ee.ImageCollection(images).toList(1,i).get(0))
_url = img.getThumbUrl({'region': study_area.getInfo(), 'bands': bands,'max': max, 'dimensions': size})
urls.append({'month': months[i],'url': _url, 'zenith': img.get("MEAN_SOLAR_ZENITH_ANGLE").getInfo()})
#get the html to write to the output cell
display(HTML(data="""<style>div#notebook-container{width:95%;}div#menubar-container{width:65%;}div#maintoolbar-container{width:99%;}</style>"""))
html= "<table><tr>"
for i in range(len(urls)):
image = urls[i]
if (i%4==0):
html += "</tr><tr>"
html += "<td><div><div style='float:left'><img src='" + image["url"] + "'></div><div style='font-size:24px;position:absolute;color:white;padding:10px'>" + image["month"] + "</div><div>MEAN_SOLAR_ZENITH_ANGLE: " + str(int(image["zenith"])) + "</div></div></td>"
html += "</tr></table>"
#write the output cell
display(HTML(html))
# gets the urls for the sentinel images for each band
def showBands(area, month, bands, stretches={}, size=300):
# get the study area
study_area = getStudyArea(area)
# get the mosaic image
image = getMonthImage(month)
#get the thumbnail urls for each band
urls = []
for i in range(len(bands)):
band = bands[i]
bandIndex = BANDS.index(band)
bandname = BAND_NAMES[bandIndex]
bandResolution = BAND_RESOLUTIONS[bandIndex]
if band in stretches.keys(): # if the min/max values for specific bands have been passed then use them
min = stretches[band][0]
max = stretches[band][1]
else:
min = 0
max = SCALING_FACTOR
_url = ee.Image(image).getThumbUrl({'region': study_area.getInfo(), 'bands': band,'min': min, 'max': max, 'dimensions': size})
urls.append({'band': band + " " + bandname + " (" + str(bandResolution) + "m)", 'url': _url})
#get the html to write to the output cell
display(HTML(data="""<style>div#notebook-container{width:95%;}div#menubar-container{width:65%;}div#maintoolbar-container{width:99%;}</style>"""))
html= "<table><tr>"
for i in range(len(urls)):
image = urls[i]
if (i%4==0):
html += "</tr><tr>"
html += "<td><div><div style='float:left'><img src='" + image["url"] + "'></div><div style='font-size:20px;position:absolute;color:white;padding:10px'>" + image["band"] + "</div></div></td>"
html += "</tr></table>"
#write the output cell
display(HTML(html))
# gets the sentinel 1 images
def showImagesS1(area, months, band, size=300):
#get the study area
study_area = getStudyArea(area)
#convert the months from a client side to server side object and get the images
images = ee.List(months).map(getMonthImageS1)
urls = []
#get the thumbnail urls for each monthly image
for i in range(images.size().getInfo()):
# get the ith monthlyimage
img = ee.Image(ee.ImageCollection(images).toList(1,i).get(0))
_url = img.getThumbUrl({'region': study_area.getInfo(), 'bands': band,'min': -20, 'max': 0, 'dimensions': size})
urls.append({'month': months[i],'url': _url})
#get the html to write to the output cell
display(HTML(data="""<style>div#notebook-container{width:95%;}div#menubar-container{width:65%;}div#maintoolbar-container{width:99%;}</style>"""))
html= "<table><tr>"
for i in range(len(urls)):
image = urls[i]
if (i%4==0):
html += "</tr><tr>"
html += "<td><div><div style='float:left'><img src='" + image["url"] + "'></div><div style='font-size:24px;position:absolute;color:white;padding:10px'>" + image["month"] + "</div></div></td>"
html += "</tr></table>"
#write the output cell
display(HTML(html))
def unsupervisedClassification(area, classes, size=300):
#get the study area
study_area = getStudyArea(area)
#get the maximum water extent
max_extent = GSW.select(["max_extent"]).updateMask(BATHYMETRY.select(["ice_surface"]).gte(-750))
#get the input imagery
s2c = SENTINEL2.filterBounds(study_area)
#reduce the amount of cloud in the composite
s2_all = s2c.reduce(ee.Reducer.percentile([30])).rename(["B1","B2","B3","B4","B5","B6","B7","B8","B8A","B9","B10","B11","B12","QA10","QA20","QA60"])
#mask with the high water mark
s2_masked = s2_all.mask(max_extent)
#train on some pixels
training = s2_masked.select(UNSUPERVISED_BANDS).sample(study_area.getInfo(),30)
#classify using the weka means clusterer
classified = s2_masked.cluster(ee.Clusterer.wekaKMeans(classes).train(training)).randomVisualizer()
url = classified.getThumbUrl({'region': study_area.getInfo(), 'dimensions': size})
html = "<img src='" + url + "'>"
display(HTML(html))
###Output
_____no_output_____
###Markdown
Sentinel 1 Synthetic Aperture Radar (SAR) Bands There are four polarisation modes on the SAR sensor (VV+VH, HH+HV, HH, VV), but each acquisition uses only one of them. The following images show the mean VV backscatter for January and July; a sketch of filtering the collection to a single polarisation follows the example below.
###Code
showImagesS1(AREA_CHALK_SOUND, ["Jan","Jul"], "VV", 800)
###Output
_____no_output_____
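###Markdown
A hedged sketch of selecting a single polarisation explicitly. It assumes the `transmitterReceiverPolarisation` and `instrumentMode` metadata properties of the Sentinel-1 GRD collection (names taken from the Earth Engine catalogue, not from this notebook); the collection is filtered to VV-capable IW scenes before counting them.
###Code
s1_vv = (SENTINEL1
         .filterBounds(getStudyArea(AREA_CHALK_SOUND))
         .filter(ee.Filter.listContains('transmitterReceiverPolarisation', 'VV'))
         .filter(ee.Filter.eq('instrumentMode', 'IW'))
         .select('VV'))
print('VV IW scenes over the study area:', s1_vv.size().getInfo())
###Output
_____no_output_____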
###Markdown
Sentinel 2 Multi-Spectral Instrument (MSI) Bands The following image shows the distribution of bands in the Sentinel 2 sensor compared with Landsat sensors and the technical specification for the bands.

| Sentinel-2 Bands | Central Wavelength (µm) | Resolution (m) |
|---|---|---|
| Band 1 – Coastal aerosol | 0.443 | 60 |
| Band 2 – Blue | 0.490 | 10 |
| Band 3 – Green | 0.560 | 10 |
| Band 4 – Red | 0.665 | 10 |
| Band 5 – Vegetation Red Edge | 0.705 | 20 |
| Band 6 – Vegetation Red Edge | 0.740 | 20 |
| Band 7 – Vegetation Red Edge | 0.783 | 20 |
| Band 8 – NIR | 0.842 | 10 |
| Band 8A – Narrow NIR | 0.865 | 20 |
| Band 9 – Water vapour | 0.945 | 60 |
| Band 10 – SWIR – Cirrus | 1.375 | 60 |
| Band 11 – SWIR | 1.610 | 20 |
| Band 12 – SWIR | 2.190 | 20 |

The images below show all of the band images for the Turks and Caicos.
###Code
showBands(AREA_CHALK_SOUND, "Jan", BANDS, {"B10":[0,100]})
# showBands(AREA_CHALK_SOUND, "Jan", ["B1"])
# showBands(AREA_CHALK_SOUND, "Jan", BANDS, {"B1":[1000,4000],"B4":[0,4000],"B6":[0,1000],"B7":[0,1000],"B8":[0,1000],"B8A":[0,1000],"B9":[0,1000],"B10":[0,1000],"B11":[0,1000]}, 400)
###Output
_____no_output_____
###Markdown
Intertidal Examples An example mosaic image for the Turks and Caicos Islands for January is shown below:
###Code
showImages(AREA_CHALK_SOUND, ["Jan"], "B4,B3,B2", 800)
###Output
_____no_output_____
###Markdown
An example for a single band (in this case the blue band) is shown below. Factors affecting spectral properties Changes in the surface spectral properties are due to differences in solar illumination, cloud contamination, habitat types, habitat condition and seasonality. They are also due to differences in the physical properties of the water, e.g. its turbidity. Water depth The spectral properties will be affected by the depth of the water, including whether water is present at all in intertidal zones; this depends on the height of the tide (a rough bathymetry sketch follows the illumination examples below). Solar illumination The following example shows the variability in the illumination of the surface.
###Code
# showImages(AREA_CHALK_SOUND, ["Jan","Aug"], "B4,B3,B2", 600)
showImages(AREA_CHALK_SOUND, MONTHS, "B4,B3,B2", 300)
showImages(AREA_CHALK_SOUND, ["Jan","Aug"], "hue", 600)
showImages(AREA_CHALK_SOUND, ["Jan","Aug"], "saturation", 600)
showImages(AREA_CHALK_SOUND, ["Jan","Aug"], "value", 600)
###Output
_____no_output_____
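###Markdown
Returning to the water depth factor above, a rough added sketch (not part of the original analysis): the ETOPO1 grid already loaded as `BATHYMETRY` can be rendered over the wider study area as a very coarse proxy for depth. ETOPO1 is roughly 1 arc-minute resolution, so it only hints at the bathymetry relevant to intertidal work, and the stretch values below are guesses.
###Code
bathy_url = getThumbUrl(BATHYMETRY.select('ice_surface'), AREA_TURKS_AND_CAICOS, 'ice_surface', 500, -2000, 100)
display(HTML("<img src='" + bathy_url + "'>"))
###Output
_____no_output_____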
###Markdown
Clouds Water vapor (B9) The Water Vapor band is supposed to show water vapor but it looks like it shows other things.
###Code
image = ee.Image("COPERNICUS/S2/20160817T152642_20160817T202520_T19QBE")
url = getThumbUrl(image, BIG_POND, "B4,B3,B2",500)
url2 = getThumbUrl(image, BIG_POND, "B9", 500, 0, 800)
html= "<table><tr><td><img src='" + url + "'></td><td><img src='" + url2 + "'></td></tr></table>"
display(HTML(html))
###Output
_____no_output_____
###Markdown
Cirrus band (B10) The Cirrus band looks really good - here is the data for the above scene. Not sure what the line is going down the image!
###Code
image = ee.Image("COPERNICUS/S2/20160817T152642_20160817T202520_T19QBE")
url = getThumbUrl(image, BIG_POND, "B4,B3,B2",500)
url2 = getThumbUrl(image, BIG_POND, "B10", 500, 0, 300)
html= "<table><tr><td><img src='" + url + "'></td><td><img src='" + url2 + "'></td></tr></table>"
display(HTML(html))
###Output
_____no_output_____
###Markdown
QA band (QA60) The QA60 band holds information on cumulus and cirrus clouds, stored in bits 10 (cumulus) and 11 (cirrus). A full description of the cloud mask is given here. The following example shows detection of cumulus (dark grey) and cirrus (light grey) in a single S2 image; a sketch using explicit bit masks follows the example below. As you can see the cloud detection is not complete.
###Code
image = ee.Image("COPERNICUS/S2/20160817T152642_20160817T202520_T19QBE")
url = getThumbUrl(image, BIG_POND, "B4,B3,B2",500)
url2 = getThumbUrl(getCloudsQA60(image), BIG_POND, "QA60", 500, 0, 4)
html= "<table><tr><td><img src='" + url + "'></td><td><img src='" + url2 + "'></td></tr></table>"
display(HTML(html))
###Output
_____no_output_____
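###Markdown
A sketch of the same mask built with explicit bit operations (added example; it assumes the usual QA60 layout of bit 10 = opaque cloud and bit 11 = cirrus rather than the division trick above). The output classes are 0=clear, 1=cloud, 2=cirrus, 3=both, and clear pixels are masked out as in `getCloudsQA60`.
###Code
def getCloudsQA60_bitwise(image):
    qa = image.select('QA60')
    cloud = qa.bitwiseAnd(1 << 10).neq(0)   # bit 10: opaque cloud
    cirrus = qa.bitwiseAnd(1 << 11).neq(0)  # bit 11: cirrus
    classes = cloud.add(cirrus.multiply(2)).rename('cloud_class')
    return classes.mask(classes)  # hide clear (zero) pixels
url = getThumbUrl(getCloudsQA60_bitwise(image), BIG_POND, "cloud_class", 500, 0, 3)
display(HTML("<img src='" + url + "'>"))
###Output
_____no_output_____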
###Markdown
Cloud shadow Habitat types Habitat condition Turbidity Habitat Classification Unsupervised The following image shows an unsupervised classification using the Weka K-means clustering algorithm with 15 classes. The bands used in the unsupervised classification are blue, green, red, the three vegetation red edge bands, NIR, narrow NIR and SWIR 1 and 2.
###Code
unsupervisedClassification(BOTTLE_CREEK, 15, 500)
###Output
_____no_output_____ |
moa/kaggle-moa.ipynb | ###Markdown
Table of Contents1 References2 Load Data3 Preprocessing4 Folds5 Training5.1 Preparation5.2 Deep multilabel model keras5.3 Deep multilabel model torch5.4 Term model5.5 Zero class prediction model5.6 Error class prediction model5.7 Calibration5.8 Blender model5.9 Manual tuning6 Public models6.1 keras NN +PCA with Label smoothing CV[0.01562] LB [0.01859]6.2 Pytorch-RankGauss-PCA-NN CV [0.014572] LB [0.01839]6.3 MODEL1 CV [0.01562060391771847] LB [0.01833]7 Auto Tuning7.1 Error class7.2 Zero class7.3 Blender7.4 Run!8 Final prediction9 Submission10 Error analysis11 Offline vs Public References: https://www.kaggle.com/sinamhd9/mechanisms-of-action-moa-tutorial/data scalers: https://www.kaggle.com/liuhdme/moa-competition keras: https://www.kaggle.com/riadalmadani/keras-nn-pca-with-label-smoothing/comments blending: https://www.kaggle.com/c/lish-moa/notebooks?competitionId=19988&sortBy=scoreDescending Classes info: https://docs.google.com/spreadsheets/d/1NVPfqcJKWd-Oes610N-wHMYOKpO2WiH6PU7wFqZyNX8 What can be tried: 1. (+) A model that predicts whether all targets are zero or not 2. A model that predicts terms parsed from the class names 3. Clustering + Y-decode 4. G,C - autoencoders 5. Imbalance 6. Class cleaning, in train only! 7. (+) A separate model for the classes with errors Load Data
###Code
# Importing useful libraries
# pip install jupyter_contrib_nbextensions lightgbm iterative-stratification tensorflow tensorflow-addons mlflow kaggle jupyter hyperopt
import pickle
from os.path import isfile
from functools import partial
import warnings
warnings.filterwarnings("ignore")
# Adding iterative-stratification
# Select add data from the right menu and search for iterative-stratification, then add it to your kernel.
import sys
import os
import random
sys.path.append('../input/iterative-stratification/iterative-stratification-master')
from iterstrat.ml_stratifiers import MultilabelStratifiedKFold
sys.path.append('../input/rank-gauss')
from gauss_rank_scaler import GaussRankScaler
from time import time
import datetime
import gc
import collections
import numpy as np
import pandas as pd
# ML tools
import tensorflow as tf
import tensorflow_addons as tfa
import torch  # needed by seed_everything below
from tensorflow.keras import layers
import tensorflow.keras.backend as K
from sklearn.metrics import log_loss
from tensorflow_addons.layers import WeightNormalization
# Setting random seeds
def seed_everything(seed=42):
random.seed(seed)
os.environ['PYTHONHASHSEED'] = str(seed)
np.random.seed(seed)
tf.random.set_seed(seed)
torch.manual_seed(seed)
torch.cuda.manual_seed(seed)
torch.backends.cudnn.deterministic = True
seed_everything(seed=42)
# Visualization tools
import seaborn as sns
import matplotlib.pyplot as plt
sns.set_style('white')
sns.set(font_scale=1.2)
import requests
has_internet = True
try:
r = requests.get('http://datadigger.ru', timeout=3)
!pip install -q mlflow
import mlflow
mlflow.set_tracking_uri('http://datadigger.ru:5000')
except Exception:
has_internet = False
def dict_flatten(d, parent_key='', sep='_'):
items = []
for k, v in d.items():
new_key = parent_key + sep + k if parent_key else k
        if isinstance(v, collections.abc.MutableMapping):
items.extend(dict_flatten(v, new_key, sep=sep).items())
else:
items.append((new_key, v))
return dict(items)
df_train = pd.read_csv('/kaggle/input/lish-moa/train_features.csv')
print('train data size', df_train.shape)
df_drug = pd.read_csv('/kaggle/input/lish-moa/train_drug.csv')
print('train drug size', df_drug.shape)
df_target_ns = pd.read_csv('/kaggle/input/lish-moa/train_targets_nonscored.csv')
print('train target nonscored size', df_target_ns.shape)
df_target_s = pd.read_csv('/kaggle/input/lish-moa/train_targets_scored.csv')
print('train target scored size', df_target_s.shape)
df_target_s = df_target_s.merge(df_drug, on=['sig_id'])
print('train target scored size', df_target_s.shape)
df_test = pd.read_csv('/kaggle/input/lish-moa/test_features.csv')
print('test data size', df_test.shape)
df_sample = pd.read_csv('/kaggle/input/lish-moa/sample_submission.csv')
print('sample submission size', df_sample.shape)
def g_c_features(df):
g_features = [cols for cols in df.columns if cols.startswith('g-')]
c_features = [cols for cols in df.columns if cols.startswith('c-')]
return g_features, c_features
def preprocess(df):
df['cp_time'] = df['cp_time'].map({24:1, 48:2, 72:3})
df['cp_dose'] = df['cp_dose'].map({'D1':0, 'D2':1})
df['cp_type'] = df['cp_type'].map({'trt_cp': 0, 'ctl_vehicle': 1})
return df
X = preprocess(df_train)
X_test = preprocess(df_test)
ind_te = X_test[X_test['cp_type']==1].index
y = df_target_s.drop('sig_id', axis=1)
y0 = df_target_ns.drop('sig_id', axis=1)
print('New data shape', X.shape)
def perc_empty(df):
df = df.drop(['drug_id'], axis=1, errors='ignore')
return 100 * (1 - len(df[(df.T != 0).any()]) / len(df))
print("Scored без класса: {}%".format(round(perc_empty(y), 3)))
print("NoScored без класса: {}%".format(round(perc_empty(y0), 3)))
print("Нет никакого класса: {}%".format(round(perc_empty(pd.concat([y, y0], axis=1)), 3)))
###Output
train data size (23814, 876)
train drug size (23814, 2)
train target nonscored size (23814, 403)
train target scored size (23814, 207)
train target scored size (23814, 208)
test data size (3982, 876)
sample submission size (3982, 207)
New data shape (21948, 875)
Scored rows with no class: 34.176%
Non-scored rows with no class: 79.087%
Rows with no class at all: 16.694%
###Markdown
Preprocessing
###Code
from sklearn.decomposition import PCA
from sklearn.preprocessing import QuantileTransformer, StandardScaler
from sklearn.cluster import KMeans
from sklearn.feature_selection import VarianceThreshold
def variance_reduction(X, X_test, **params):
if not params.get('enabled', True):
return X, X_test
thresh = params.get('threshold', 0.8)
columns_to_skip = [
c for c in ['drug_id', 'cp_type', 'cp_time','cp_dose']
if c in set(X.columns)
]
cols_num = len(columns_to_skip)
var_thresh = VarianceThreshold(thresh)
data = X.append(X_test)
try:
data_transformed = var_thresh.fit_transform(data.iloc[:, cols_num:])
except Exception as e:
print(e, str(thresh))
return X, X_test
train_features_transformed = data_transformed[ : X.shape[0]]
test_features_transformed = data_transformed[-X_test.shape[0] : ]
X = pd.DataFrame(
X[columns_to_skip].values.reshape(-1, cols_num),
columns=columns_to_skip
)
X = pd.concat([X, pd.DataFrame(train_features_transformed)], axis=1)
X_test = pd.DataFrame(
X_test[columns_to_skip].values.reshape(-1, cols_num),
columns=columns_to_skip
)
X_test = pd.concat([X_test, pd.DataFrame(test_features_transformed)], axis=1)
return X, X_test
def fe_cluster(train, test, **params):
if not params.get('enabled', True):
return train, test
n_clusters_g = params.get('n_clusters_g', 35)
n_clusters_c = params.get('n_clusters_c', 5)
random_state = params.get('seed', 299)
features_g, features_c = g_c_features(train)
def create_cluster(train, test, features, kind = 'g', n_clusters = n_clusters_g):
train_ = train[features].copy()
test_ = test[features].copy()
data = pd.concat([train_, test_], axis = 0)
kmeans = KMeans(n_clusters = n_clusters, random_state = random_state).fit(data)
train[f'clusters_{kind}'] = kmeans.labels_[:train.shape[0]]
test[f'clusters_{kind}'] = kmeans.labels_[train.shape[0]:]
train = pd.get_dummies(train, columns = [f'clusters_{kind}'])
test = pd.get_dummies(test, columns = [f'clusters_{kind}'])
return train, test
train, test = create_cluster(train, test, features_g, kind = 'g', n_clusters = n_clusters_g)
train, test = create_cluster(train, test, features_c, kind = 'c', n_clusters = n_clusters_c)
return train, test
def quantile_transformer(df, df_test, **params):
random_state = params.get('seed', 42)
g_features, c_features = g_c_features(df)
for col in (g_features + c_features):
transformer = QuantileTransformer(n_quantiles=100, random_state=random_state, output_distribution='normal')
vec_len = len(df[col].values)
raw_vec = df[col].values.reshape(vec_len, 1)
vec_len_test = len(df_test[col].values)
raw_vec_test = df_test[col].values.reshape(vec_len_test, 1)
transformer.fit(raw_vec)
df[col] = transformer.transform(raw_vec).reshape(1, vec_len)[0]
df_test[col] = transformer.transform(raw_vec_test).reshape(1, vec_len_test)[0]
return df, df_test
def rank_gauss(df, **params):
g_features, c_features = g_c_features(df)
cols_numeric = g_features + c_features
df[cols_numeric] = GaussRankScaler().fit_transform(df[cols_numeric])
return df
def standard_scaler(df, df_test, **params):
g_features, c_features = g_c_features(df)
cols_numeric = g_features + c_features
scaler = StandardScaler()
df[cols_numeric] = scaler.fit_transform(df[cols_numeric])
df_test[cols_numeric] = scaler.transform(df_test[cols_numeric])
return df, df_test
def fe_cluster_pca(train, test, n_clusters=5, seed = 42):
pca_g_cols = [c for c in train.columns if c.startswith('pca_g-')]
pca_c_cols = [c for c in train.columns if c.startswith('pca_c-')]
train_gpca = train[pca_g_cols]
test_gpca = test[pca_g_cols]
train_cpca = train[pca_c_cols]
test_cpca = test[pca_c_cols]
train_pca=pd.concat((train_gpca,train_cpca),axis=1)
test_pca=pd.concat((test_gpca,test_cpca),axis=1)
data=pd.concat([train_pca,test_pca],axis=0)
kmeans = KMeans(n_clusters = n_clusters, random_state = seed).fit(data)
train[f'clusters_pca'] = kmeans.labels_[:train.shape[0]]
test[f'clusters_pca'] = kmeans.labels_[train.shape[0]:]
train = pd.get_dummies(train, columns = [f'clusters_pca'])
test = pd.get_dummies(test, columns = [f'clusters_pca'])
return train, test
def pca_transformer(X, X_test, **params):
# Please see reference 3 for this part
if not params.get('enabled', True):
return X, X_test
random_state = params.get('seed', 42)
n_comp_cells = params.get('n_comp_cells', 0.95)
n_comp_genes = params.get('n_comp_genes', 0.95)
# print(f'pca {n_comp_cells}, {n_comp_genes}')
g_features, c_features = g_c_features(X)
data = pd.concat([pd.DataFrame(X[g_features]), pd.DataFrame(X_test[g_features])])
data2 = (PCA(n_comp_genes, random_state=random_state).fit_transform(data[g_features]))
train2 = data2[:X.shape[0]]
test2 = data2[-X_test.shape[0]:]
train2 = pd.DataFrame(train2, columns=[f'pca_g-{i}' for i in range(data2.shape[1])])
test2 = pd.DataFrame(test2, columns=[f'pca_g-{i}' for i in range(data2.shape[1])])
X = pd.concat((X, train2), axis=1)
X_test = pd.concat((X_test, test2), axis=1)
data = pd.concat([pd.DataFrame(X[c_features]), pd.DataFrame(X_test[c_features])])
data2 = (PCA(n_comp_cells, random_state=random_state).fit_transform(data[c_features]))
train2 = data2[:X.shape[0]]
test2 = data2[-X_test.shape[0]:]
train2 = pd.DataFrame(train2, columns=[f'pca_c-{i}' for i in range(data2.shape[1])])
test2 = pd.DataFrame(test2, columns=[f'pca_c-{i}' for i in range(data2.shape[1])])
X = pd.concat((X, train2), axis=1)
X_test = pd.concat((X_test, test2), axis=1)
clusters = params.get('n_clusters', 0)
if clusters:
X, X_test = fe_cluster_pca(X, X_test, n_clusters=clusters, seed=random_state)
return X, X_test
gsquarecols=['g-574','g-211','g-216','g-0','g-255','g-577','g-153','g-389','g-60','g-370','g-248','g-167','g-203','g-177','g-301','g-332','g-517','g-6','g-744','g-224','g-162','g-3','g-736','g-486','g-283','g-22','g-359','g-361','g-440','g-335','g-106','g-307','g-745','g-146','g-416','g-298','g-666','g-91','g-17','g-549','g-145','g-157','g-768','g-568','g-396']
def fe_stats(train, test, **params):
if not params.get('enabled', True):
return train, test
features_g, features_c = g_c_features(train)
for df in train, test:
df['g_sum'] = df[features_g].sum(axis = 1)
df['g_mean'] = df[features_g].mean(axis = 1)
df['g_std'] = df[features_g].std(axis = 1)
df['g_kurt'] = df[features_g].kurtosis(axis = 1)
df['g_skew'] = df[features_g].skew(axis = 1)
df['c_sum'] = df[features_c].sum(axis = 1)
df['c_mean'] = df[features_c].mean(axis = 1)
df['c_std'] = df[features_c].std(axis = 1)
df['c_kurt'] = df[features_c].kurtosis(axis = 1)
df['c_skew'] = df[features_c].skew(axis = 1)
df['gc_sum'] = df[features_g + features_c].sum(axis = 1)
df['gc_mean'] = df[features_g + features_c].mean(axis = 1)
df['gc_std'] = df[features_g + features_c].std(axis = 1)
df['gc_kurt'] = df[features_g + features_c].kurtosis(axis = 1)
df['gc_skew'] = df[features_g + features_c].skew(axis = 1)
df['c52_c42'] = df['c-52'] * df['c-42']
df['c13_c73'] = df['c-13'] * df['c-73']
df['c26_c13'] = df['c-23'] * df['c-13']
df['c33_c6'] = df['c-33'] * df['c-6']
df['c11_c55'] = df['c-11'] * df['c-55']
df['c38_c63'] = df['c-38'] * df['c-63']
df['c38_c94'] = df['c-38'] * df['c-94']
df['c13_c94'] = df['c-13'] * df['c-94']
df['c4_c52'] = df['c-4'] * df['c-52']
df['c4_c42'] = df['c-4'] * df['c-42']
df['c13_c38'] = df['c-13'] * df['c-38']
df['c55_c2'] = df['c-55'] * df['c-2']
df['c55_c4'] = df['c-55'] * df['c-4']
df['c4_c13'] = df['c-4'] * df['c-13']
df['c82_c42'] = df['c-82'] * df['c-42']
df['c66_c42'] = df['c-66'] * df['c-42']
df['c6_c38'] = df['c-6'] * df['c-38']
df['c2_c13'] = df['c-2'] * df['c-13']
df['c62_c42'] = df['c-62'] * df['c-42']
df['c90_c55'] = df['c-90'] * df['c-55']
for feature in features_c:
df[f'{feature}_squared'] = df[feature] ** 2
for feature in gsquarecols:
df[f'{feature}_squared'] = df[feature] ** 2
return train, test
def preprocess_X(params, X, X_test, y, y0, seed=42):
p = params
p_scaler = p['scaler']
p_pca = p['pca']
p_fe_cluster = p['fe_cluster']
p_fe_stats = p['fe_stats']
p_variance_reduction = p['variance_reduction']
# print(X.shape, 'initial')
if p_scaler == 'quantile':
X, X_test = quantile_transformer(X, X_test, seed=seed)
elif p_scaler == 'gauss':
X = rank_gauss(X)
X_test = rank_gauss(X_test)
elif p_scaler == 'standard':
X, X_test = standard_scaler(X, X_test)
elif p_scaler != 'none':
raise Exception(f'Unknown scaler: {p_scaler}')
# print(X.shape, 'scaler')
X, X_test = pca_transformer(X, X_test, seed=seed, **p_pca)
# print(X.shape, 'pca')
X, X_test = fe_cluster(X, X_test, seed=seed, **p_fe_cluster)
# print(X.shape, 'cluster')
X, X_test = fe_stats(X, X_test, **p_fe_stats)
# print(X.shape, 'fe_stats')
X, X_test = variance_reduction(X, X_test, **p_variance_reduction)
# print(X.shape, 'variance')
if p.get("shuffle_cols", True):
X, X_test = shuffle_cols(X, X_test)
y0 = y0[X['cp_type'] == 0].reset_index(drop = True)
y = y[X['cp_type'] == 0].reset_index(drop = True)
X = X[X['cp_type'] == 0].reset_index(drop = True)
X = X.drop(['sig_id'], axis=1)
X_test = X_test.drop(['sig_id'], axis=1)
return X, X_test, y, y0
def shuffle_cols(train, test):
    # Shuffle the feature columns depending on the random seed
features_shrink = 1.
inp_size = int(np.ceil(features_shrink * len(train.columns)))
split_cols = np.random.choice(train.columns, inp_size, replace=False)
return train[split_cols], test[split_cols]
###Output
_____no_output_____
###Markdown
Folds
###Code
from sklearn.model_selection import KFold
def train_test_split(X_f, y_f, n_split=7, seed=42):
return fold(X_f, y_f, n_split, seed)[0]
def fold_simple(X_f, y_f, n_split, seed):
if len(y_f.columns) > 1:
f = MultilabelStratifiedKFold(n_splits = n_split, random_state = seed, shuffle = True)
return list(f.split(X_f, y_f))
else:
f = StratifiedKFold(n_splits = n_split, random_state=seed, shuffle = True)
return list(f.split(X_f, y_f))
def fold_drug(X_f, y_f, drug_thresh=18, n_split=7, seed=42):
vc = y_f.drug_id.value_counts()
vc1 = vc.loc[vc <= drug_thresh].index.sort_values()
vc2 = vc.loc[vc > drug_thresh].index.sort_values()
target_cols = [c for c in y_f.columns if c != 'drug_id']
# Сначала бьём на фолды лекарства
# STRATIFY DRUGS 18X OR LESS
skf1 = MultilabelStratifiedKFold(n_splits=n_split, shuffle=True, random_state=seed)
tmp1 = y_f.groupby('drug_id').mean().loc[vc1]
split_1 = list(skf1.split(tmp1, tmp1[target_cols]))
# STRATIFY DRUGS MORE THAN 18X
skf2 = MultilabelStratifiedKFold(n_splits=n_split, shuffle=True, random_state=seed)
tmp2 = y_f.loc[y_f.drug_id.isin(vc2)].reset_index()
split_2 = list(skf2.split(tmp2[target_cols], tmp2[target_cols]))
folds = []
for i in range(n_split):
ind_tr_drug, ind_val_drug = split_1[i]
tr_drug, val_drug = tmp1.iloc[ind_tr_drug].index, tmp1.iloc[ind_val_drug].index
ind_tr_1, ind_val_1 = y_f.loc[y_f.drug_id.isin(tr_drug)].index, y_f.loc[y_f.drug_id.isin(val_drug)].index
ind_tr_2, ind_val_2 = split_2[i]
ind_tr_2, ind_val_2 = tmp2.iloc[ind_tr_2]['index'], tmp2.iloc[ind_val_2]['index']
ind_tr = np.concatenate([ind_tr_1, ind_tr_2])
ind_val = np.concatenate([ind_val_1, ind_val_2])
folds.append((ind_tr, ind_val))
return folds
fold = fold_simple
###Output
_____no_output_____
###Markdown
Training Preparation
###Code
from tensorflow.keras import regularizers
from tensorflow.keras.callbacks import ReduceLROnPlateau, ModelCheckpoint, EarlyStopping
from sklearn.model_selection import StratifiedKFold
p_min = 1e-10
p_max = 1-1e-10
def y_to_zero_class(df_y):
return pd.DataFrame(df_y.max(axis=1).map({1: 0, 0: 1}))
def log_loss_metric(y_true, y_pred, columns=None):
metrics = []
y_pred = np.clip(y_pred, p_min, p_max)
cols = y_true.columns if columns is None else columns
for _target in cols:
metrics.append(
log_loss(
y_true.loc[:, _target],
y_pred.loc[:, _target].astype(float),
labels = [0, 1]
)
)
return np.mean(metrics)
def log_loss_result(y_true, y_pred):
return log_loss_metric(y_true, y_pred, columns=y.columns)
def logloss(y_true, y_pred):
y_pred = tf.clip_by_value(y_pred,p_min,p_max)
return -K.mean(y_true*K.log(y_pred) + (1-y_true)*K.log(1-y_pred))
checkpoint_path = "model.h5"
def callbacks(verbose=0):
rlr = ReduceLROnPlateau(
monitor = 'val_logloss', factor = 0.1, patience = 3, verbose = verbose,
min_delta=1e-4, mode = 'min'
)
# ckp = ModelCheckpoint(
# checkpoint_path, monitor = 'val_logloss', verbose = 0,
# save_best_only = True, mode = 'min'
# )
es = EarlyStopping(
monitor = 'val_logloss', min_delta = 1e-5, patience = 10, mode = 'min',
baseline = None, restore_best_weights = True, verbose = verbose
)
return rlr, es#, ckp
def y_arr_to_df(y_arr, cols=None):
if cols is None:
cols = y.columns
return pd.DataFrame(y_arr, columns=cols)
error_classes = [
'cyclooxygenase_inhibitor',
'dopamine_receptor_antagonist',
'glutamate_receptor_antagonist',
'adrenergic_receptor_antagonist',
'dna_inhibitor'
]
def y_to_error_classes(y_df):
# y_df_wo = pd.DataFrame(y_df.drop(error_classes, axis=1).max(axis=1))
# y_df_wo.columns = ['other']
y_df_w = y_df[error_classes]
# y_res = pd.concat([y_df_wo, y_df_w], axis=1)
return y_df_w
y_to_error_classes(y).sum(axis=0)
###Output
_____no_output_____
###Markdown
Deep multilabel model keras
###Code
from sklearn.utils import class_weight
batch_size = 128
def create_model(
num_cols_x, num_cols_y, hid_layers,
activations, dropout_rate,
lr, label_smoothing,
weight_decay=1e-5,
batch_norm=True, weight_norm=True
):
inp1 = tf.keras.layers.Input(shape = (num_cols_x, ))
x1 = inp1
if batch_norm:
x1 = tf.keras.layers.BatchNormalization()(x1)
x1 = tf.keras.layers.Dropout(dropout_rate[0])(x1)
for i, units in enumerate(hid_layers):
activation = activations[i]
if activation == 'leaky_relu':
dense = tf.keras.layers.Dense(units)
else:
dense = tf.keras.layers.Dense(units, activation=activation)
if weight_norm and weight_norm != 'output':
x1 = WeightNormalization(dense)(x1)
else:
x1 = dense(x1)
if activation == 'leaky_relu':
x1 = tf.keras.layers.LeakyReLU(alpha=0.01)(x1)
x1 = tf.keras.layers.Dropout(dropout_rate[i])(x1)
if batch_norm:
x1 = tf.keras.layers.BatchNormalization()(x1)
out_dense = tf.keras.layers.Dense(num_cols_y, activation='sigmoid')
if weight_norm:
x1 = WeightNormalization(out_dense)(x1)
else:
x1 = out_dense(x1)
model = tf.keras.models.Model(inputs=inp1, outputs=x1)
opt = tfa.optimizers.AdamW(learning_rate=lr, weight_decay=weight_decay)
# opt = tfa.optimizers.Lookahead(opt, sync_period=10)
model.compile(
optimizer=opt,
loss=tf.keras.losses.BinaryCrossentropy(label_smoothing=label_smoothing),
metrics=logloss
)
return model
EPOCHS = None
class DeepMultiLabelModel(object):
def __init__(self, params, model_name, weights_from=None, verbose=0, seed=42):
self.deep_params = params
self.model_name = model_name
self.seed = seed
self.num_epochs = EPOCHS or params['epochs']
self.model = None
self.history = None
        self.weights_from = weights_from
self.verbose = verbose
self.cv_models = []
self.classes = []
@property
def real_model(self):
if self.model is not None:
return self
if self.cv_models:
return self.cv_models[0]
return None
@property
def definition(self):
model = self.real_model
model_info = []
if model:
model.model.summary(print_fn=model_info.append)
return "\n".join(model_info)
@property
def train_history(self):
model = self.real_model
if model is None:
return None
return model.history
def fit(self, x_tr, y_tr, x_val, y_val, y0_tr, y0_val):
inp_size = x_tr.shape[1]
y_shape = y_val.shape[1]
hid_layer = self.deep_params['hid_layer']
activation = self.deep_params['activation']
dropout = self.deep_params['dropout']
learning_rate = self.deep_params['learning_rate']
label_smoothing = self.deep_params['label_smoothing']
batch_norm = self.deep_params.get('batch_norm', True)
weight_norm = self.deep_params.get('weight_norm', True)
init_non_scored_weights = self.deep_params.get('init_non_scored_weights', True)
cls_weight = self.deep_params.get('class_weight')
weight_decay = self.deep_params.get('weight_decay', 1e-5)
        # Scored model
model = create_model(
inp_size, y_shape, hid_layer, activation, dropout,
learning_rate, label_smoothing,
weight_decay=weight_decay,
batch_norm=batch_norm, weight_norm=weight_norm
)
if init_non_scored_weights:
ns_y_tr = y0_tr
ns_y_val = y0_val
if init_non_scored_weights == 'ALL_TARGETS':
ns_y_tr = np.hstack([y_tr, y0_tr])
ns_y_val = np.hstack([y_val, y0_val])
            # Non-scored model
model0 = create_model(
inp_size, ns_y_val.shape[1], hid_layer, activation, dropout,
learning_rate, label_smoothing,
weight_decay=weight_decay,
batch_norm=batch_norm, weight_norm=weight_norm
)
model0.fit(
x_tr, ns_y_tr, validation_data=(x_val, ns_y_val),
epochs = self.num_epochs, batch_size = batch_size,
callbacks = callbacks(self.verbose), verbose = self.verbose
)
# Transfer weights
for i in range(len(model.layers)-1):
model.layers[i].set_weights(model0.layers[i].get_weights())
self.history = model.fit(
x_tr, y_tr, validation_data=(x_val, y_val),
epochs = self.num_epochs, batch_size = batch_size,
callbacks = callbacks(self.verbose), verbose = self.verbose,
class_weight = cls_weight
)
self.model = model
def predict(self, X):
if isinstance(X, pd.DataFrame):
X = X.astype('float64').values
if self.cv_models:
preds = []
for model in self.cv_models:
preds.append(model.predict(X))
return np.mean(preds, axis=0)
else:
return self.model.predict(X)
def cv(self, X, y, y0, run_name=None, n_split=7, return_pred=False,
metric_fn=log_loss_metric, overfit=True, run_tags=None, max_score=None):
seed = self.seed
self.cv_models = []
splits = fold(X, y, n_split=n_split, seed=self.seed)
y = y.drop(columns=['drug_id'], errors='ignore')
y0 = y0.drop(columns=['drug_id'], errors='ignore')
ycols = y.columns
self.classes = ycols
yvals = y.astype(float).values
y0vals = y0.astype(float).values
X_vals = X.astype('float64').values
test_pred = pd.DataFrame(index=X.index, columns=ycols)
test_pred.loc[:,:] = 0
model_def = None
initial_time = time()
for n, (tr, te) in enumerate(splits):
start_time = time()
            # Training and validation samples for this fold
            x_tr, x_test = X_vals[tr], X_vals[te]
            # Targets of the auxiliary (non-scored) task
            y0_tr, y0_test = y0vals[tr], y0vals[te]
            # Targets of the main (scored) task
            y_tr, y_test = yvals[tr], yvals[te]
            # Further split the training part into train/val
if not overfit:
ind_tr, ind_val = train_test_split(pd.DataFrame(x_tr), pd.DataFrame(y_tr))
x_tr, x_val = x_tr[ind_tr], x_tr[ind_val]
y_tr, y_val = y_tr[ind_tr], y_tr[ind_val]
y0_tr, y0_val = y0_tr[ind_tr], y0_tr[ind_val]
else:
x_val = x_test
y_val = y_test
y0_val = y0_test
model = DeepMultiLabelModel(
self.deep_params, model_name=self.model_name,
verbose=self.verbose, seed=self.seed
)
model.fit(x_tr, y_tr, x_val, y_val, y0_tr, y0_val)
model_def = model.definition
test_pred.loc[te, ycols] = model.predict(x_test)
self.cv_models.append(model)
oof = metric_fn(y.loc[te, ycols], test_pred.loc[te, ycols])
print(f'[{str(datetime.timedelta(seconds = time() - start_time))[2:7]}], Fold {n}: {oof}')
if max_score and oof > max_score:
print(f'break cv execution {oof} > {max_score}')
if not return_pred:
return None
else:
return None, None
logloss_valid = metric_fn(y, test_pred)
print(f'Valid logloss: {logloss_valid}')
if has_internet and run_name:
mlflow.set_experiment('Kaggle-MOA-{}'.format(self.model_name))
with mlflow.start_run(run_name=run_name):
mlflow.log_params(dict_flatten({
'n_split': n_split,
'p_min': p_min,
'p_max': p_max,
'nn': self.deep_params
}))
mlflow.log_metric(key="logloss_valid", value=logloss_valid)
run_tags = run_tags or {}
run_tags.update({
'model_def': model_def,
'run': run_name,
'run_time': time() - initial_time
})
mlflow.set_tags(run_tags)
if not return_pred:
return logloss_valid
else:
return logloss_valid, test_pred
def errors(self, X, y):
        # Count errors where the target should be all zeros but the prediction is not
        # Count errors where some class should be present but the prediction is all zeros
        # lgb_zero can be applied to reduce these errors
y = y.drop(columns=['drug_id'], errors='ignore')
ycols = y.columns
seed_everything(self.seed)
_, te = train_test_split(X, y, n_split=5)
        x_val = X.astype('float64').values[te]
y_true = y.loc[te, ycols].reset_index(drop=True)
y_pred = y_arr_to_df(self.predict(x_val), ycols)
non_zeros = y_true[(y_true.T != 0).any()].index
all_zeros = y_true[~(y_true.T != 0).any()].index
clip_p_min = 1e-5
clip_p_max = 1 - 1e-5
y_pred_clip = np.clip(y_pred, clip_p_min, clip_p_max)
print('Logloss:', log_loss_metric(y_true, y_pred))
print('Logloss all zeros:', log_loss_metric(y_true.loc[all_zeros, :], y_pred.loc[all_zeros, :]))
print('Logloss non zeros:', log_loss_metric(y_true.loc[non_zeros, :], y_pred.loc[non_zeros, :]))
print('Logloss clip:', log_loss_metric(y_true, y_pred_clip))
print('Logloss all zeros clip:', log_loss_metric(y_true.loc[all_zeros, :], y_pred_clip.loc[all_zeros, :]))
print('Logloss non zeros clip:', log_loss_metric(y_true.loc[non_zeros, :], y_pred_clip.loc[non_zeros, :]))
losses = []
for i in range(y_true.shape[1]):
y_true_cl = y_true.iloc[:,i]
y_pred_cl = y_pred.iloc[:,i]
losses.append({
"index": i,
"class": y_true_cl.name,
'true_0': len(y_true_cl[y_true_cl == 0]),
'true_1': len(y_true_cl[y_true_cl == 1]),
"loss": log_loss(y_true_cl.values, y_pred_cl.values, labels=[0, 1]),
'pred_hist_0': y_pred_cl[y_pred_cl <= 0.5].round(1).value_counts().sort_index().reset_index().values,
'pred_hist_1': y_pred_cl[y_pred_cl > 0.5].round(1).value_counts().sort_index().reset_index().values,
})
return pd.DataFrame(losses).set_index(['index', 'class']).sort_values('loss', ascending=False)
###Output
_____no_output_____
###Markdown
Deep multilabel model torch Term model
###Code
def split_to_terms(s):
return s.split('_')
terms = pd.DataFrame({'terms': y.columns.map(split_to_terms)}).explode('terms')['terms'].value_counts()
terms = list(terms[terms > 1].index)
terms_map = {t: i for i, t in enumerate(terms)}
def y_to_terms(y_df):
term_vals = []
for _, row in y_df.iterrows():
new_classes = [0] * len(terms)
terms_1 = set(pd.DataFrame(row[row > 0].index.map(split_to_terms)).explode(0)[0].values.tolist())
for term in terms_1:
if term not in terms_map:
continue
new_classes[terms_map[term]] = 1
term_vals.append(new_classes)
return pd.DataFrame(term_vals, columns=terms)
# preprocess_params, deep_params, _, _, _ = ({'fe_cluster': {'enabled': True, 'n_clusters_c': 6, 'n_clusters_g': 44}, 'fe_stats': {'enabled': False}, 'pca': {'enabled': False}, 'scaler': 'none', 'shuffle_cols': True, 'variance_reduction': {'enabled': True, 'threshold': 0.9585049261745544}, 'use_zero_pred_model': True}, {'activation': ('selu', 'swish', 'swish'), 'dropout': (0.7, 0.7, 0.3), 'hid_layer': (1152, 1152, 2048), 'init_non_scored_weights': False, 'label_smoothing': 0.0007000000000000001, 'learning_rate': 0.016, 'epochs': 500}, {'threshold': 0}, 29, 1)
# term_model = DeepMultiLabelModel(deep_params, 'terms')
# y_term = y_to_terms(pd.concat([y, y0], axis=1))
# term_model.cv(X_p, y_term, y0, n_split=5)
###Output
_____no_output_____
###Markdown
Zero class prediction model
###Code
# X_p, X_test_p = preprocess_X(fe_params, X.copy(), X_test.copy())
# zero_model = DeepMultiLabelModel(nn_params, 'zero')
# y_zero = y_to_zero_class(y)
# zero_model.cv(X_p, y_zero, y0, n_split=7, run_name='tune_nn_1')
# zero_model.predict(X_test_p)
###Output
_____no_output_____
###Markdown
Error class prediction model
###Code
# fe_params, nn_params, _, seed, _ = (
# {'fe_cluster': {'enabled': True, 'n_clusters_c': 14, 'n_clusters_g': 39}, 'fe_stats': {'enabled': True}, 'pca': {'enabled': False}, 'scaler': 'quantile', 'shuffle_cols': True, 'variance_reduction': {'enabled': False}},
# {'activation': ('elu', 'elu', 'elu', 'elu'), 'batch_norm': True,
# 'dropout': (0.3, 0.3, 0.4, 0.3), 'epochs': 100, 'hid_layer': (512, 1024, 512, 2048),
# 'init_non_scored_weights': False, 'label_smoothing': 0.0001380444271082826,
# 'learning_rate': 0.4083831289327425, 'weight_norm': True,
# # 'class_weight': {0: 1, 1: 7, 2: 6, 3: 7, 4: 7, 5: 5}
# },
# {'zero_threshold': 0.9837308739197401}, 13, 2
# )
# seed_everything(seed)
# X_p, X_test_p = preprocess_X(fe_params, X.copy(), X_test.copy())
# error_model = DeepMultiLabelModel(nn_params, 'errors', verbose=0)
# y_error = y_to_error_classes(y)
# error_model.cv(X_p, y_error, y0, n_split=5)
# loss_df = error_model.errors(X_p, y_error)
# loss_df
###Output
_____no_output_____
###Markdown
Calibration
###Code
# https://github.com/cerlymarco/MEDIUM_NoteBook/blob/master/NeuralNet_Calibration/NeuralNet_Calibration.ipynb
def fit_TemperatureCalibration(train_X_y, valid_X_y=None, epochs=100):
### inspired by: https://github.com/stellargraph/stellargraph/blob/develop/stellargraph/calibration.py ###
T = tf.Variable(tf.ones(shape=(1,)))
history = []
early_stopping = False
optimizer = tf.keras.optimizers.Adam(learning_rate=0.001)
def cost(T, x, y):
scaled_logits = tf.multiply(x=x, y=1.0 / T)
cost_value = tf.reduce_mean(
tf.nn.softmax_cross_entropy_with_logits(logits=scaled_logits, labels=y)
)
return cost_value
def grad(T, x, y):
with tf.GradientTape() as tape:
cost_value = cost(T, x, y)
return cost_value, tape.gradient(cost_value, T)
X_train, y_train = train_X_y
if valid_X_y:
X_valid, y_valid = valid_X_y
early_stopping = True
for epoch in range(epochs):
train_cost, grads = grad(T, X_train, y_train)
optimizer.apply_gradients(zip([grads], [T]))
if early_stopping:
val_cost = cost(T, X_valid, y_valid)
if (len(history) > 0) and (val_cost > history[-1][1]):
break
else:
history.append([train_cost, val_cost, T.numpy()[0]])
else:
history.append([train_cost, T.numpy()[0]])
history = np.asarray(history)
temperature = history[-1, -1]
return temperature
def calibrated_proba(logits, temperature):
scaled_prediction = logits / temperature
return np.exp(scaled_prediction) / np.sum(np.exp(scaled_prediction), axis=-1, keepdims=True)
def calibrate(model, X, y, y0, run_name=None, n_split=7, metric_fn=log_loss_metric, overfit=True):
seed_everything(model.seed)
ind_tr, ind_te = train_test_split(X, y, n_split=n_split)
x_tr, y_tr, y0_tr = X.iloc[ind_tr, :].reset_index(drop=True), y.iloc[ind_tr, :].reset_index(drop=True), y0.iloc[ind_tr, :].reset_index(drop=True)
x_te, y_te, y0_te = X.iloc[ind_te, :].reset_index(drop=True), y.iloc[ind_te, :].reset_index(drop=True), y0.iloc[ind_te, :].reset_index(drop=True)
y_pred_te = model.predict(x_te)
print(metric_fn(y_te, y_arr_to_df(y_pred_te, y_te.columns)))
y_pred_tr = model.predict(x_tr)
calib_temperature = fit_TemperatureCalibration((y_pred_tr, y_tr), (y_pred_te, y_te))
print(calib_temperature)
print(y_pred_te)
y_pred_te = calibrated_proba(y_pred_te, calib_temperature)
print(y_pred_te)
y_pred_te = y_arr_to_df(y_pred_te, y_te.columns)
return metric_fn(y_te, y_pred_te)
# calibrate(error_model, X_p.copy(), y_to_error_classes(y.copy()), y0.copy())
###Output
_____no_output_____
###Markdown
Blender model
###Code
!mkdir -p bundles
class ModelBlender(object):
def __init__(self, deep_params, blend_params, seed=42, verbose=0, preprocess_params=None):
self.deep_params = deep_params
self.num_epochs = deep_params['epochs']
self.blend_params = blend_params
self.seed = seed
self.preprocess_params = preprocess_params
zero_params = blend_params.get('zero_params') or deep_params
errors_params = blend_params.get('errors_params') or deep_params
self.model_main = DeepMultiLabelModel(deep_params, model_name='main', seed=seed, verbose=verbose)
self.model_zero = DeepMultiLabelModel(zero_params or deep_params, model_name='zero', seed=seed, verbose=verbose)
self.model_errors = DeepMultiLabelModel(errors_params or deep_params, model_name='errors', seed=seed, verbose=verbose)
self.calib_temperature = None
self.classes = []
self.metrics = {}
@property
def definition(self):
return self.model_main.definition
def predict(self, X):
y = self.model_main.predict(X)
        # If enabled, force zeros wherever the zero-prediction model is confident
zero_pred_threshold = self.blend_params.get('zero_threshold', 0)
if zero_pred_threshold > 0:
zero_preds = self.model_zero.predict(X)[:, 0]
override_ind = zero_preds > zero_pred_threshold
print('Override to zeros: {} rows'.format(len(override_ind[override_ind])))
y[override_ind, :] = 0.
if self.blend_params.get('use_error_class'):
error_class_indices = [self.classes.index(e) for e in error_classes]
y[:, error_class_indices] = self.model_errors.predict(X)
return y
def cv(self, X, y, y0, run_name=None, n_split=7,
metric_fn=log_loss_metric, overfit=True,
run_tags=None, return_pred=True
):
seed_everything(self.seed)
y_te = y.drop(columns=['drug_id'], errors='ignore')
ycols = list(y_te.columns)
self.classes = ycols
preds = {}
_, y_pred = self.model_main.cv(
X, y, y0,
n_split=n_split, metric_fn=metric_fn, overfit=overfit,
run_tags=run_tags, run_name=run_name, return_pred=True, max_score=0.019
)
if y_pred is None:
if return_pred:
return None, None
return None
self.metrics['initial'] = metric_fn(y_te, y_pred)
preds['initial'] = y_pred
print('INITIAL', self.metrics['initial'])
zero_threshold = self.blend_params.get('zero_threshold', 0)
if zero_threshold > 0:
_, y_pred_zero = self.model_zero.cv(
X, y_to_zero_class(y_te), y_to_zero_class(y0),
n_split=n_split, metric_fn=metric_fn, overfit=overfit,
run_tags=run_tags, run_name=run_name, return_pred=True
)
zero_preds = y_pred_zero.iloc[:, 0]
override_ind = zero_preds > zero_threshold
print('Override to zeros: {} rows'.format(len(override_ind[override_ind])))
y_pred.loc[override_ind, :] = 0.
self.metrics['after_zero'] = metric_fn(y_te, y_pred)
preds['after_zero'] = y_pred
print('AFTER ZERO', self.metrics['after_zero'])
if self.blend_params.get('use_error_class'):
_, y_pred_errors = self.model_errors.cv(
X, y_to_error_classes(y_te), y0,
n_split=n_split, metric_fn=metric_fn, overfit=overfit,
run_tags=run_tags, run_name=run_name, return_pred=True
)
y_pred.loc[:, error_classes] = y_pred_errors.loc[:, error_classes]
self.metrics['after_error'] = metric_fn(y_te, y_pred)
preds['after_error'] = y_pred
print('AFTER ERROR', self.metrics['after_error'])
logloss_valid = metric_fn(y_te, y_pred)
self.metrics['final'] = logloss_valid
preds['final'] = y_pred
if has_internet and run_name:
bundle = {
'seed': self.seed,
'n_split': n_split,
'p_min': p_min,
'p_max': p_max,
'preprocess_params': self.preprocess_params,
'deep_params': self.deep_params,
'blend_params': self.blend_params,
'logloss_valid': logloss_valid,
'metrics': self.metrics,
'model_def': self.definition,
'run_name': run_name,
'run_tags': run_tags,
'predictions': preds
}
bundle_path = os.path.join('bundles', f'{int(time())}.pickle')
with open(bundle_path, 'wb') as fbundle:
pickle.dump(bundle, fbundle)
print(f'Write bundle: {bundle_path}')
mlflow.set_experiment('Kaggle-MOA-Blend')
with mlflow.start_run(run_name=run_name):
mlflow.log_params(dict_flatten({
'seed': self.seed,
'n_split': n_split,
'p_min': p_min,
'p_max': p_max,
'preprocess_params': self.preprocess_params,
'deep_params': self.deep_params,
'blend_params': self.blend_params
}))
mlflow.log_metric(key="logloss_valid", value=logloss_valid)
mlflow.log_metrics(self.metrics)
tags = {
'model_def': self.definition,
'run': run_name,
'bundle_path': bundle_path
}
tags.update(run_tags)
mlflow.set_tags(tags)
if return_pred:
return logloss_valid, y_pred
return logloss_valid
evaluate_context = {}
def create_and_evaluate_model(
args, models=None, predictions=None, predictions_cv=None,
run_name='', n_split=7, verbose=0, overfit=False
):
print(args)
preprocess_params, deep_params, blend_params, seed, y_quantiles = args
try:
        X_p, X_test_p, y_p, y0_p = preprocess_X(preprocess_params, X.copy(), X_test.copy(), y.copy(), y0.copy(), seed=seed)  # use the params passed in args, not the global fe_params
except Exception as e:
print(e)
return 0.19
model = ModelBlender(deep_params, blend_params, seed=seed, verbose=verbose, preprocess_params=preprocess_params)
evaluate_context['current_iter'] = {
'model': model
}
logloss_valid, y_pred = model.cv(
X_p, y_p, y0_p, n_split=n_split, overfit=overfit,
run_name=run_name, run_tags={'args': str(args)}, return_pred=True
)
if logloss_valid is None:
return 0.19
if models is not None:
models.append(model)
if predictions is not None:
predictions.append(model.predict(X_test_p))
if predictions_cv is not None:
predictions_cv.append(y_pred)
evaluate_context['last_iter'] = {
'model': model,
'logloss': logloss_valid
}
print(f'Final valid logloss: {logloss_valid}')
return logloss_valid
###Output
_____no_output_____
###Markdown
Manual tuning
###Code
from copy import deepcopy
models_final = []
p_min = 0.0015
p_max = 0.9985
batch_size = 128
seed = 8
fold = fold_simple
baseline_num = 5
baseline_score = round(0.01562, 6)
fe_params = {
'fe_cluster': {'enabled': False, 'n_clusters_c': 6, 'n_clusters_g': 44},
'fe_stats': {'enabled': False},
'pca': {'enabled': True, 'n_comp_cells': 50, 'n_comp_genes': 600},
'scaler': 'quantile', 'shuffle_cols': True,
'variance_reduction': {'enabled': True, 'threshold': 0.8}
}
nn_params = {
'batch_norm': True, 'weight_norm': True,
'activation': ['elu', 'elu', 'elu', 'elu'],
'dropout': [0.3, 0.3, 0.4, 0.3],
'hid_layer': [512,1024,512,2048],
'init_non_scored_weights': False,
'label_smoothing': 0.0015,
'learning_rate': 0.001, 'epochs': 25
}
baseline_conf = (
fe_params,
nn_params,
{'zero_threshold': 0, "use_error_class": False}, 8, 1
)
new_conf = deepcopy(baseline_conf)
# new_conf[2]['use_error_class'] = True
new_score = create_and_evaluate_model(
new_conf, models=models_final,
run_name=f'base_{baseline_num}', n_split=5, verbose=0
)
print(
round(new_score - baseline_score, 6),
f'{100 * round(new_score/baseline_score, 3) - 100:.2f}%',
round(new_score, 6)
)
###Output
_____no_output_____
###Markdown
Public models
###Code
p_min = 1e-5
p_max = 1.-1e-5
batch_size = 128
fold = fold_simple
###Output
_____no_output_____
###Markdown
keras NN +PCA with Label smoothing CV[0.01562] LB [0.01859]
###Code
# https://www.kaggle.com/riadalmadani/keras-nn-pca-with-label-smoothing
seed = 8
fe_params = {
'fe_cluster': {'enabled': False, 'n_clusters_c': 6, 'n_clusters_g': 44},
'fe_stats': {'enabled': False},
'pca': {'enabled': True, 'n_comp_cells': 50, 'n_comp_genes': 600},
'scaler': 'quantile', 'shuffle_cols': True,
'variance_reduction': {'enabled': True, 'threshold': 0.8}
}
X_p, X_test_p, y_p, y0_p = preprocess_X(fe_params, X.copy(), X_test.copy(), y.copy(), y0.copy(), seed=seed)
print(X_p.shape, 'should be: (, 1039)')
nn_params = {
'batch_norm': True, 'weight_norm': True,
'activation': ['relu', 'relu', 'relu'],
'dropout': [0.2, 0.5, 0.2],
'hid_layer': [2048,1048,512],
'init_non_scored_weights': False,
'label_smoothing': 0.0015,
'learning_rate': 0.001, 'epochs': 500
}
nn_params = {
'batch_norm': True, 'weight_norm': True,
'activation': ['elu', 'elu', 'elu', 'elu'],
'dropout': [0.3, 0.3, 0.4, 0.3],
'hid_layer': [512,1024,512,2048],
'init_non_scored_weights': False,
'label_smoothing': 0.0015,
'learning_rate': 0.001, 'epochs': 35
}
main_model = DeepMultiLabelModel(nn_params, 'main', seed=seed, verbose=0)
main_model.cv(X_p, y_p, y0_p, n_split=7, run_name='public_nn_1', overfit=True)
# Epoch 5/35 0.0180
# Epoch 10/35 0.0160
###Output
(21948, 1043) should be: (, 1039)
[02:13], Fold 0: 0.016077665932851228
[02:40], Fold 1: 0.016139014846157457
[02:21], Fold 2: 0.016049516507951788
[02:16], Fold 3: 0.015945916239989495
[02:21], Fold 4: 0.016240827478692616
[02:29], Fold 5: 0.015985089542815856
[02:30], Fold 6: 0.015586100719255144
[02:30], Fold 7: 0.015697994209915803
[02:16], Fold 8: 0.015996964666843715
[02:08], Fold 9: 0.016317549712339566
[02:39], Fold 10: 0.015972715173830587
[02:16], Fold 11: 0.01589995101765791
Valid logloss: 0.015992442170691762
###Markdown
Pytorch-RankGauss-PCA-NN CV [0.014572] LB [0.01839]
###Code
# https://www.kaggle.com/vbmokin/moa-pytorch-rankgauss-pca-nn-upgrade-3d-visual
seed = 0
fe_params = {
'fe_cluster': {'enabled': False, 'n_clusters_c': 6, 'n_clusters_g': 44},
'fe_stats': {'enabled': False},
'pca': {'enabled': True, 'n_comp_cells': 60, 'n_comp_genes': 463},
'scaler': 'quantile', 'shuffle_cols': True,
'variance_reduction': {'enabled': True, 'threshold': 0.9}
}
X_p, X_test_p, y_p, y0_p = preprocess_X(fe_params, X.copy(), X_test.copy(), y.copy(), y0.copy(), seed=seed)
print(X_p.shape, 'should be: (, 1015)')
nn_params = {
'batch_norm': True, 'weight_norm': True,
'activation': ['leaky_relu', 'leaky_relu'],
'dropout': [0.25, 0.25],
'hid_layer': [1500, 1500],
'init_non_scored_weights': False,
'label_smoothing': 0,
'learning_rate': 0.001, 'epochs': 25
}
main_model = DeepMultiLabelModel(nn_params, 'main', verbose=0, seed=seed)
main_model.cv(X_p, y_p, y0_p, n_split=7, run_name='public_nn_2', overfit=True)
# FOLD: 0, EPOCH: 5, valid_loss: 0.017283157035708426
# FOLD: 0, EPOCH: 10, valid_loss: 0.01737722285091877
# FOLD: 0, EPOCH: 24, valid_loss: 0.016081880070269106
###Output
(21948, 1015) should be: (, 1015)
[01:54], Fold 0: 0.01641352227166248
[01:44], Fold 1: 0.016291257878713786
###Markdown
MODEL1 CV [0.01562060391771847] LB [0.01833]
###Code
# https://www.kaggle.com/vikazrajpurohit/3-model-training-and-inference#1.-MODEL1-CV-[0.01562060391771847]-LB-[0.01833]
seed = 42
# TODO:
# Fine-tune and full transfer learning
fe_params = {
'fe_cluster': {'enabled': True, 'n_clusters_c': 4, 'n_clusters_g': 22},
'fe_stats': {'enabled': True},
'pca': {'enabled': True, 'n_comp_cells': 50, 'n_comp_genes': 600, 'n_clusters': 5},
'scaler': 'quantile', 'shuffle_cols': False,
'variance_reduction': {'enabled': True, 'threshold': 0.85}
}
y0_s = pd.concat([y, y0], axis=1)
X_p, X_test_p = preprocess_X(fe_params, X.copy(), X_test.copy(), seed=seed)
print(X_p.shape, 'should be: (, 1240)')
nn_params = {
'batch_norm': True, 'weight_norm': 'output',
'activation': ['leaky_relu', 'leaky_relu', 'leaky_relu', 'leaky_relu'],
'dropout': [0.5, 0.35, 0.3, 0.25],
'hid_layer': [1500, 1250, 1000, 750],
'init_non_scored_weights': 'ALL_TARGETS',
'label_smoothing': 0,
'learning_rate': 1e-3, 'epochs': 25
}
main_model = DeepMultiLabelModel(nn_params, 'main', verbose=0, seed=0)
main_model.cv(X_p, y, y0_s, n_split=7, run_name='tune_public_3', overfit=True)
# ALL_TARGETS:
# SEED: 0, FOLD: 0, EPOCH: 5, train_loss: 0.013472, valid_loss: 0.009183
# SEED: 0, FOLD: 0, EPOCH: 23, train_loss: 0.012016, valid_loss: 0.008624
# SCORED_ONLY:
# SEED: 0, FOLD: 0, EPOCH: 5, train_loss: 0.019837, valid_loss: 0.016361
###Output
_____no_output_____
###Markdown
Auto Tuning
###Code
from hyperopt import fmin, tpe, hp, Trials, space_eval
from hyperopt.pyll.stochastic import sample as ho_sample
model_shapes = [
# https://www.kaggle.com/sinamhd9/mechanisms-of-action-moa-tutorial/data
[[1792, 1024, 2048], [0.65, 0.7, 0.6], ['relu', 'relu', 'relu']],
[[1792, 1024, 2048], [0.65, 0.7, 0.6], ['selu','swish','swish']],
[[1152, 1152, 2048], [0.7,0.7,0.3], ['selu', 'swish', 'swish']],
[[1408, 1152, 1920], [0.7,0.65,0.45], ['selu','swish','swish']],
[[1280,1152,1920], [0.7,0.55,0.3], ['selu','swish','swish']],
[[896, 768, 2048], [0.7,0.7,0.3], ['selu','swish','elu']],
[[1280, 768, 2048], [0.7,0.7,0.3], ['selu','swish','elu']],
[[1152, 2048, 1280], [0.7,0.7,0.6], ['selu','swish','swish']],
[[768,512,2048], [0.65,0.7,0.3], ['selu','swish','selu']],
[[1664,1408,1280], [0.65,0.7,0.6], ['selu','swish','swish']],
[[1408,1280,2048], [0.65, 0.65, 0.3], ['selu','swish','elu']],
# https://www.kaggle.com/omniking1999/notebook-v3-0
[[2048,1048,512], [0.2, 0.5, 0.2], ['relu', 'relu', 'relu']],
[[512, 1024, 512, 2048], [0.3, 0.3, 0.4, 0.3], ['elu','elu','elu','elu']],
[[1500, 1250, 1000, 750], [0.5, 0.35, 0.3, 0.25], ['leaky_relu', 'leaky_relu', 'leaky_relu', 'leaky_relu']]
]
seeds = [34, 9, 42, 11, 22, 8, 13, 25, 21, 29]
space_preprocess = [{
'scaler': hp.choice('scaler', ['gauss', 'quantile', 'none', 'standard']),
'fe_stats': hp.choice('fe_stats', [{
"enabled": hp.choice('fe_stats_enabled', [True, False]),
}]),
'pca': hp.choice('pca', [{
"enabled": hp.choice('pca_enabled1', [True]),
"n_comp_genes": hp.uniformint('n_comp_genes', 300, 700),
"n_comp_cells": hp.uniformint('n_comp_cells', 30, 90),
}, {
"enabled": hp.choice('pca_enabled2', [False]),
}]),
'fe_cluster': hp.choice('fe_cluster', [{
"enabled": hp.choice('fe_cluster_enabled1', [False]),
},
{
"enabled": hp.choice('fe_cluster_enabled2', [True]),
"n_clusters_g": hp.uniformint('n_clusters_g', 15, 45),
"n_clusters_c": hp.uniformint('n_clusters_c', 5, 15),
}]),
'variance_reduction': hp.choice('variance_reduction', [{
"enabled": hp.choice('variance_reduction_enabled1', [True]),
"threshold": hp.uniform('threshold', 0.7, 0.9)
}, {
"enabled": hp.choice('variance_reduction_enabled2', [False]),
}]),
'shuffle_cols': hp.choice('shuffle_cols', [True, False]),
}]
space_deep = [{
"hid_layer": hp.choice(f'hid_layer_{i}', [model_shape[0]]),
'activation': hp.choice(f'activation_{i}', [model_shape[2]]),
'dropout': hp.choice(f'dropout_{i}', [model_shape[1]]),
'learning_rate': hp.uniform(f'learning_rate_{i}', 0.001, 0.5),
'label_smoothing': hp.uniform(f'label_smoothing{i}', 0.0001, 0.003),
'init_non_scored_weights': hp.choice(f'init_non_scored_weights_{i}', ['ALL_TARGETS', 'ONLY_NON_SCORED', False]),
'batch_norm': hp.choice(f'batch_norm_{i}', [True, False]),
'weight_norm': hp.choice(f'weight_norm_{i}', [True, False]),
'epochs': hp.choice(f'epochs_{i}', [50, 100, 200])
} for i, model_shape in enumerate(model_shapes)
]
###Output
_____no_output_____
###Markdown
Error class
###Code
error_class_space = [
hp.choice('preprocess', space_preprocess),
hp.choice('deep_params', space_deep),
hp.choice('seed', seeds)
]
def evaluate_error_class(args, run_name=None, n_split=7):
print(args)
fe_params, nn_params, seed = args
seed_everything(seed)
X_p, X_test_p, y_p, y0_p = preprocess_X(fe_params, X.copy(), X_test.copy(), y.copy(), y0.copy(), seed=seed)
error_model = DeepMultiLabelModel(nn_params, 'errors', verbose=0)
y_error = y_to_error_classes(y_p)
return error_model.cv(
X_p, y_error, y0_p, n_split=n_split, run_name=run_name,
run_tags={"args": str(args)}
)
trials_version = 'error_class'
run_name = f'tune_{trials_version}'
space = error_class_space
score_func = partial(evaluate_error_class, run_name=run_name, n_split=5)
ho_sample(space)
###Output
_____no_output_____
###Markdown
Zero class
###Code
zero_class_space = [
hp.choice('preprocess', space_preprocess),
hp.choice('deep_params', space_deep),
hp.choice('seed', seeds)
]
def evaluate_zero_class(args, run_name=None, run_tags=None, n_split=7):
print(args)
fe_params, nn_params, seed = args
seed_everything(seed)
X_p, X_test_p, y_p, y0_p = preprocess_X(fe_params, X.copy(), X_test.copy(), y.copy(), y0.copy(), seed=seed)
error_model = DeepMultiLabelModel(nn_params, 'zero', verbose=0)
y_error = y_to_zero_class(y_p)
return error_model.cv(
X_p, y_error, y_to_zero_class(y0_p), n_split=n_split,
run_name=run_name, run_tags={"args": str(args)}
)
trials_version = 'zero_class'
run_name = f'tune_{trials_version}'
space = zero_class_space
score_func = partial(evaluate_zero_class, run_name=run_name, n_split=5)
ho_sample(space)
###Output
_____no_output_____
###Markdown
Blender
###Code
blend_space = [
hp.choice('preprocess', space_preprocess),
hp.choice('deep_params', space_deep),
hp.choice('blend_params', [
{
'zero_threshold': hp.uniform('zero_threshold1', 0.95, 0.999),
'use_error_class': hp.choice('use_error_class1', [True, False]),
},
{
'zero_threshold': hp.choice('zero_threshold2', [0]),
'use_error_class': hp.choice('use_error_class2', [True, False]),
}]),
hp.choice('seed', seeds),
hp.choice('y_quantiles', [1, 2]),
]
trials_version = 'blender2'
run_name = f'tune_{trials_version}'
space = blend_space
score_func = partial(create_and_evaluate_model, models=None, run_name=run_name, n_split=7)
ho_sample(space)
###Output
_____no_output_____
###Markdown
Run!
###Code
!mkdir -p trials/
start_secs = time()
trials = Trials()
fold = fold_simple
trials_file = "trials/trial_{}.hp".format(trials_version)
if isfile(trials_file):
print(f'load trials from: {trials_file}')
trials = pickle.load(open(trials_file, 'rb'))
EPOCHS = None
for i in range(60):
sug = tpe.suggest
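# every third iteration swaps in random sampling (tpe.rand.suggest) instead of TPE suggestions, see the check below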
if i % 3 == 0:
sug = tpe.rand.suggest
best = fmin(
fn=score_func,
space=space,
algo=sug,
max_evals=2 * (i + 1),
trials=trials
)
pickle.dump(trials, open(trials_file, "wb"))
###Output
({'fe_cluster': {'enabled': True, 'n_clusters_c': 5, 'n_clusters_g': 19}, 'fe_stats': {'enabled': False}, 'pca': {'enabled': False}, 'scaler': 'none', 'shuffle_cols': True, 'variance_reduction': {'enabled': True, 'threshold': 0.7234634592150213}}, {'activation': ('selu', 'swish', 'selu'), 'batch_norm': False, 'dropout': (0.65, 0.7, 0.3), 'epochs': 100, 'hid_layer': (768, 512, 2048), 'init_non_scored_weights': 'ALL_TARGETS', 'label_smoothing': 0.0006978288035937164, 'learning_rate': 0.3178535920260323, 'weight_norm': True}, {'use_error_class': True, 'zero_threshold': 0.985059066480174}, 22, 1)
[02:33], Fold 0: 0.019598324077761204
break cv execution 0.019598324077761204 > 0.019
({'fe_cluster': {'enabled': True, 'n_clusters_c': 15, 'n_clusters_g': 43}, 'fe_stats': {'enabled': True}, 'pca': {'enabled': True, 'n_comp_cells': 74, 'n_comp_genes': 557}, 'scaler': 'quantile', 'shuffle_cols': True, 'variance_reduction': {'enabled': True, 'threshold': 0.7189569644347072}}, {'activation': ('selu', 'swish', 'swish'), 'batch_norm': True, 'dropout': (0.7, 0.7, 0.6), 'epochs': 100, 'hid_layer': (1152, 2048, 1280), 'init_non_scored_weights': 'ONLY_NON_SCORED', 'label_smoothing': 0.0017502808718730962, 'learning_rate': 0.3230563364478111, 'weight_norm': False}, {'use_error_class': False, 'zero_threshold': 0}, 25, 2)
[04:43], Fold 0: 0.02259846712657795
break cv execution 0.02259846712657795 > 0.019
100%|██████████| 2/2 [08:31<00:00, 255.82s/trial, best loss: 0.19]
({'fe_cluster': {'enabled': True, 'n_clusters_c': 14, 'n_clusters_g': 42}, 'fe_stats': {'enabled': True}, 'pca': {'enabled': False}, 'scaler': 'standard', 'shuffle_cols': False, 'variance_reduction': {'enabled': True, 'threshold': 0.7886489739495448}}, {'activation': ('selu', 'swish', 'selu'), 'batch_norm': False, 'dropout': (0.65, 0.7, 0.3), 'epochs': 200, 'hid_layer': (768, 512, 2048), 'init_non_scored_weights': False, 'label_smoothing': 0.0029076049700110323, 'learning_rate': 0.19216429141100252, 'weight_norm': True}, {'use_error_class': False, 'zero_threshold': 0.9942998976729281}, 29, 2)
[01:29], Fold 0: 0.018801667985491877
[01:07], Fold 1: 0.019978372146150804
break cv execution 0.019978372146150804 > 0.019
({'fe_cluster': {'enabled': False}, 'fe_stats': {'enabled': False}, 'pca': {'enabled': True, 'n_comp_cells': 79, 'n_comp_genes': 411}, 'scaler': 'gauss', 'shuffle_cols': True, 'variance_reduction': {'enabled': False}}, {'activation': ('selu', 'swish', 'elu'), 'batch_norm': False, 'dropout': (0.7, 0.7, 0.3), 'epochs': 50, 'hid_layer': (896, 768, 2048), 'init_non_scored_weights': 'ONLY_NON_SCORED', 'label_smoothing': 0.0014884856868194656, 'learning_rate': 0.003136293809714334, 'weight_norm': False}, {'use_error_class': True, 'zero_threshold': 0.9612490662702131}, 8, 1)
[02:21], Fold 0: 0.020006752962845764
break cv execution 0.020006752962845764 > 0.019
100%|██████████| 4/4 [06:03<00:00, 90.96s/trial, best loss: 0.19]
({'fe_cluster': {'enabled': False}, 'fe_stats': {'enabled': False}, 'pca': {'enabled': True, 'n_comp_cells': 81, 'n_comp_genes': 606}, 'scaler': 'quantile', 'shuffle_cols': True, 'variance_reduction': {'enabled': False}}, {'activation': ('selu', 'swish', 'elu'), 'batch_norm': True, 'dropout': (0.7, 0.7, 0.3), 'epochs': 50, 'hid_layer': (896, 768, 2048), 'init_non_scored_weights': 'ONLY_NON_SCORED', 'label_smoothing': 0.001430271450218026, 'learning_rate': 0.306062767069022, 'weight_norm': True}, {'use_error_class': True, 'zero_threshold': 0}, 11, 1)
[03:21], Fold 0: 0.02013300735462696
break cv execution 0.02013300735462696 > 0.019
({'fe_cluster': {'enabled': False}, 'fe_stats': {'enabled': False}, 'pca': {'enabled': True, 'n_comp_cells': 85, 'n_comp_genes': 418}, 'scaler': 'quantile', 'shuffle_cols': False, 'variance_reduction': {'enabled': True, 'threshold': 0.8831168888228615}}, {'activation': ('relu', 'relu', 'relu'), 'batch_norm': True, 'dropout': (0.2, 0.5, 0.2), 'epochs': 100, 'hid_layer': (2048, 1048, 512), 'init_non_scored_weights': False, 'label_smoothing': 0.002797433536021185, 'learning_rate': 0.2897024957730486, 'weight_norm': True}, {'use_error_class': False, 'zero_threshold': 0.9662682868254961}, 22, 2)
[02:20], Fold 0: 0.019845850491750104
break cv execution 0.019845850491750104 > 0.019
100%|██████████| 6/6 [06:17<00:00, 62.92s/trial, best loss: 0.19]
({'fe_cluster': {'enabled': False}, 'fe_stats': {'enabled': False}, 'pca': {'enabled': False}, 'scaler': 'quantile', 'shuffle_cols': False, 'variance_reduction': {'enabled': False}}, {'activation': ('selu', 'swish', 'swish'), 'batch_norm': True, 'dropout': (0.7, 0.7, 0.3), 'epochs': 200, 'hid_layer': (1152, 1152, 2048), 'init_non_scored_weights': 'ONLY_NON_SCORED', 'label_smoothing': 0.0009771883416804452, 'learning_rate': 0.11761814563726697, 'weight_norm': True}, {'use_error_class': False, 'zero_threshold': 0}, 11, 2)
[04:20], Fold 0: 0.018952685312066182
[04:48], Fold 1: 0.018668267869753546
[04:50], Fold 2: 0.018318618348182707
[04:41], Fold 3: 0.019007333159604452
break cv execution 0.019007333159604452 > 0.019
({'fe_cluster': {'enabled': False}, 'fe_stats': {'enabled': True}, 'pca': {'enabled': True, 'n_comp_cells': 38, 'n_comp_genes': 683}, 'scaler': 'standard', 'shuffle_cols': True, 'variance_reduction': {'enabled': False}}, {'activation': ('relu', 'relu', 'relu'), 'batch_norm': False, 'dropout': (0.65, 0.7, 0.6), 'epochs': 50, 'hid_layer': (1792, 1024, 2048), 'init_non_scored_weights': 'ALL_TARGETS', 'label_smoothing': 0.0013432911732844785, 'learning_rate': 0.32248546838896364, 'weight_norm': True}, {'use_error_class': False, 'zero_threshold': 0.9875404088098492}, 29, 2)
[04:01], Fold 0: 0.021293196710370717
break cv execution 0.021293196710370717 > 0.019
100%|██████████| 8/8 [23:13<00:00, 174.18s/trial, best loss: 0.19]
({'fe_cluster': {'enabled': False}, 'fe_stats': {'enabled': False}, 'pca': {'enabled': True, 'n_comp_cells': 57, 'n_comp_genes': 321}, 'scaler': 'gauss', 'shuffle_cols': True, 'variance_reduction': {'enabled': False}}, {'activation': ('selu', 'swish', 'swish'), 'batch_norm': False, 'dropout': (0.65, 0.7, 0.6), 'epochs': 100, 'hid_layer': (1792, 1024, 2048), 'init_non_scored_weights': 'ALL_TARGETS', 'label_smoothing': 0.001149810150809531, 'learning_rate': 0.024465478000991126, 'weight_norm': True}, {'use_error_class': False, 'zero_threshold': 0.9961802014144556}, 25, 1)
[06:06], Fold 0: 0.016852943299564434
[05:36], Fold 1: 0.017091099638381697
[05:09], Fold 2: 0.016782110693644582
[05:06], Fold 3: 0.01686275479761972
[05:51], Fold 4: 0.01684331184450293
[05:59], Fold 5: 0.01698954409645141
[05:28], Fold 6: 0.016790426595935177
Valid logloss: 0.016887459584461237
INITIAL
0.016887459584461237
[04:10], Fold 0: 0.5930874584451227
[04:24], Fold 1: 0.5881314133614257
[04:30], Fold 2: 0.5835006920064596
[05:23], Fold 3: 0.5967700162426397
[04:44], Fold 4: 0.5816170011383814
[07:37], Fold 5: 0.5914943875145051
[04:11], Fold 6: 0.5882537655982037
Valid logloss: 0.5889791466928849
Override to zeros: 2 rows
AFTER ZERO
0.016887239420322404
###Markdown
Final prediction
###Code
# Collect all available predictions for blending here
from copy import deepcopy
fold = fold_simple
models_final = []
predictions_final = []
baseline_num = 2
EPOCHS = None
SPLITS = 12
final_conf = [
# http://datadigger.ru:5000/#/experiments/3/runs/4525174c3e574ac68426e02ab0f0ee3f
# ({'fe_cluster': {'enabled': True, 'n_clusters_c': 14, 'n_clusters_g': 39}, 'fe_stats': {'enabled': True}, 'pca': {'enabled': False}, 'scaler': 'quantile', 'shuffle_cols': True, 'variance_reduction': {'enabled': False}}, {'activation': ('elu', 'elu', 'elu', 'elu'), 'batch_norm': True, 'dropout': (0.3, 0.3, 0.4, 0.3), 'epochs': 100, 'hid_layer': (512, 1024, 512, 2048), 'init_non_scored_weights': False, 'label_smoothing': 0.0001380444271082826, 'learning_rate': 0.4083831289327425, 'weight_norm': True}, {'use_error_class': True, 'zero_threshold': 0.985}, 13, 2),
# http://datadigger.ru:5000/#/experiments/3/runs/0b83844bc5bd44b2a9665fe6bd9feee4
# ({'fe_cluster': {'enabled': True, 'n_clusters_c': 11, 'n_clusters_g': 41}, 'fe_stats': {'enabled': False}, 'pca': {'enabled': False}, 'scaler': 'quantile', 'shuffle_cols': True, 'variance_reduction': {'enabled': False}}, {'activation': ('elu', 'elu', 'elu', 'elu'), 'batch_norm': True, 'dropout': (0.3, 0.3, 0.4, 0.3), 'epochs': 100, 'hid_layer': (512, 1024, 512, 2048), 'init_non_scored_weights': False, 'label_smoothing': 0.00010848528437984486, 'learning_rate': 0.40431872111746137, 'weight_norm': True}, {'use_error_class': True, 'zero_threshold': 0.985}, 13, 1),
# http://datadigger.ru:5000/#/experiments/0/runs/82cc5898a2b54ce5832d93f6a1afd445
({'fe_cluster': {'enabled': True, 'n_clusters_c': 6, 'n_clusters_g': 44}, 'fe_stats': {'enabled': False}, 'pca': {'enabled': False}, 'scaler': 'none', 'shuffle_cols': True, 'variance_reduction': {'enabled': True, 'threshold': 0.9585049261745544}}, {'activation': ('selu', 'swish', 'swish'), 'batch_norm': True, 'dropout': (0.7, 0.7, 0.3), 'epochs': 100, 'hid_layer': (1152, 1152, 2048), 'init_non_scored_weights': False, 'label_smoothing': 0.0007000000000000001, 'learning_rate': 0.016, 'weight_norm': True}, {'use_error_class': True, 'zero_threshold': 0.985}, 29, 1),
(
{
'fe_cluster': {'enabled': False, 'n_clusters_c': 6, 'n_clusters_g': 44},
'fe_stats': {'enabled': False},
'pca': {'enabled': True, 'n_comp_cells': 50, 'n_comp_genes': 600},
'scaler': 'quantile', 'shuffle_cols': True,
'variance_reduction': {'enabled': True, 'threshold': 0.8}
},
{
'batch_norm': True, 'weight_norm': True,
'activation': ['elu', 'elu', 'elu', 'elu'],
'dropout': [0.3, 0.3, 0.4, 0.3],
'hid_layer': [512,1024,512,2048],
'init_non_scored_weights': False,
'label_smoothing': 0.0015,
'learning_rate': 0.001, 'epochs': 25
},
{
'use_error_class': True, 'zero_threshold': 0.985
},
8, 1
),
# public 1
(
{
'fe_cluster': {'enabled': False, 'n_clusters_c': 6, 'n_clusters_g': 44},
'fe_stats': {'enabled': False},
'pca': {'enabled': True, 'n_comp_cells': 50, 'n_comp_genes': 600},
'scaler': 'quantile', 'shuffle_cols': True,
'variance_reduction': {'enabled': True, 'threshold': 0.8}
},
{
'batch_norm': True, 'weight_norm': True,
'activation': ['elu', 'elu', 'elu', 'elu'],
'dropout': [0.3, 0.3, 0.4, 0.3],
'hid_layer': [512,1024,512,2048],
'init_non_scored_weights': False,
'label_smoothing': 0.0015,
'learning_rate': 0.001, 'epochs': 35
},
{
'use_error_class': True, 'zero_threshold': 0.985
},
8, 1
),
# best mlflow
(
{'fe_cluster': {'enabled': False}, 'fe_stats': {'enabled': False}, 'pca': {'enabled': False}, 'scaler': 'gauss', 'shuffle_cols': False, 'variance_reduction': {'enabled': False}},
{'activation': ('selu', 'swish', 'swish'), 'batch_norm': False, 'dropout': (0.65, 0.7, 0.6), 'epochs': 200,
'hid_layer': (1664, 1408, 1280), 'init_non_scored_weights': True, 'label_smoothing': 0.0001087446974391731, 'learning_rate': 0.04150188777502751, 'weight_norm': True},
{
'use_error_class': True, 'zero_threshold': 0.985
},
34, 1
),
# public 2
({
'fe_cluster': {'enabled': False, 'n_clusters_c': 6, 'n_clusters_g': 44},
'fe_stats': {'enabled': False},
'pca': {'enabled': True, 'n_comp_cells': 60, 'n_comp_genes': 463},
'scaler': 'quantile', 'shuffle_cols': True,
'variance_reduction': {'enabled': True, 'threshold': 0.9}
},
{
'batch_norm': True, 'weight_norm': True,
'activation': ['leaky_relu', 'leaky_relu'],
'dropout': [0.25, 0.25],
'hid_layer': [1500, 1500],
'init_non_scored_weights': False,
'label_smoothing': 0,
'learning_rate': 0.001, 'epochs': 25
},
{
'use_error_class': True, 'zero_threshold': 0.985
},
0, 1
),
# public 3
({
'fe_cluster': {'enabled': True, 'n_clusters_c': 4, 'n_clusters_g': 22},
'fe_stats': {'enabled': True},
'pca': {'enabled': True, 'n_comp_cells': 50, 'n_comp_genes': 600, 'n_clusters': 5},
'scaler': 'quantile', 'shuffle_cols': False,
'variance_reduction': {'enabled': True, 'threshold': 0.85}
},
{
'batch_norm': True, 'weight_norm': 'output',
'activation': ['leaky_relu', 'leaky_relu', 'leaky_relu', 'leaky_relu'],
'dropout': [0.5, 0.35, 0.3, 0.25],
'hid_layer': [1500, 1250, 1000, 750],
'init_non_scored_weights': True,
'label_smoothing': 0,
'learning_rate': 1e-3, 'epochs': 25
},
{
'use_error_class': True, 'zero_threshold': 0.985
},
42, 1
),
]
cv_preds = []
for conf in final_conf:
conf2 = deepcopy(conf)
new_score = create_and_evaluate_model(
conf2,
models=models_final,
predictions=predictions_final,
predictions_cv=cv_preds,
n_split=SPLITS,
run_name=f'base_{baseline_num}'
)
print('\nFinal results:')
# http://datadigger.ru:5000/#/experiments/3/runs/69b1f266675941378e47d747fd5c11fc
print('Should be lower than', 0.01709)
yres = y.drop(columns=['drug_id'], errors='ignore')
ycols = yres.columns
cv_res = []
for i, cv_pred in enumerate(cv_preds):
print(f'Logloss {i}:', log_loss_metric(yres, cv_pred))
cv_res.append(cv_pred.values)
final_pred = y_arr_to_df(np.mean(cv_res, axis=0), ycols)
print(f'CV:', log_loss_metric(yres, final_pred))
# Form the final predictions from the set of individual predictions
# The submission predictions are stored here
df_sample.loc[:, ycols] = 0
df_sample.loc[:, ycols] = np.mean(predictions_final, axis=0)
# For ctl_vehicle rows all classes are 0, so we simply zero them out
# Is this correct?
df_sample.loc[ind_te, ycols] = 0
display(df_sample.head())
###Output
_____no_output_____
###Markdown
Submission
###Code
df_sample.to_csv('submission.csv', index=False)
###Output
_____no_output_____
###Markdown
Error analysis
###Code
# Number of errors where everything should be 0 but actually is not
# Number of errors where something should be present but everything is actually 0
# lgb_zero can be applied to reduce these errors
res = y.copy()
res.loc[:, y.columns] = 0
# model = models_final[-1]
# preprocess_params, _, _, seed, _ = baseline_conf[-1]
model = error_model
preprocess_params = fe_params
seed_everything(seed)
_, te = train_test_split(X, y, n_split=5)
X_p, _, _, _ = preprocess_X(fe_params, X.copy(), X_test.copy(), y.copy(), y0.copy(), seed=seed)
x_val = X_p.astype('float64').values[te]
res.loc[te, y.columns] = model.predict(x_val)
y_pred = res.loc[te, y.columns]
y_true = y.loc[te, y.columns]
non_zeros = y_true[(y_true.T != 0).any()].index
all_zeros = y_true[~(y_true.T != 0).any()].index
clip_p_min = 1e-5
clip_p_max = 1 - 1e-5
y_pred_clip = np.clip(y_pred, clip_p_min, clip_p_max)
print('Logloss:', log_loss_metric(y_true, y_pred))
print('Logloss all zeros:', log_loss_metric(y_true.loc[all_zeros, :], y_pred.loc[all_zeros, :]))
print('Logloss non zeros:', log_loss_metric(y_true.loc[non_zeros, :], y_pred.loc[non_zeros, :]))
print('Logloss clip:', log_loss_metric(y_true, y_pred_clip))
print('Logloss all zeros clip:', log_loss_metric(y_true.loc[all_zeros, :], y_pred_clip.loc[all_zeros, :]))
print('Logloss non zeros clip:', log_loss_metric(y_true.loc[non_zeros, :], y_pred_clip.loc[non_zeros, :]))
losses = []
for i in range(y_true.shape[1]):
y_true_cl = y_true.iloc[:,i]
y_pred_cl = y_pred.iloc[:,i]
losses.append({
"index": i,
"class": y_true_cl.name,
'true_0': len(y_true_cl[y_true_cl == 0]),
'true_1': len(y_true_cl[y_true_cl == 1]),
"loss": log_loss(y_true_cl.values, y_pred_cl.values, labels=[0, 1]),
'pred_hist_0': y_pred_cl[y_pred_cl <= 0.5].round(1).value_counts().sort_index().reset_index().values,
'pred_hist_1': y_pred_cl[y_pred_cl > 0.5].round(1).value_counts().sort_index().reset_index().values,
})
loss_df = pd.DataFrame(losses).set_index(['index', 'class']).sort_values('loss', ascending=False)
display(loss_df)
###Output
Logloss: 0.013384161126035564
Logloss all zeros: 0.0026687569508067285
Logloss non zeros: 0.02042709727321972
Logloss clip: 0.013384161126035564
Logloss all zeros clip: 0.0026687569508067285
Logloss non zeros clip: 0.02042709727321972
|
danceClassifierModelMulticat.ipynb | ###Markdown
Multi-category Dance ClassifierIn this notebook I explain the development of a deep learning model that identifies the kind of dance shown in an image. As a result, the new model can identify one or more dance styles in the same picture.I use around 450 images. It is possible to get good results with few images by applying different techniques such as Data Augmentation and Transfer Learning. For Data Augmentation I applied a technique called presizing from fastai, which applies these steps:- Resize images to relatively "large" dimensions, that is, dimensions significantly larger than the target training dimensions.- Compose all of the common augmentation operations (rotation, zoom, resize, etc.) into one, and perform the combined operation on the GPU only once at the end of processing, rather than performing the operations individually and interpolating multiple times.For Transfer Learning I used the models "resnet18" and "resnet50"; both models were pre-trained on ImageNet (ImageNet contains over 1.3 million images of various sizes around 500 pixels across, in 1,000 categories, and training the model took a few days). First of all I load the necessary libraries, in this case fast.ai, which relies on pytorch.
###Code
import fastbook
fastbook.setup_book()
from fastbook import *
from fastai.vision.widgets import *
import secrets
torch.cuda.set_device('cuda:1')
###Output
_____no_output_____
###Markdown
Gathering DataI get the images using the Microsoft Azure Bing Search API.
###Code
key = os.environ.get('AZURE_SEARCH_KEY', secrets.AZURE_KEY)
dance_types = 'hiphop','lindyhop','salsa'
path = Path('dance')
if not path.exists():
path.mkdir()
for o in dance_types:
dest = (path/o)
dest.mkdir(exist_ok=True)
results = search_images_bing(key, f'{o} dance', 128, 300)
download_images(dest, urls=results.attrgot('contentUrl'))
fns = get_image_files(path)
fns
###Output
Download of http://www.deasislanddance.com/wp-content/uploads/2011/02/Hip-Hop-Pic.jpg has failed after 5 retries
Fix the download manually:
$ mkdir -p dance/hiphop
$ cd dance/hiphop
$ wget -c http://www.deasislanddance.com/wp-content/uploads/2011/02/Hip-Hop-Pic.jpg
$ tar xf Hip-Hop-Pic.jpg
And re-run your code once the download is successful
###Markdown
I got around 450 images. By default I try to get 150 images maximum for each category every time I query the Bing API through the "search_images_bing" function. According to the messages, some of them failed to download.I use the "get_image_files" function to check the files we just downloaded. Displaying the variable "fns" shows the total number of files and the path for each of them.The dataset will be made up of the images as the independent variable "x" and the categories they belong to as the dependent variable "y".In fastai we have a function to verify that the images we downloaded are ok.
###Code
failed = verify_images(fns)
failed
###Output
_____no_output_____
###Markdown
Some files are empty or corrupt. To remove them, you can use unlink on each of them.
###Code
failed.map(Path.unlink);
###Output
_____no_output_____
###Markdown
The category of each image is defined by the name of the parent directory. I define the "parent_label_multi" function to get the category.
###Code
def parent_label_multi(o):
return [Path(o).parent.name]
###Output
_____no_output_____
###Markdown
DataloadersIt's time to load our data. To do it fastai has a flexible system called the data block API. We need to specify:- What kinds of data we are working with- How to get the list of items- How to label these items- How to create the validation setIn our case we use "ImageBlock" and "MultiCategoryBlock", since our independent variables are images and the dependent ones are one or more categories.We define how to get the images through "get_image_files" function.We will get the label through "parent_label_multi" function.We split the data 80% for training, 20% for validation
###Code
dancers = DataBlock(
blocks=(ImageBlock, MultiCategoryBlock),
get_items=get_image_files,
get_y=parent_label_multi,
splitter=RandomSplitter(valid_pct=0.2, seed=42),
item_tfms=Resize(128))
dls = dancers.dataloaders(path)
###Output
_____no_output_____
###Markdown
Let's check what our independent variable x looks like
###Code
dls.valid.show_batch(max_n=4, nrows=1)
###Output
_____no_output_____
###Markdown
How the dependent variable y looks
###Code
dsets = dancers.datasets(path)
dsets.train[0]
###Output
_____no_output_____
###Markdown
The dependent variable is represented in this way: TensorMultiCategory([1., 0., 0.]). Previously we would have had a single integer representing the category, based on its location in our vocab; here we instead have a vector of zeros with a 1 at the position of each category that is present. This is known as one-hot encoding.
###Code
dsets.train[0][0].to_thumb(256,256)
dls.train.vocab
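# Hedged illustration: decoding the one-hot target back to its labels by hand.
# Positions holding a 1. are the active categories in the vocab shown above.
one_hot = dsets.train[0][1]
[dls.train.vocab[i] for i, v in enumerate(one_hot) if v == 1.]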
###Output
_____no_output_____
###Markdown
Data AugmentationThe "RandomResizedCrop" function selects part of the image randomly and crops to just that part. In this way our model can learn to focus on, and recognize, different features in our images. It also reflects how images work in the real world: different photos of the same thing may be framed in slightly different ways.So training the neural network with examples of images where the objects are in slightly different places and slightly different sizes helps it to understand the basic concept of what an object is, and how it can be represented in an image.Look at the example:
###Code
dancers = dancers.new(item_tfms=RandomResizedCrop(128, min_scale=0.3))
dls = dancers.dataloaders(path)
dls.train.show_batch(max_n=4, nrows=1, unique=True)
###Output
_____no_output_____
###Markdown
The "aug_transforms" function apply the common data augmentation techniques for images like: rotation, flipping, perspective warping, brightness changes and contrast changes.Look at the example:
###Code
dancers = dancers.new(item_tfms=Resize(128), batch_tfms=aug_transforms())
dls = dancers.dataloaders(path)
dls.train.show_batch(max_n=4, nrows=1, unique=True)
###Output
_____no_output_____
###Markdown
I apply these two functions to extend the dataset.
###Code
dancers = dancers.new(
item_tfms=RandomResizedCrop(224, min_scale=0.5),
batch_tfms=aug_transforms())
dls = dancers.dataloaders(path)
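# Hedged side note: the canonical fastai "presizing" recipe described in the introduction would
# look roughly like this - a generous per-item resize, then one combined GPU augmentation per
# batch. The 460/224 sizes are common fastai defaults, not values tuned for this dataset.
dancers_presized = dancers.new(
    item_tfms=Resize(460),
    batch_tfms=aug_transforms(size=224, min_scale=0.75))
dls_presized = dancers_presized.dataloaders(path)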
###Output
_____no_output_____
###Markdown
Training the modelNow we'll create our Learner. The Learner object contains four main things: the model, a DataLoaders object, an Optimizer, and the loss function to use. Transfer LearningWe can take advantage of previously trained models to identify images. To do that we remove the last layer and add a new one with the corresponding number of outputs for our model. We freeze the values of all layers except the last one, which is the only one that needs training. We train for some epochs, then we train all layers but with different learning rates, to preserve the knowledge that the layers at the bottom already have. We define these parameters with the "base_lr" and "freeze_epochs" params. This concept is called "Discriminative Learning Rates".In this case we are going to use resnet18 at the beginning; this model will help us to identify other parameters, like the right threshold value to decide which categories are in an image. Later we can use a deeper model like resnet50 to improve our results.We already have our DataLoaders. We are going to use the default optimizer, which is Adam.We need Binary Cross Entropy as a loss function; however, we don't actually need to tell fastai to use this loss function since it will be automatically chosen for us. fastai knows that the DataLoaders has multiple category labels, so it will use nn.BCEWithLogitsLoss by default.
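To make the loss choice concrete, here is a hedged sketch of what nn.BCEWithLogitsLoss computes for a one-hot multi-label target (the tensors are made up for illustration; fastai wires this up for us automatically):
###Code
# Hedged sketch: a per-class sigmoid followed by averaged binary cross-entropy.
import torch
import torch.nn as nn

logits = torch.tensor([[2.0, -1.0, 0.5]])   # made-up raw outputs for the 3 dance styles
target = torch.tensor([[1.0, 0.0, 0.0]])    # one-hot multi-label target
nn.BCEWithLogitsLoss()(logits, target)
###Output
_____no_output_____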
###Code
learn = cnn_learner(dls, resnet18, metrics=partial(accuracy_multi, thresh=0.45))
learn.fine_tune(4, base_lr=3e-3, freeze_epochs=4)
###Output
_____no_output_____
###Markdown
Let's check which are the images with the top loss
###Code
interp = ClassificationInterpretation.from_learner(learn)
interp.plot_top_losses(5, nrows=1)
###Output
_____no_output_____
###Markdown
Considering that some images are cartoons, flyers or objects like shoes or letters, as well as the variety of people's postures and the size of our dataset, the model is doing a good job. Improving our model 80% plus accuracy is not bad! Let's check how we can improve our model.We are going to find the best value for the threshold we use to decide whether a category is included in an image. We get the predictions once and test different threshold values to see which one gives us the best results.
###Code
preds,targs = learn.get_preds()
xs = torch.linspace(0.05,0.95,29)
accs = [accuracy_multi(preds, targs, thresh=i, sigmoid=False) for i in xs]
plt.plot(xs,accs);
learn = cnn_learner(dls, resnet18, metrics=partial(accuracy_multi, thresh=0.8))
learn.fine_tune(4, base_lr=3e-3, freeze_epochs=4)
###Output
_____no_output_____
###Markdown
We improved our model by tuning the threshold! Deeper modelsAnother resource we have to improve our results is the use of a deeper model (the code below uses resnet101; the number in the name refers to the number of layers). In general, a bigger model has the ability to better capture the real underlying relationships in your data, and also to capture and memorize the specific details of your individual images.However, using a deeper model is going to require more GPU RAM, and deeper models take quite a bit longer to train.The choice of model is a tradeoff between computing resources, the time available to develop the model and the desired results.
###Code
learn = cnn_learner(dls, resnet101, metrics=partial(accuracy_multi, thresh=0.8))
learn.fine_tune(4, base_lr=3e-3, freeze_epochs=4)
###Output
_____no_output_____
###Markdown
fastai can show us a graph of the training and validation loss:
###Code
learn.recorder.plot_loss()
###Output
_____no_output_____
###Markdown
The training loss keeps getting better and better; however, the validation loss improvement slows and sometimes even gets worse, and at this point the model starts to overfit. But this does not mean that it is getting less accurate. Actually, accuracy continues improving. In the end what matters is your accuracy, or more generally your chosen metrics, not the loss. The loss is just the function we've given the computer to help us optimize. Number of epochsOften you will find that you are limited by time, rather than generalization and accuracy, when choosing how many epochs to train for. So your first approach to training should be to simply pick a number of epochs that will train in the amount of time that you are happy to wait for. Then look at the training and validation loss plots, as shown above, and in particular your metrics, and if you see that they are still getting better even in your final epochs, then you know that you have not trained for too long.
###Code
learn.fine_tune(2, base_lr=3e-3, freeze_epochs=4)
###Output
_____no_output_____
###Markdown
Once you are happy with your results save the model.
###Code
learn.export()
###Output
_____no_output_____
###Markdown
let's check how it works with an image that was not in the training or validation dataset
###Code
btn_upload = widgets.FileUpload()
out_pl = widgets.Output()
lbl_pred = widgets.Label()
btn_run = widgets.Button(description='Classify')
def on_click_classify(change):
img = PILImage.create(btn_upload.data[-1])
out_pl.clear_output()
with out_pl: display(img.to_thumb(512,512))
pred,pred_idx,probs = learn.predict(img)
lbl_pred.value = f'Prediction: {pred}'
btn_run.on_click(on_click_classify)
#hide_output
VBox([widgets.Label('Select your dancer!'),
btn_upload, btn_run, out_pl, lbl_pred])
# You can see the app that uses this model here: https://dancereco.herokuapp.com/
# And the deployment process here: https://github.com/vhpvmx/dancereco
###Output
_____no_output_____ |
_webscraping/Web_scraping_etree.ipynb | ###Markdown
Webscraping intro Scraping rules- You should check a site's terms and conditions before you scrape them. It's their data and they likely have some rules to govern it.- Be nice - A computer will send web requests much quicker than a user can. Make sure you space out your requests a bit so that you don't hammer the site's server.- Scrapers break - Sites change their layout all the time. If that happens, be prepared to rewrite your code.- Web pages are inconsistent - There's sometimes some manual clean up that has to happen even after you've gotten your data. Import necessary modules
###Code
import requests
from bs4 import BeautifulSoup
import json
import os
###Output
_____no_output_____
###Markdown
requests- requests executes HTTP requests, like GET- The requests object holds the results of the request. This is page content and other items like HTTP status codes and headers.- requests only gets the page content without any parsing.- Beautiful Soup does the parsing of the HTML and finding content within the HTML.
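As a short, hedged sketch of that division of labour (example.com is a stand-in URL, not part of this notebook):
###Code
# Hedged sketch: requests fetches the page, BeautifulSoup parses it.
import requests
from bs4 import BeautifulSoup

resp = requests.get('https://example.com')      # stand-in URL
print(resp.status_code)                         # HTTP status code
print(resp.headers.get('Content-Type'))         # response headers
soup = BeautifulSoup(resp.content, 'lxml')      # parsing happens here, not in requests
print(soup.title.get_text())
###Output
_____no_output_____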
###Code
url ='https://www.zara.com/uk/en/search?searchTerm='
keywords = input("Search: ")
url += keywords
url
###Output
_____no_output_____
###Markdown
Get result page as function
###Code
def get_soup(url, keywords=''):
response = requests.get(url + keywords)
if not response.status_code == 200:
return None
return BeautifulSoup(response.content, 'lxml')
soup = get_soup(url, keywords)
soup.html;
###Output
_____no_output_____
###Markdown
Headless Selenium
###Code
import selenium
from selenium import webdriver
from selenium.webdriver.chrome.options import Options
from selenium.webdriver.common.keys import Keys
from selenium.webdriver.common.by import By
chrome_options = Options()
chrome_options.add_argument("--headless")
chrome_options.add_argument("--incognito")
chrome_options.binary_location = '/Applications/Google Chrome Canary.app/Contents/MacOS/Google Chrome Canary'
driver = webdriver.Chrome(executable_path=os.path.abspath('../_driver_headless/chromedriver'), chrome_options=chrome_options)
driver.get(url)
driver.current_url
###Output
_____no_output_____
###Markdown
XML etree
###Code
from lxml import etree
tree = etree.HTML(driver.page_source)
result = etree.tostring(tree, pretty_print=True, method="html")
result;
[div for div in tree.xpath("//img")]
# !!! Index starts @ 1 not 0
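# e.g. //div[1] selects the first div and //div[2] the second (XPath positions are 1-based)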
[etree.tostring(node) for node in tree.xpath("/html/body/div[2]//a//img")]
[etree.tostring(node)[:100] for node in tree.xpath("//div[2]")]
['class:{}, id:{}'.format(node.xpath("@class"), node.xpath("@id")) for node in tree.xpath("//div")]
['class:{}, id:{}'.format(node.xpath("@class"), node.xpath("@id")) for node in tree.xpath("//section")]
[etree.tostring(node) for node in tree.xpath("//*[@id='products']/*")]
[etree.tostring(node) for node in tree.xpath("//ul[@class='product-list _productList']/*")]
[etree.tostring(node)[:100] for node in tree.xpath("//*[contains(., 'dress')]")]
['class:{}, id:{}'.format(node.xpath("@class"), node.xpath("@name")) for node in tree.xpath("//*[contains(., 'product')]")]
[etree.tostring(div)[:100] for div in tree.xpath("//section[@class='_results']")] # product-list _productList
[etree.tostring(div)[:100] for div in tree.xpath("//section._results")] # product-list _productList
[etree.tostring(div)[:100] for div in tree.xpath("//ul[@class='product-list _productList']")]
[etree.tostring(div)[:100] for div in tree.xpath("//*[@class='product _product']")]
tree.xpath("//*[@id='product-6504767']")
# tree.xpath("/html/body/div[2]/section")
product_list = [etree.tostring(li) for li in tree.xpath("/html/body/div[2]/section/div/section/ul/li")]
product_list
# tree.xpath('//*[@id="product-6504767"')
tree.xpath('//div[@class="product-info _product-info"]') # //a[@class="item _item"]/@href') # class="_ariaResults wai-aria-messages"
[li for li in product_list]
###Output
_____no_output_____
###Markdown
Logging in to a web server, e.g. wikipedia Store your credentials in an encrypted/protected file (line1 = name, line2 = pwd)
###Code
with open('../credentials.txt') as f:
contents = f.read().split('\n')
username = contents[0]
password = contents[1]
###Output
_____no_output_____
###Markdown
Construct an object that contains the requested login dataInspect the login form in your browser to get the value of the login token
###Code
def get_login_token(response):
soup = BeautifulSoup(response.text, 'lxml')
token = soup.find('input', {'name': "wpLoginToken"}).get('value')
return token
payload = {
'wpName': username,
'wpPassword': password,
'wploginattempt': 'Log in',
'wpEditToken': '+\\',
'title': 'Special:UserLogin',
'authAction': 'login',
'force': '',
'wpForceHttps': '1',
'wpFromhttp': '1',
'wpLoginToken': 'get_login_token(response)'
}
###Output
_____no_output_____
###Markdown
Setup a session, login, and get data
###Code
with requests.session() as s:
response = s.get('https://en.wikipedia.org/w/index.php?title=Special:UserLogin&returnto=Main+Page')
# Set login token
payload['wpLoginToken'] = get_login_token(response)
# Send the login request
response_post = s.post('https://en.wikipedia.org/w/index.php?title=Special:UserLogin&action=submitlogin&type=login',
data=payload)
# Get another page and check if we’re still logged in
response = s.get('https://en.wikipedia.org/wiki/Special:Watchlist')
data = BeautifulSoup(response.content, 'lxml')
print(data.find('div', class_='mw-changeslist').get_text())
###Output
_____no_output_____ |
4_text_processing/week_1/assignment_1.clean.ipynb | ###Markdown
---_You are currently looking at **version 1.0** of this notebook. To download notebooks and datafiles, as well as get help on Jupyter notebooks in the Coursera platform, visit the [Jupyter Notebook FAQ](https://www.coursera.org/learn/python-text-mining/resources/d9pwm) course resource._--- Assignment 1In this assignment, you'll be working with messy medical data and using regex to extract relevant infromation from the data. Each line of the `dates.txt` file corresponds to a medical note. Each note has a date that needs to be extracted, but each date is encoded in one of many formats.The goal of this assignment is to correctly identify all of the different date variants encoded in this dataset and to properly normalize and sort the dates. Here is a list of some of the variants you might encounter in this dataset:* 04/20/2009; 04/20/09; 4/20/09; 4/3/09* Mar-20-2009; Mar 20, 2009; March 20, 2009; Mar. 20, 2009; Mar 20 2009;* 20 Mar 2009; 20 March 2009; 20 Mar. 2009; 20 March, 2009* Mar 20th, 2009; Mar 21st, 2009; Mar 22nd, 2009* Feb 2009; Sep 2009; Oct 2010* 6/2008; 12/2009* 2009; 2010Once you have extracted these date patterns from the text, the next step is to sort them in ascending chronological order accoring to the following rules:* Assume all dates in xx/xx/xx format are mm/dd/yy* Assume all dates where year is encoded in only two digits are years from the 1900's (e.g. 1/5/89 is January 5th, 1989)* If the day is missing (e.g. 9/2009), assume it is the first day of the month (e.g. September 1, 2009).* If the month is missing (e.g. 2010), assume it is the first of January of that year (e.g. January 1, 2010).With these rules in mind, find the correct date in each note and return a pandas Series in chronological order of the original Series' indices.For example if the original series was this: 0 1999 1 2010 2 1978 3 2015 4 1985Your function should return this: 0 2 1 4 2 0 3 1 4 3Your score will be calculated using [Kendall's tau](https://en.wikipedia.org/wiki/Kendall_rank_correlation_coefficient), a correlation measure for ordinal data.*This function should return a Series of length 500 and dtype int.*
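As a purely illustrative, hedged sketch (not the graded solution), str.extract can pull apart a couple of the variants listed above; the patterns below are assumptions and do not cover every format:
###Code
# Hedged illustration only: regex extraction for two of the listed date styles.
import pandas as pd

samples = pd.Series(['04/20/2009', 'Mar 20, 2009', '6/2008', '2010'])
samples.str.extract(r'(\d{1,2})[/-](\d{1,2})[/-](\d{2,4})')          # numeric mm/dd/yy(yy) style
samples.str.extract(r'((?:Jan|Feb|Mar|Apr|May|Jun|Jul|Aug|Sep|Oct|Nov|Dec)[a-z]*)\.?\s*(\d{1,2})?,?\s*(\d{4})')  # month-name style
###Output
_____no_output_____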
###Code
import pandas as pd
doc = []
with open('dates.txt') as file:
for line in file:
doc.append(line)
df = pd.Series(doc)
df.head(10)
def date_sorter():
# Your code here
return # Your answer here
###Output
_____no_output_____ |
inpainting.ipynb | ###Markdown
Pretrain a UNet on Unlabeled imagesHere you are going to pretrain a UNet (and find the best performing UNet architectures —feature map counts and sizes), using the best performing optimizer (and find the best optimization algorithm and hyperparameters including base learning-rate and ...) and learning rate scheduler.To change the training parameters and configurations, you can config the `inpainting.yaml` file, and run the training script (`pretrain.py`). SetupRun the following codes only once after setting up your colab instance and before experimenting:
###Code
# connect your google drive
from google.colab import drive
drive.mount('/content/drive')
# setup codebase (clone repository and install dependencies)
!git clone https://[email protected]/vahidzee/osail-maryam.git
!pip install -r osail-maryam/requirements.txt
!nvidia-smi
# change working directory to repository folder
cd osail-maryam
# update codebase (look for changes)
!git pull
# copy inpainting data
# TODO: change the source directory if necessary
!cp -rv /content/drive/MyDrive/lab/OSAIL_Data/Unlabeled .
# setup logger (tensorboard)
%load_ext tensorboard
%tensorboard --logdir lightning_logs
###Output
_____no_output_____
###Markdown
ExperimentsSetup your desired configurations in `inpainting.yaml` then run the following cell:
###Code
# train model
# TODO: change the configurations from the inpainting.yaml file
# ---------------
# look at https://pytorch-lightning.readthedocs.io/en/stable/common/trainer.html#trainer-flags
# for trainer specific configurations
# ---------------
# look at https://pytorch-lightning.readthedocs.io/en/stable/common/lightning_cli.html
# for general configurations and syntax (you should vary trainer params, model params, optimizer/scheduler)
# to reach the best results
!python pretrain.py fit --config inpainting.yaml --data.root=./Unlabeled --data.num_workers=1
###Output
python3: can't open file 'main.py': [Errno 2] No such file or directory
###Markdown
Code for **"Inpainting"** figures $6$, $8$ and 7 (top) from the main paper.
###Code
!git clone https://github.com/satoshi-kosugi/DeepImagePrior
! python /content/DeepImagePrior/inpainting.py '/content/DeepImagePrior/dataset/inpainting/2_0_orig.png' '/content/DeepImagePrior/dataset/inpainting/2_0_mask.png' '/content/DeepImagePrior/dataset/inpainting/2_0_res.png'
import skimage.io as skio
org_img = skio.imread('/content/DeepImagePrior/dataset/inpainting/thm_dir_N00_000.png')
org_mask = skio.imread('/content/DeepImagePrior/dataset/inpainting/mask_thm_dir_N00.png')
new_img_3840 = org_img[0:3840,0:3840]
new_mask_3840 = org_mask[0:3840,0:3840]
new_img_1920 = org_img[0:1920,0:1920]
new_mask_1920 = org_mask[0:1920,0:1920]
new_img_960 = org_img[0:960,0:960]
new_mask_960 = org_mask[0:960,0:960]
skio.imsave('/content/DeepImagePrior/dataset/inpainting/im3840.png',new_img_3840)
skio.imsave('/content/DeepImagePrior/dataset/inpainting/msk3840.png',new_mask_3840)
skio.imsave('/content/DeepImagePrior/dataset/inpainting/im1920.png',new_img_1920)
skio.imsave('/content/DeepImagePrior/dataset/inpainting/msk1920.png',new_mask_1920)
skio.imsave('/content/DeepImagePrior/dataset/inpainting/img960.png',new_img_960)
skio.imsave('/content/DeepImagePrior/dataset/inpainting/msk960.png',new_mask_960)
! python /content/DeepImagePrior/inpainting.py '/content/DeepImagePrior/dataset/inpainting/im3840.png' '/content/DeepImagePrior/dataset/inpainting/msk3840.png' '/content/DeepImagePrior/dataset/inpainting/res_3840.png'
! python /content/DeepImagePrior/inpainting.py '/content/DeepImagePrior/dataset/inpainting/kate.png' '/content/DeepImagePrior/dataset/inpainting/kate_mask.png' '/content/DeepImagePrior/dataset/inpainting/kate_res.png'
! python /content/DeepImagePrior/inpainting.py '/content/DeepImagePrior/dataset/inpainting/img960.png' '/content/DeepImagePrior/dataset/inpainting/msk960.png' '/content/DeepImagePrior/dataset/inpainting/res_960.png'
skio.imread('/content/DeepImagePrior/dataset/inpainting/im960.png')
! python /content/DeepImagePrior/inpainting.py '/content/DeepImagePrior/dataset/inpainting/vase.png' '/content/DeepImagePrior/dataset/inpainting/vase_mask.png' '/content/DeepImagePrior/dataset/inpainting/res_vase.png'
! python /content/DeepImagePrior/inpainting.py '/content/DeepImagePrior/dataset/inpainting/14_13.png' '/content/DeepImagePrior/dataset/inpainting/delete_this.png' '/content/DeepImagePrior/dataset/inpainting/res_14_13.png'
skio.imread('/content/DeepImagePrior/dataset/inpainting/0_4.png').shape
import skimage.io as skio
a = skio.imread('/content/DeepImagePrior/dataset/inpainting/thm_dir_N00_000.png')
a
###Output
_____no_output_____
###Markdown
Import libs
###Code
! pip install torch torchvision
from __future__ import print_function
import matplotlib.pyplot as plt
%matplotlib inline
#from keras import models
import models  # the models package ships with the DeepImagePrior repo; a bare relative import fails in a notebook
import os
# os.environ['CUDA_VISIBLE_DEVICES'] = '1'
import numpy as np
from models.resnet import ResNet
from models.unet import UNet
from models.skip import skip
import torch
import torch.optim
from utils.inpainting_utils import *
torch.backends.cudnn.enabled = True
torch.backends.cudnn.benchmark =True
dtype = torch.cuda.FloatTensor
PLOT = True
imsize = -1
dim_div_by = 64
###Output
_____no_output_____
###Markdown
Choose figure
###Code
## Fig 6
# img_path = 'data/inpainting/vase.png'
# mask_path = 'data/inpainting/vase_mask.png'
## Fig 8
# img_path = 'data/inpainting/library.png'
# mask_path = 'data/inpainting/library_mask.png'
## Fig 7 (top)
img_path = 'data/inpainting/kate.png'
mask_path = 'data/inpainting/kate_mask.png'
# Another text inpainting example
# img_path = 'data/inpainting/peppers.png'
# mask_path = 'data/inpainting/peppers_mask.png'
NET_TYPE = 'skip_depth6' # one of skip_depth4|skip_depth2|UNET|ResNet
###Output
_____no_output_____
###Markdown
Load mask
###Code
img_pil, img_np = get_image(img_path, imsize)
img_mask_pil, img_mask_np = get_image(mask_path, imsize)
###Output
_____no_output_____
###Markdown
Center crop
###Code
img_mask_pil = crop_image(img_mask_pil, dim_div_by)
img_pil = crop_image(img_pil, dim_div_by)
img_np = pil_to_np(img_pil)
img_mask_np = pil_to_np(img_mask_pil)
###Output
_____no_output_____
###Markdown
Visualize
###Code
img_mask_var = np_to_torch(img_mask_np).type(dtype)
plot_image_grid([img_np, img_mask_np, img_mask_np*img_np], 3,11);
###Output
_____no_output_____
###Markdown
Setup
###Code
pad = 'reflection' # 'zero'
OPT_OVER = 'net'
OPTIMIZER = 'adam'
if 'vase.png' in img_path:
INPUT = 'meshgrid'
input_depth = 2
LR = 0.01
num_iter = 5001
param_noise = False
show_every = 50
figsize = 5
reg_noise_std = 0.03
net = skip(input_depth, img_np.shape[0],
num_channels_down = [128] * 5,
num_channels_up = [128] * 5,
num_channels_skip = [0] * 5,
upsample_mode='nearest', filter_skip_size=1, filter_size_up=3, filter_size_down=3,
need_sigmoid=True, need_bias=True, pad=pad, act_fun='LeakyReLU').type(dtype)
elif ('kate.png' in img_path) or ('peppers.png' in img_path):
# Same params and net as in super-resolution and denoising
INPUT = 'noise'
input_depth = 32
LR = 0.01
num_iter = 6001
param_noise = False
show_every = 50
figsize = 5
reg_noise_std = 0.03
net = skip(input_depth, img_np.shape[0],
num_channels_down = [128] * 5,
num_channels_up = [128] * 5,
num_channels_skip = [128] * 5,
filter_size_up = 3, filter_size_down = 3,
upsample_mode='nearest', filter_skip_size=1,
need_sigmoid=True, need_bias=True, pad=pad, act_fun='LeakyReLU').type(dtype)
elif 'library.png' in img_path:
INPUT = 'noise'
input_depth = 1
num_iter = 3001
show_every = 50
figsize = 8
reg_noise_std = 0.00
param_noise = True
if 'skip' in NET_TYPE:
depth = int(NET_TYPE[-1])
net = skip(input_depth, img_np.shape[0],
num_channels_down = [16, 32, 64, 128, 128, 128][:depth],
num_channels_up = [16, 32, 64, 128, 128, 128][:depth],
num_channels_skip = [0, 0, 0, 0, 0, 0][:depth],
filter_size_up = 3,filter_size_down = 5, filter_skip_size=1,
upsample_mode='nearest', # downsample_mode='avg',
need1x1_up=False,
need_sigmoid=True, need_bias=True, pad=pad, act_fun='LeakyReLU').type(dtype)
LR = 0.01
elif NET_TYPE == 'UNET':
net = UNet(num_input_channels=input_depth, num_output_channels=3,
feature_scale=8, more_layers=1,
concat_x=False, upsample_mode='deconv',
pad='zero', norm_layer=torch.nn.InstanceNorm2d, need_sigmoid=True, need_bias=True)
LR = 0.001
param_noise = False
elif NET_TYPE == 'ResNet':
net = ResNet(input_depth, img_np.shape[0], 8, 32, need_sigmoid=True, act_fun='LeakyReLU')
LR = 0.001
param_noise = False
else:
assert False
else:
assert False
net = net.type(dtype)
net_input = get_noise(input_depth, INPUT, img_np.shape[1:]).type(dtype)
# Compute number of parameters
s = sum(np.prod(list(p.size())) for p in net.parameters())
print ('Number of params: %d' % s)
# Loss
mse = torch.nn.MSELoss().type(dtype)
img_var = np_to_torch(img_np).type(dtype)
mask_var = np_to_torch(img_mask_np).type(dtype)
###Output
_____no_output_____
###Markdown
Main loop
###Code
i = 0
def closure():
global i
if param_noise:
for n in [x for x in net.parameters() if len(x.size()) == 4]:
n = n + n.detach().clone().normal_() * n.std() / 50
net_input = net_input_saved
if reg_noise_std > 0:
net_input = net_input_saved + (noise.normal_() * reg_noise_std)
out = net(net_input)
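# masked reconstruction loss: only pixels where the mask is 1 (the known regions) contribute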
total_loss = mse(out * mask_var, img_var * mask_var)
total_loss.backward()
print ('Iteration %05d Loss %f' % (i, total_loss.item()), '\r', end='')
if PLOT and i % show_every == 0:
out_np = torch_to_np(out)
plot_image_grid([np.clip(out_np, 0, 1)], factor=figsize, nrow=1)
i += 1
return total_loss
net_input_saved = net_input.detach().clone()
noise = net_input.detach().clone()
p = get_params(OPT_OVER, net, net_input)
optimize(OPTIMIZER, p, closure, LR, num_iter)
out_np = torch_to_np(net(net_input))
plot_image_grid([out_np], factor=5);
###Output
_____no_output_____
###Markdown
Inpainting with the deep decoder
###Code
from __future__ import print_function
import matplotlib.pyplot as plt
#%matplotlib notebook
import os
import warnings
warnings.filterwarnings('ignore')
from include import *
from PIL import Image
import PIL
import numpy as np
import torch
import torch.optim
from torch.autograd import Variable
GPU = True
if GPU == True:
torch.backends.cudnn.enabled = True
torch.backends.cudnn.benchmark = True
dtype = torch.cuda.FloatTensor
os.environ['CUDA_VISIBLE_DEVICES'] = '0'
print("num GPUs",torch.cuda.device_count())
else:
dtype = torch.FloatTensor
###Output
num GPUs 1
###Markdown
Load image
###Code
path = './test_data/'
img_name = "poster"
img_path = path + img_name + ".png"
img_pil = Image.open(img_path)
img_np = pil_to_np(img_pil)
img_clean_var = np_to_var(img_np).type(dtype)
output_depth = img_np.shape[0]
img_mask_pil = Image.open('./test_data/mask.png')
mask_np = pil_to_np(img_mask_pil)
mask_np = np.array([mask_np[0,:,:] / np.max(mask_np) ] * output_depth)
mask_var = np_to_var(mask_np).type(dtype)
###Output
_____no_output_____
###Markdown
Generate inpainted image
###Code
img_noisy_var = img_clean_var * mask_var
img_noisy_np = var_to_np(img_noisy_var)
###Output
_____no_output_____
###Markdown
Recover image
###Code
num_channels = [256]*5
net = decodernw(output_depth,num_channels_up=num_channels,upsample_first = True).type(dtype)
print("number of parameters: ", num_param(net))
rnd = 500
numit = 40000
rn = 0.005
mse_n, mse_t, ni, net = fit( num_channels=num_channels,
reg_noise_std=rn,
reg_noise_decayevery = rnd,
num_iter=numit,
LR=0.0025,
img_noisy_var=img_noisy_var,
net=net,
img_clean_var=img_clean_var,
mask_var = mask_var,
find_best=True,
)
def myimgshow(plt,img):
plt.imshow(np.clip(img.transpose(1, 2, 0),0,1))
fig = plt.figure(figsize = (15,15))
ax1 = fig.add_subplot(131)
myimgshow(ax1,img_np)
ax1.set_title('Original image')
ax1.axis('off')
ax2 = fig.add_subplot(132)
myimgshow(ax2,img_noisy_np)
ax2.set_title( "Noisy observation, PSNR: %.2f" % psnr(img_np,img_noisy_np) )
ax2.axis('off')
out_img_np = net( ni.type(dtype) ).data.cpu().numpy()[0]
ax3 = fig.add_subplot(133)
myimgshow(ax3,out_img_np)
ax3.set_title( "Deep-Decoder recovered image, PSNR: %.2f" % psnr(img_np,out_img_np) )
ax3.axis('off')
###Output
_____no_output_____
###Markdown
inpaintingrepairing images with defects/holes- channel attention mechanism [[paper]](https://arxiv.org/abs/1807.02758)- residual in residual architecture [[paper]](https://arxiv.org/abs/1505.04597)- subpixel convolution / pixelshuffle [[paper]](https://arxiv.org/abs/1609.05158)- running on [tensorflow/google colab](https://colab.research.google.com/) AND on [plaidml](https://www.intel.ai/plaidml/)- using the famous [Set14](https://www.google.com/search?q=set14) dataset ONLY (with heavy augmentation) - no validation neededjupyter notebook by [Benjamin Wegener](https://scholar.google.de/citations?user=yEn9St8AAAAJ) from [github](https://www.github.com/BenjaminWegener/keras-examples) options
###Code
run_on_google_colab = True #True runs on Colab/TensorFlow; set to False to use PlaidML as backend
epochs = 25 #Number of epochs to train
channels = 3 #channels of low resolution image
batch_size = 14 #what batch-size should we use (decrease if you encounter video memory errors)
steps_per_epoch = 1000 #how many iterations per epoch to train
height_lr = 128 #height of low resolution image (must be divisible by 4)
width_lr = height_lr #width of low resolution image (must be divisible by 4)
gen_lr = 0.001 #learning rate of generator
logging_steps = 75 #how often to update the training log
holesize_max = 0.4 # max size of holes in % of image
holesize_min = holesize_max / 2 # min size of holes in % of image
holes = 3 #max number of holes in circle
###Output
_____no_output_____
###Markdown
imports
###Code
import os
if run_on_google_colab:
%cd /content
!git clone https://github.com/BenjaminWegener/keras-examples #download Dataset
%cd keras-examples
else:
os.environ['KERAS_BACKEND'] = 'plaidml.keras.backend'
import numpy as np
from keras.models import Model, Input, load_model
from keras.layers import *
from keras.optimizers import Adam
from keras import backend as K
from keras.callbacks import LambdaCallback
from IPython.display import clear_output
import matplotlib.pyplot as plt
from keras.preprocessing.image import ImageDataGenerator
import random
from skimage.draw import circle
%matplotlib inline
###Output
/content
fatal: destination path 'keras-examples' already exists and is not an empty directory.
/content/keras-examples
###Markdown
function for image visualization
###Code
def show(tensors):
plt.rcParams['figure.figsize'] = [20, 10]
fig = plt.figure()
for i in range(len(tensors)):
try:
tensors[i] = np.squeeze(tensors[i], axis = 0)
except:
pass
tensors[i] = (tensors[i] + 1.) * 127.5
fig.add_subplot(1,len(tensors), i + 1)
plt.imshow(tensors[i].astype(np.uint8), interpolation = 'nearest')
plt.setp(plt.gcf().get_axes(), xticks=[], yticks=[]);
plt.show()
###Output
_____no_output_____
###Markdown
dataset function
###Code
# return batch of augmented train and target images with quantity n_samples
def get_batch(n_samples, height, width, channels):
# define a ImageGenerator instance from keras with augmentations
image_gen = ImageDataGenerator(rotation_range=360,
width_shift_range=0.5,
height_shift_range=0.5,
zoom_range=[0.2, 0.7],
horizontal_flip=True,
vertical_flip=True,
fill_mode='reflect',
data_format='channels_last',
brightness_range=[0.5, 1.5])
#seed for random augmentations
random_seed = int(random.random() * 100000)
#generate augmented images
y_train = image_gen.flow_from_directory('.', target_size = (height, width), batch_size = n_samples, class_mode = None, seed = random_seed)
y_train = y_train.__getitem__(0).copy() #fix for 'array doesn't own its data'
x_train = y_train.copy()
for i in range(n_samples):
        # source images are cut (circle-shaped holes in light green)
for j in range(holes):
if (random.random() * 2 > 1) or (j == 1): #50% chance to draw more than 1 hole
circle_size = int((random.random() * height * holesize_max - height * holesize_min) + height * holesize_min) // 2 #/ 2 because it's radius
circle_center_x = int(random.random() * height - circle_size - 1)
circle_center_y = int(random.random() * width - circle_size - 1)
rr, cc = circle(circle_center_x, circle_center_y, circle_size)
x_train[i][rr,cc,0] = 0
x_train[i][rr,cc,1] = 255
x_train[i][rr,cc,2] = 0
#normalize images to [-1, 1]
x_train = x_train/127.5 - 1.
y_train = y_train/127.5 - 1.
return x_train, y_train
###Output
_____no_output_____
###Markdown
base functions
###Code
def fast_normalization(x): # use clipping instead of batchnormalization for network stabilization
return Lambda(lambda x: K.clip(x, -1, 1), output_shape=lambda s: (s[0], s[1], s[2], s[3]))(x)
def residual_block(inputs): #combined pixel shuffle and squeeze
x = inputs
x = Conv2D(32, kernel_size = 9, activation = 'tanh', padding = 'same', strides = 2)(x)
x = SeparableConv2D(128, kernel_size = 9, activation = 'tanh', padding = 'same')(x) # rapidly increase speed at slightly worse results
x = fast_normalization(x)
x = Lambda(lambda x: K.reshape(x, (K.shape(x)[0], K.shape(x)[1], K.shape(x)[2], 32, 2, 2)), output_shape = lambda s: (s[0], s[1], s[2], s[3] // 4, 2, 2))(x)
x = Permute((3, 2, 4, 1, 5))(x)
x = Lambda(lambda x: K.reshape(x, (K.shape(x)[0], K.shape(x)[1], K.shape(x)[2] * K.shape(x)[3], K.shape(x)[4] * K.shape(x)[5])), output_shape = lambda s: (s[0], s[1], s[2] * s[3], s[4] * s[5]))(x)
x = Permute((3, 2, 1))(x)
#---
x1 = x
x = GlobalAveragePooling2D()(x)
x = Dense(8, activation = 'relu')(x) #reduction like in RCAN
x = Dense(32, activation = 'hard_sigmoid')(x)
x = Reshape((1, 1, 32))(x)
x = Multiply()([x1, x])
x = Add()([inputs, x])
return x
###Output
_____no_output_____
###Markdown
build generator model
###Code
x = inputs = Input(shape = (height_lr, width_lr, channels))
x = Conv2D(32, kernel_size = 3, padding = 'same', activation = 'tanh')(x)
x = residual_block(x)
x = residual_block(x)
x = residual_block(x)
x = residual_block(x)
x = Conv2D(3, kernel_size = 3, padding = 'same', activation = 'tanh')(x)
x = fast_normalization(x)
generator = Model(inputs = inputs, outputs = x)
generator.summary()
###Output
WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/keras/backend/tensorflow_backend.py:66: The name tf.get_default_graph is deprecated. Please use tf.compat.v1.get_default_graph instead.
WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/keras/backend/tensorflow_backend.py:541: The name tf.placeholder is deprecated. Please use tf.compat.v1.placeholder instead.
WARNING:tensorflow:From /usr/local/lib/python3.6/dist-packages/keras/backend/tensorflow_backend.py:4432: The name tf.random_uniform is deprecated. Please use tf.random.uniform instead.
Model: "model_1"
__________________________________________________________________________________________________
Layer (type) Output Shape Param # Connected to
==================================================================================================
input_1 (InputLayer) (None, 128, 128, 3) 0
__________________________________________________________________________________________________
conv2d_1 (Conv2D) (None, 128, 128, 32) 896 input_1[0][0]
__________________________________________________________________________________________________
conv2d_2 (Conv2D) (None, 64, 64, 32) 82976 conv2d_1[0][0]
__________________________________________________________________________________________________
separable_conv2d_1 (SeparableCo (None, 64, 64, 128) 6816 conv2d_2[0][0]
__________________________________________________________________________________________________
lambda_1 (Lambda) (None, 64, 64, 128) 0 separable_conv2d_1[0][0]
__________________________________________________________________________________________________
lambda_2 (Lambda) (None, 64, 64, 32, 2 0 lambda_1[0][0]
__________________________________________________________________________________________________
permute_1 (Permute) (None, 32, 64, 2, 64 0 lambda_2[0][0]
__________________________________________________________________________________________________
lambda_3 (Lambda) (None, 32, 128, 128) 0 permute_1[0][0]
__________________________________________________________________________________________________
permute_2 (Permute) (None, 128, 128, 32) 0 lambda_3[0][0]
__________________________________________________________________________________________________
global_average_pooling2d_1 (Glo (None, 32) 0 permute_2[0][0]
__________________________________________________________________________________________________
dense_1 (Dense) (None, 8) 264 global_average_pooling2d_1[0][0]
__________________________________________________________________________________________________
dense_2 (Dense) (None, 32) 288 dense_1[0][0]
__________________________________________________________________________________________________
reshape_1 (Reshape) (None, 1, 1, 32) 0 dense_2[0][0]
__________________________________________________________________________________________________
multiply_1 (Multiply) (None, 128, 128, 32) 0 permute_2[0][0]
reshape_1[0][0]
__________________________________________________________________________________________________
add_1 (Add) (None, 128, 128, 32) 0 conv2d_1[0][0]
multiply_1[0][0]
__________________________________________________________________________________________________
conv2d_3 (Conv2D) (None, 64, 64, 32) 82976 add_1[0][0]
__________________________________________________________________________________________________
separable_conv2d_2 (SeparableCo (None, 64, 64, 128) 6816 conv2d_3[0][0]
__________________________________________________________________________________________________
lambda_4 (Lambda) (None, 64, 64, 128) 0 separable_conv2d_2[0][0]
__________________________________________________________________________________________________
lambda_5 (Lambda) (None, 64, 64, 32, 2 0 lambda_4[0][0]
__________________________________________________________________________________________________
permute_3 (Permute) (None, 32, 64, 2, 64 0 lambda_5[0][0]
__________________________________________________________________________________________________
lambda_6 (Lambda) (None, 32, 128, 128) 0 permute_3[0][0]
__________________________________________________________________________________________________
permute_4 (Permute) (None, 128, 128, 32) 0 lambda_6[0][0]
__________________________________________________________________________________________________
global_average_pooling2d_2 (Glo (None, 32) 0 permute_4[0][0]
__________________________________________________________________________________________________
dense_3 (Dense) (None, 8) 264 global_average_pooling2d_2[0][0]
__________________________________________________________________________________________________
dense_4 (Dense) (None, 32) 288 dense_3[0][0]
__________________________________________________________________________________________________
reshape_2 (Reshape) (None, 1, 1, 32) 0 dense_4[0][0]
__________________________________________________________________________________________________
multiply_2 (Multiply) (None, 128, 128, 32) 0 permute_4[0][0]
reshape_2[0][0]
__________________________________________________________________________________________________
add_2 (Add) (None, 128, 128, 32) 0 add_1[0][0]
multiply_2[0][0]
__________________________________________________________________________________________________
conv2d_4 (Conv2D) (None, 64, 64, 32) 82976 add_2[0][0]
__________________________________________________________________________________________________
separable_conv2d_3 (SeparableCo (None, 64, 64, 128) 6816 conv2d_4[0][0]
__________________________________________________________________________________________________
lambda_7 (Lambda) (None, 64, 64, 128) 0 separable_conv2d_3[0][0]
__________________________________________________________________________________________________
lambda_8 (Lambda) (None, 64, 64, 32, 2 0 lambda_7[0][0]
__________________________________________________________________________________________________
permute_5 (Permute) (None, 32, 64, 2, 64 0 lambda_8[0][0]
__________________________________________________________________________________________________
lambda_9 (Lambda) (None, 32, 128, 128) 0 permute_5[0][0]
__________________________________________________________________________________________________
permute_6 (Permute) (None, 128, 128, 32) 0 lambda_9[0][0]
__________________________________________________________________________________________________
global_average_pooling2d_3 (Glo (None, 32) 0 permute_6[0][0]
__________________________________________________________________________________________________
dense_5 (Dense) (None, 8) 264 global_average_pooling2d_3[0][0]
__________________________________________________________________________________________________
dense_6 (Dense) (None, 32) 288 dense_5[0][0]
__________________________________________________________________________________________________
reshape_3 (Reshape) (None, 1, 1, 32) 0 dense_6[0][0]
__________________________________________________________________________________________________
multiply_3 (Multiply) (None, 128, 128, 32) 0 permute_6[0][0]
reshape_3[0][0]
__________________________________________________________________________________________________
add_3 (Add) (None, 128, 128, 32) 0 add_2[0][0]
multiply_3[0][0]
__________________________________________________________________________________________________
conv2d_5 (Conv2D) (None, 64, 64, 32) 82976 add_3[0][0]
__________________________________________________________________________________________________
separable_conv2d_4 (SeparableCo (None, 64, 64, 128) 6816 conv2d_5[0][0]
__________________________________________________________________________________________________
lambda_10 (Lambda) (None, 64, 64, 128) 0 separable_conv2d_4[0][0]
__________________________________________________________________________________________________
lambda_11 (Lambda) (None, 64, 64, 32, 2 0 lambda_10[0][0]
__________________________________________________________________________________________________
permute_7 (Permute) (None, 32, 64, 2, 64 0 lambda_11[0][0]
__________________________________________________________________________________________________
lambda_12 (Lambda) (None, 32, 128, 128) 0 permute_7[0][0]
__________________________________________________________________________________________________
permute_8 (Permute) (None, 128, 128, 32) 0 lambda_12[0][0]
__________________________________________________________________________________________________
global_average_pooling2d_4 (Glo (None, 32) 0 permute_8[0][0]
__________________________________________________________________________________________________
dense_7 (Dense) (None, 8) 264 global_average_pooling2d_4[0][0]
__________________________________________________________________________________________________
dense_8 (Dense) (None, 32) 288 dense_7[0][0]
__________________________________________________________________________________________________
reshape_4 (Reshape) (None, 1, 1, 32) 0 dense_8[0][0]
__________________________________________________________________________________________________
multiply_4 (Multiply) (None, 128, 128, 32) 0 permute_8[0][0]
reshape_4[0][0]
__________________________________________________________________________________________________
add_4 (Add) (None, 128, 128, 32) 0 add_3[0][0]
multiply_4[0][0]
__________________________________________________________________________________________________
conv2d_6 (Conv2D) (None, 128, 128, 3) 867 add_4[0][0]
__________________________________________________________________________________________________
lambda_13 (Lambda) (None, 128, 128, 3) 0 conv2d_6[0][0]
==================================================================================================
Total params: 363,139
Trainable params: 363,139
Non-trainable params: 0
__________________________________________________________________________________________________
###Markdown
train
###Code
#load checkpoint & compile the generator network
print('trying to load last saved weights...', end = ' ')
try:
generator.load_weights('inpainting_weights')
print('success.')
except:
print('failed')
pass
generator.compile(optimizer = Adam(gen_lr), loss = 'mae')
# Train generator
def logging(epoch, logs):
if epoch % logging_steps == 0:
testX, testY = get_batch(1, height_lr, width_lr, channels)
clear_output()
print('epoch', real_epoch + 1, '/', epochs, '--> step', epoch, '/', steps_per_epoch, ': loss', logs['loss'])
testZ = generator.predict(testX)
show([testX, testZ, testY])
print('test_loss:', generator.evaluate(testX, testY, verbose = 0))
logging_callback = LambdaCallback(
on_epoch_end=lambda epoch, logs: logging(epoch, logs)
)
for real_epoch in range(epochs):
X, Y = get_batch(batch_size, height_lr, width_lr, channels)
generator.fit(X, Y, batch_size, epochs = steps_per_epoch, verbose = 0, callbacks = [logging_callback], shuffle = True)
try:
print('trying to save weights...', end = ' ')
generator.save_weights('inpainting_weights')
except:
print('failed.')
###Output
epoch 25 / 25 --> step 1000 / 1000 : loss 0.014329605735838413
###Markdown
validate on complete picture
###Code
from PIL import Image
testY = np.array(Image.open('./Set14/lenna.png'))
testX = testY.copy()
height = testX.shape[0]
width = testX.shape[1]
# source image is cut (circle-shaped holes in light green)
for j in range(holes):
if (random.random() * 2 > 1) or (j == 1): #50% chance to draw more than 1 hole
circle_size = int((random.random() * height * holesize_max - height * holesize_min) + height * holesize_min) // 2 #/ 2 because it's radius
circle_center_x = int(random.random() * height - circle_size - 1)
circle_center_y = int(random.random() * width - circle_size - 1)
rr, cc = circle(circle_center_x, circle_center_y, circle_size)
testX[rr,cc,0] = 0
testX[rr,cc,1] = 255
testX[rr,cc,2] = 0
testX = testX /127.5 - 1
testY = testY /127.5 - 1
x = inputs = Input(shape = testX.shape)
x = Conv2D(32, kernel_size = 3, padding = 'same', activation = 'tanh')(x)
x = residual_block(x)
x = residual_block(x)
x = residual_block(x)
x = residual_block(x)
x = Conv2D(3, kernel_size = 3, padding = 'same', activation = 'tanh')(x)
x = fast_normalization(x)
generator = Model(inputs = inputs, outputs = x)
print('trying to load last saved weights...', end = ' ')
try:
generator.load_weights('inpainting_weights')
print('success.')
except:
print('failed')
pass
predicted = generator.predict(np.expand_dims((testX), 0))
show([testX, predicted, testY])
predicted = np.squeeze(predicted)
predicted = Image.fromarray(((predicted + 1) * 127.5).astype(np.uint8))
print('trying to save image as \'inpainting_result.png\'...', end = ' ')
try:
predicted.save('inpainting_result.png', "PNG")
print('success.')
except:
print('failed.')
pass
###Output
trying to load last saved weights... success.
###Markdown
Code for **"Inpainting"** figures $6$, $8$ and 7 (top) from the main paper.
###Code
"""
*Uncomment if running on colab*
Set Runtime -> Change runtime type -> Under Hardware Accelerator select GPU in Google Colab
"""
# !git clone https://github.com/DmitryUlyanov/deep-image-prior
# !mv deep-image-prior/* ./
###Output
_____no_output_____
###Markdown
Import libs
###Code
from __future__ import print_function
import matplotlib.pyplot as plt
%matplotlib inline
import os
# os.environ['CUDA_VISIBLE_DEVICES'] = '1'
import numpy as np
from models.resnet import ResNet
from models.unet import UNet
from models.skip import skip
import torch
import torch.optim
from utils.inpainting_utils import *
torch.backends.cudnn.enabled = True
torch.backends.cudnn.benchmark =True
dtype = torch.cuda.FloatTensor
PLOT = True
imsize = -1
dim_div_by = 64
###Output
_____no_output_____
###Markdown
Choose figure
###Code
## Fig 6
# img_path = 'data/inpainting/vase.png'
# mask_path = 'data/inpainting/vase_mask.png'
## Fig 8
# img_path = 'data/inpainting/library.png'
# mask_path = 'data/inpainting/library_mask.png'
## Fig 7 (top)
img_path = 'data/inpainting/kate.png'
mask_path = 'data/inpainting/kate_mask.png'
# Another text inpainting example
# img_path = 'data/inpainting/peppers.png'
# mask_path = 'data/inpainting/peppers_mask.png'
NET_TYPE = 'skip_depth6' # one of skip_depth4|skip_depth2|UNET|ResNet
###Output
_____no_output_____
###Markdown
Load mask
###Code
img_pil, img_np = get_image(img_path, imsize)
img_mask_pil, img_mask_np = get_image(mask_path, imsize)
###Output
_____no_output_____
###Markdown
Center crop
###Code
img_mask_pil = crop_image(img_mask_pil, dim_div_by)
img_pil = crop_image(img_pil, dim_div_by)
img_np = pil_to_np(img_pil)
img_mask_np = pil_to_np(img_mask_pil)
###Output
_____no_output_____
###Markdown
Visualize
###Code
img_mask_var = np_to_torch(img_mask_np).type(dtype)
plot_image_grid([img_np, img_mask_np, img_mask_np*img_np], 3,11);
###Output
_____no_output_____
###Markdown
Setup
###Code
pad = 'reflection' # 'zero'
OPT_OVER = 'net'
OPTIMIZER = 'adam'
if 'vase.png' in img_path:
INPUT = 'meshgrid'
input_depth = 2
LR = 0.01
num_iter = 5001
param_noise = False
show_every = 50
figsize = 5
reg_noise_std = 0.03
net = skip(input_depth, img_np.shape[0],
num_channels_down = [128] * 5,
num_channels_up = [128] * 5,
num_channels_skip = [0] * 5,
upsample_mode='nearest', filter_skip_size=1, filter_size_up=3, filter_size_down=3,
need_sigmoid=True, need_bias=True, pad=pad, act_fun='LeakyReLU').type(dtype)
elif ('kate.png' in img_path) or ('peppers.png' in img_path):
# Same params and net as in super-resolution and denoising
INPUT = 'noise'
input_depth = 32
LR = 0.01
num_iter = 6001
param_noise = False
show_every = 50
figsize = 5
reg_noise_std = 0.03
net = skip(input_depth, img_np.shape[0],
num_channels_down = [128] * 5,
num_channels_up = [128] * 5,
num_channels_skip = [128] * 5,
filter_size_up = 3, filter_size_down = 3,
upsample_mode='nearest', filter_skip_size=1,
need_sigmoid=True, need_bias=True, pad=pad, act_fun='LeakyReLU').type(dtype)
elif 'library.png' in img_path:
INPUT = 'noise'
input_depth = 1
num_iter = 3001
show_every = 50
figsize = 8
reg_noise_std = 0.00
param_noise = True
if 'skip' in NET_TYPE:
depth = int(NET_TYPE[-1])
net = skip(input_depth, img_np.shape[0],
num_channels_down = [16, 32, 64, 128, 128, 128][:depth],
num_channels_up = [16, 32, 64, 128, 128, 128][:depth],
num_channels_skip = [0, 0, 0, 0, 0, 0][:depth],
filter_size_up = 3,filter_size_down = 5, filter_skip_size=1,
upsample_mode='nearest', # downsample_mode='avg',
need1x1_up=False,
need_sigmoid=True, need_bias=True, pad=pad, act_fun='LeakyReLU').type(dtype)
LR = 0.01
elif NET_TYPE == 'UNET':
net = UNet(num_input_channels=input_depth, num_output_channels=3,
feature_scale=8, more_layers=1,
concat_x=False, upsample_mode='deconv',
pad='zero', norm_layer=torch.nn.InstanceNorm2d, need_sigmoid=True, need_bias=True)
LR = 0.001
param_noise = False
elif NET_TYPE == 'ResNet':
net = ResNet(input_depth, img_np.shape[0], 8, 32, need_sigmoid=True, act_fun='LeakyReLU')
LR = 0.001
param_noise = False
else:
assert False
else:
assert False
net = net.type(dtype)
net_input = get_noise(input_depth, INPUT, img_np.shape[1:]).type(dtype)
# Compute number of parameters
s = sum(np.prod(list(p.size())) for p in net.parameters())
print ('Number of params: %d' % s)
# Loss
mse = torch.nn.MSELoss().type(dtype)
img_var = np_to_torch(img_np).type(dtype)
mask_var = np_to_torch(img_mask_np).type(dtype)
###Output
_____no_output_____
###Markdown
Main loop
###Code
i = 0
def closure():
global i
if param_noise:
for n in [x for x in net.parameters() if len(x.size()) == 4]:
n = n + n.detach().clone().normal_() * n.std() / 50
net_input = net_input_saved
if reg_noise_std > 0:
net_input = net_input_saved + (noise.normal_() * reg_noise_std)
out = net(net_input)
total_loss = mse(out * mask_var, img_var * mask_var)
total_loss.backward()
print ('Iteration %05d Loss %f' % (i, total_loss.item()), '\r', end='')
if PLOT and i % show_every == 0:
out_np = torch_to_np(out)
plot_image_grid([np.clip(out_np, 0, 1)], factor=figsize, nrow=1)
i += 1
return total_loss
net_input_saved = net_input.detach().clone()
noise = net_input.detach().clone()
p = get_params(OPT_OVER, net, net_input)
optimize(OPTIMIZER, p, closure, LR, num_iter)
out_np = torch_to_np(net(net_input))
plot_image_grid([out_np], factor=5);
###Output
_____no_output_____
###Markdown
Code for **"Inpainting"** figures $6$, $8$ and 7 (top) from the main paper. Import libs
###Code
from __future__ import print_function
import matplotlib.pyplot as plt
%matplotlib inline
import os
# os.environ['CUDA_VISIBLE_DEVICES'] = '1'
import numpy as np
from models.resnet import ResNet
from models.unet import UNet
from models.skip import skip
import torch
import torch.optim
from utils.inpainting_utils import *
torch.backends.cudnn.enabled = True
torch.backends.cudnn.benchmark =True
dtype = torch.cuda.FloatTensor
PLOT = True
imsize = -1
dim_div_by = 64
###Output
_____no_output_____
###Markdown
Choose figure
###Code
## Fig 6
# img_path = 'data/inpainting/vase.png'
# mask_path = 'data/inpainting/vase_mask.png'
## Fig 8
# img_path = 'data/inpainting/library.png'
# mask_path = 'data/inpainting/library_mask.png'
## Fig 7 (top)
img_path = 'data/inpainting/kate.png'
mask_path = 'data/inpainting/kate_mask.png'
# Another text inpainting example
# img_path = 'data/inpainting/peppers.png'
# mask_path = 'data/inpainting/peppers_mask.png'
NET_TYPE = 'skip_depth6' # one of skip_depth4|skip_depth2|UNET|ResNet
###Output
_____no_output_____
###Markdown
Load mask
###Code
img_pil, img_np = get_image(img_path, imsize)
img_mask_pil, img_mask_np = get_image(mask_path, imsize)
###Output
_____no_output_____
###Markdown
Center crop
###Code
img_mask_pil = crop_image(img_mask_pil, dim_div_by)
img_pil = crop_image(img_pil, dim_div_by)
img_np = pil_to_np(img_pil)
img_mask_np = pil_to_np(img_mask_pil)
###Output
_____no_output_____
###Markdown
Visualize
###Code
img_mask_var = np_to_torch(img_mask_np).type(dtype)
plot_image_grid([img_np, img_mask_np, img_mask_np*img_np], 3,11);
###Output
_____no_output_____
###Markdown
Setup
###Code
pad = 'reflection' # 'zero'
OPT_OVER = 'net'
OPTIMIZER = 'adam'
if 'vase.png' in img_path:
INPUT = 'meshgrid'
input_depth = 2
LR = 0.01
num_iter = 5001
param_noise = False
show_every = 50
figsize = 5
reg_noise_std = 0.03
net = skip(input_depth, img_np.shape[0],
num_channels_down = [128] * 5,
num_channels_up = [128] * 5,
num_channels_skip = [0] * 5,
upsample_mode='nearest', filter_skip_size=1, filter_size_up=3, filter_size_down=3,
need_sigmoid=True, need_bias=True, pad=pad, act_fun='LeakyReLU').type(dtype)
elif ('kate.png' in img_path) or ('peppers.png' in img_path):
# Same params and net as in super-resolution and denoising
INPUT = 'noise'
input_depth = 32
LR = 0.01
num_iter = 6001
param_noise = False
show_every = 50
figsize = 5
reg_noise_std = 0.03
net = skip(input_depth, img_np.shape[0],
num_channels_down = [128] * 5,
num_channels_up = [128] * 5,
num_channels_skip = [128] * 5,
filter_size_up = 3, filter_size_down = 3,
upsample_mode='nearest', filter_skip_size=1,
need_sigmoid=True, need_bias=True, pad=pad, act_fun='LeakyReLU').type(dtype)
elif 'library.png' in img_path:
INPUT = 'noise'
input_depth = 1
num_iter = 3001
show_every = 50
figsize = 8
reg_noise_std = 0.00
param_noise = True
if 'skip' in NET_TYPE:
depth = int(NET_TYPE[-1])
net = skip(input_depth, img_np.shape[0],
num_channels_down = [16, 32, 64, 128, 128, 128][:depth],
num_channels_up = [16, 32, 64, 128, 128, 128][:depth],
num_channels_skip = [0, 0, 0, 0, 0, 0][:depth],
filter_size_up = 3,filter_size_down = 5, filter_skip_size=1,
upsample_mode='nearest', # downsample_mode='avg',
need1x1_up=False,
need_sigmoid=True, need_bias=True, pad=pad, act_fun='LeakyReLU').type(dtype)
LR = 0.01
elif NET_TYPE == 'UNET':
net = UNet(num_input_channels=input_depth, num_output_channels=3,
feature_scale=8, more_layers=1,
concat_x=False, upsample_mode='deconv',
pad='zero', norm_layer=torch.nn.InstanceNorm2d, need_sigmoid=True, need_bias=True)
LR = 0.001
param_noise = False
elif NET_TYPE == 'ResNet':
net = ResNet(input_depth, img_np.shape[0], 8, 32, need_sigmoid=True, act_fun='LeakyReLU')
LR = 0.001
param_noise = False
else:
assert False
else:
assert False
net = net.type(dtype)
net_input = get_noise(input_depth, INPUT, img_np.shape[1:]).type(dtype)
# Compute number of parameters
s = sum(np.prod(list(p.size())) for p in net.parameters())
print ('Number of params: %d' % s)
# Loss
mse = torch.nn.MSELoss().type(dtype)
img_var = np_to_torch(img_np).type(dtype)
mask_var = np_to_torch(img_mask_np).type(dtype)
###Output
_____no_output_____
###Markdown
Main loop
###Code
i = 0
def closure():
global i
if param_noise:
for n in [x for x in net.parameters() if len(x.size()) == 4]:
n = n + n.detach().clone().normal_() * n.std() / 50
net_input = net_input_saved
if reg_noise_std > 0:
net_input = net_input_saved + (noise.normal_() * reg_noise_std)
out = net(net_input)
total_loss = mse(out * mask_var, img_var * mask_var)
total_loss.backward()
print ('Iteration %05d Loss %f' % (i, total_loss.item()), '\r', end='')
if PLOT and i % show_every == 0:
out_np = torch_to_np(out)
plot_image_grid([np.clip(out_np, 0, 1)], factor=figsize, nrow=1)
i += 1
return total_loss
net_input_saved = net_input.detach().clone()
noise = net_input.detach().clone()
p = get_params(OPT_OVER, net, net_input)
optimize(OPTIMIZER, p, closure, LR, num_iter)
out_np = torch_to_np(net(net_input))
plot_image_grid([out_np], factor=5);
###Output
_____no_output_____
###Markdown
Inpainting with the deep decoder
###Code
from __future__ import print_function
import matplotlib.pyplot as plt
#%matplotlib notebook
import os
import warnings
warnings.filterwarnings('ignore')
from include import *
from PIL import Image
import PIL
import numpy as np
import torch
import torch.optim
from torch.autograd import Variable
GPU = True
if GPU == True:
torch.backends.cudnn.enabled = True
torch.backends.cudnn.benchmark = True
dtype = torch.cuda.FloatTensor
os.environ['CUDA_VISIBLE_DEVICES'] = '0'
print("num GPUs",torch.cuda.device_count())
else:
dtype = torch.FloatTensor
###Output
num GPUs 1
###Markdown
Helpers: Show image
###Code
def myimgshow(img):
img = img.transpose(1, 2, 0)
if (len(img.shape) == 3):
plt.imshow(np.clip(np.squeeze(img),0,1))
else: #plot grayscale
plt.imshow(np.clip(img,0,1))
plt.grid(False)
plt.axis('off');
plt.colorbar()
def axisimgshow(plt,img):
img = img.transpose(1, 2, 0)
if (len(img.shape) == 3):
plt.imshow(np.clip(np.squeeze(img),0,1))
else: #plot grayscale
plt.imshow(np.clip(img,0,1))
###Output
_____no_output_____
###Markdown
Load image
###Code
path = './test_data/'
img_name = "poster"
img_path = path + img_name + ".png"
img_pil = Image.open(img_path)
img_np = pil_to_np(img_pil)
img_clean_var = np_to_var(img_np).type(dtype)
output_depth = img_np.shape[0]
img_mask_pil = Image.open('./test_data/mask.png')
mask_np = pil_to_np(img_mask_pil)
mask_np = np.array([mask_np[0,:,:] / np.max(mask_np) ] * output_depth)
mask_var = np_to_var(mask_np).type(dtype)
###Output
_____no_output_____
###Markdown
Generate inpainted image
###Code
repo_path = '.'
dir_name = 'test_data'
img_name = "poster.png"
img_path = os.path.join(repo_path, dir_name, img_name)
img_pil = Image.open(img_path)
img_np = pil_to_np(img_pil)
img_clean_var = np_to_var(img_np).type(dtype)
output_depth = img_np.shape[0]
img_mask_path = os.path.join(repo_path, dir_name, "mask.png")
img_mask_pil = Image.open(img_mask_path)
mask_np = pil_to_np(img_mask_pil)
mask_np = np.array([mask_np[0,:,:] / np.max(mask_np) ] *output_depth)
mask_var = np_to_var(mask_np).type(dtype)
img_noisy_var = img_clean_var * mask_var
img_noisy_np = var_to_np(img_noisy_var)
myimgshow(img_noisy_np)
###Output
_____no_output_____
###Markdown
Load Masked / Painted Image
###Code
dir_name = 'test_data'
img_name = "subtitle_easy.jpg"
fullfilename = os.path.join(repo_path, dir_name, img_name)
img_pil = Image.open(fullfilename)
img_np = pil_to_np(img_pil)
img_np = np.concatenate((img_np,np.ones((1,512,512))), axis=0)
img_clean_var = np_to_var(img_np).type(dtype)
img_noisy_var = img_clean_var
img_noisy_np = var_to_np(img_noisy_var)
myimgshow(img_noisy_np)
###Output
_____no_output_____
###Markdown
Recover image
###Code
%%time
k = 256 #256
numit = 40000
rnd = 500
rn = 0.005
num_channels = [k]*5
net = decodernw(output_depth,num_channels_up=num_channels,upsample_first = True).type(dtype)
print("number of parameters: ", num_param(net))
mse_n, mse_t, ni, net = fit( num_channels=num_channels,
reg_noise_std=rn,
reg_noise_decayevery = rnd,
num_iter=numit,
LR=0.0025,
img_noisy_var=img_noisy_var,
net=net,
img_clean_var=img_clean_var,
mask_var = mask_var,
find_best=True,
)
###Output
number of parameters: 397312
network input shape: [1, 256, 16, 16]
optimize with adam 0.0025
Wall time: 2h 51s Train loss 0.000022 Actual loss 0.000367 Actual loss orig 0.000367 MSE Loss 0.000367 Noise Energy 0.000000
###Markdown
Show Result
###Code
%%time
show_graph(mse_n, mse_t)
fig = plt.figure(figsize = (15,15))
ax1 = fig.add_subplot(131)
axisimgshow(ax1,img_np)
ax1.set_title('Original image')
ax1.axis('off')
if (psnr(img_np,img_noisy_np) != float("inf")):
ax2 = fig.add_subplot(132)
axisimgshow(ax2,img_noisy_np)
ax2.set_title( "Noisy observation, PSNR: %.2f" % psnr(img_np,img_noisy_np) )
ax2.axis('off')
ax3 = fig.add_subplot(133)
else:
ax3 = fig.add_subplot(132)
out_img_np = net( ni.type(dtype) ).data.cpu().numpy()[0]
axisimgshow(ax3,out_img_np)
ax3.set_title( "Deep-Decoder recovered image, PSNR: %.2f" % psnr(img_np,out_img_np) )
ax3.axis('off')
###Output
Wall time: 378 ms
|
tutorials/TFLearn_Sentiment_Analysis/TFLearn_Sentiment_Analysis.ipynb | ###Markdown
Sentiment analysis with TFLearnIn this notebook, we'll continue Andrew Trask's work by building a network for sentiment analysis on the movie review data. Instead of a network written with Numpy, we'll be using [TFLearn](http://tflearn.org/), a high-level library built on top of TensorFlow. TFLearn makes it simpler to build networks just by defining the layers. It takes care of most of the details for you.We'll start off by importing all the modules we'll need, then load and prepare the data.
###Code
import pandas as pd
import numpy as np
import tensorflow as tf
import tflearn
from tflearn.data_utils import to_categorical
###Output
curses is not supported on this machine (please install/reinstall curses for an optimal experience)
###Markdown
Preparing the dataFollowing along with Andrew, our goal here is to convert our reviews into word vectors. The word vectors will have elements representing words in the total vocabulary. If the second position represents the word 'the', for each review we'll count up the number of times 'the' appears in the text and set the second position to that count. I'll show you examples as we build the input data from the reviews data. Check out Andrew's notebook and video for more about this. Read the dataUse the pandas library to read the reviews and postive/negative labels from comma-separated files. The data we're using has already been preprocessed a bit and we know it uses only lower case characters. If we were working from raw data, where we didn't know it was all lower case, we would want to add a step here to convert it. That's so we treat different variations of the same word, like `The`, `the`, and `THE`, all the same way.
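As a tiny illustration of the word-vector idea described above (a toy three-word vocabulary, not the real 10000-word one built later), the following sketch counts how often each vocabulary word occurs in one short review; words outside the vocabulary are simply ignored:
```
from collections import Counter
toy_vocab = ['the', 'movie', 'great']
counts = Counter('the movie was great , the end'.split(' '))
print([counts[w] for w in toy_vocab])   # [2, 1, 1] -- 'the' twice, the others once
```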
###Code
reviews = pd.read_csv('reviews.txt', header=None)
labels = pd.read_csv('labels.txt', header=None)
###Output
_____no_output_____
###Markdown
Counting word frequencyTo start off we'll need to count how often each word appears in the data. We'll use this count to create a vocabulary we'll use to encode the review data. This resulting count is known as a [bag of words](https://en.wikipedia.org/wiki/Bag-of-words_model). We'll use it to select our vocabulary and build the word vectors. You should have seen how to do this in Andrew's lesson. Try to implement it here using the [Counter class](https://docs.python.org/2/library/collections.htmlcollections.Counter).> **Exercise:** Create the bag of words from the reviews data and assign it to `total_counts`. The reviews are stores in the `reviews` [Pandas DataFrame](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.html). If you want the reviews as a Numpy array, use `reviews.values`. You can iterate through the rows in the DataFrame with `for idx, row in reviews.iterrows():` ([documentation](http://pandas.pydata.org/pandas-docs/stable/generated/pandas.DataFrame.iterrows.html)). When you break up the reviews into words, use `.split(' ')` instead of `.split()` so your results match ours.
###Code
from collections import Counter
#print(reviews[0][0])
total_counts=Counter()
for review in reviews[0]:
for word in review.lower().split(' '):
total_counts[word]+=1
#total_counts = # bag of words here
print("Total words in data set: ", len(total_counts))
###Output
Total words in data set: 74074
###Markdown
Let's keep the first 10000 most frequent words. As Andrew noted, most of the words in the vocabulary are rarely used so they will have little effect on our predictions. Below, we'll sort `vocab` by the count value and keep the 10000 most frequent words.
###Code
vocab = sorted(total_counts, key=total_counts.get, reverse=True)[:10000]
print(vocab[:60])
###Output
['', 'the', '.', 'and', 'a', 'of', 'to', 'is', 'br', 'it', 'in', 'i', 'this', 'that', 's', 'was', 'as', 'for', 'with', 'movie', 'but', 'film', 'you', 'on', 't', 'not', 'he', 'are', 'his', 'have', 'be', 'one', 'all', 'at', 'they', 'by', 'an', 'who', 'so', 'from', 'like', 'there', 'her', 'or', 'just', 'about', 'out', 'if', 'has', 'what', 'some', 'good', 'can', 'more', 'she', 'when', 'very', 'up', 'time', 'no']
###Markdown
What's the last word in our vocabulary? We can use this to judge if 10000 is too few. If the last word is pretty common, we probably need to keep more words.
###Code
print(vocab[-1], ': ', total_counts[vocab[-1]])
###Output
fulfilled : 30
###Markdown
The last word in our vocabulary shows up in 30 reviews out of 25000. I think it's fair to say this is a tiny proportion of reviews. We are probably fine with this number of words.**Note:** When you run, you may see a different word from the one shown above, but it will also have the value `30`. That's because there are many words tied for that number of counts, and the `Counter` class does not guarantee which one will be returned in the case of a tie.Now for each review in the data, we'll make a word vector. First we need to make a mapping of word to index, pretty easy to do with a dictionary comprehension.> **Exercise:** Create a dictionary called `word2idx` that maps each word in the vocabulary to an index. The first word in `vocab` has index `0`, the second word has index `1`, and so on.
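The exercise mentions a dictionary comprehension; the explicit loop in the solution cell below is equivalent to this one-liner:
```
word2idx = {word: i for i, word in enumerate(vocab)}
```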
###Code
## create the word-to-index dictionary here
word2idx = {}
for i,word in enumerate(vocab):
word2idx[word] = i
word2idx
###Output
_____no_output_____
###Markdown
Text to vector functionNow we can write a function that converts some text to a word vector. The function will take a string of words as input and return a vector with the words counted up. Here's the general algorithm to do this:* Initialize the word vector with [np.zeros](https://docs.scipy.org/doc/numpy/reference/generated/numpy.zeros.html), it should be the length of the vocabulary.* Split the input string of text into a list of words with `.split(' ')`. Again, if you call `.split()` instead, you'll get slightly different results than what we show here.* For each word in that list, increment the element in the index associated with that word, which you get from `word2idx`.**Note:** Since not all words are in the `vocab` dictionary, you'll get a key error if you run into one of those words. You can use the `.get` method of the `word2idx` dictionary to specify a default return value instead of raising a key error. For example, `word2idx.get(word, None)` returns `None` if `word` doesn't exist in the dictionary.
###Code
word_vector = np.zeros(len(vocab))
word_vector.size
def text_to_vector(text):
word_vector = np.zeros(len(vocab),dtype=np.int_)
for word in text.split(' '):
idx=word2idx.get(word,None)
if idx is None:
continue
else:
word_vector[idx]+=1
return np.array(word_vector)
###Output
_____no_output_____
###Markdown
If you do this right, the following code should return```text_to_vector('The tea is for a party to celebrate ' 'the movie so she has no time for a cake')[:65] array([0, 1, 0, 0, 2, 0, 1, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 2, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 0, 0, 0, 0, 1, 0, 0, 0, 0, 0, 1, 0, 0, 0, 1, 1, 0, 0, 0, 0, 0])```
###Code
text_to_vector('The tea is for a party to celebrate '
'the movie so she has no time for a cake')[:65]
###Output
_____no_output_____
###Markdown
Now, run through our entire review data set and convert each review to a word vector.
###Code
word_vectors = np.zeros((len(reviews), len(vocab)), dtype=np.int_)
for ii, (_, text) in enumerate(reviews.iterrows()):
word_vectors[ii] = text_to_vector(text[0])
# Printing out the first 5 word vectors
word_vectors[:5, :23]
###Output
_____no_output_____
###Markdown
Train, Validation, Test setsNow that we have the word_vectors, we're ready to split our data into train, validation, and test sets. Remember that we train on the train data, use the validation data to set the hyperparameters, and at the very end measure the network performance on the test data. Here we're using the function `to_categorical` from TFLearn to reshape the target data so that we'll have two output units and can classify with a softmax activation function. We actually won't be creating the validation set here, TFLearn will do that for us later.
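For reference, `to_categorical` simply one-hot encodes the labels. A quick sketch, assuming the TFLearn helper behaves as in its usual releases:
```
from tflearn.data_utils import to_categorical
print(to_categorical([0, 1, 1], 2))
# roughly: [[1., 0.],
#           [0., 1.],
#           [0., 1.]]
```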
###Code
Y = (labels=='positive').astype(np.int_)
records = len(labels)
shuffle = np.arange(records)
np.random.shuffle(shuffle)
test_fraction = 0.9
train_split, test_split = shuffle[:int(records*test_fraction)], shuffle[int(records*test_fraction):]
trainX, trainY = word_vectors[train_split,:], to_categorical(Y.values[train_split], 2)
testX, testY = word_vectors[test_split,:], to_categorical(Y.values[test_split], 2)
trainY
###Output
_____no_output_____
###Markdown
Building the network[TFLearn](http://tflearn.org/) lets you build the network by [defining the layers](http://tflearn.org/layers/core/). Input layerFor the input layer, you just need to tell it how many units you have. For example, ```net = tflearn.input_data([None, 100])```would create a network with 100 input units. The first element in the list, `None` in this case, sets the batch size. Setting it to `None` here leaves it at the default batch size.The number of inputs to your network needs to match the size of your data. For this example, we're using 10000 element long vectors to encode our input data, so we need 10000 input units. Adding layersTo add new hidden layers, you use ```net = tflearn.fully_connected(net, n_units, activation='ReLU')```This adds a fully connected layer where every unit in the previous layer is connected to every unit in this layer. The first argument `net` is the network you created in the `tflearn.input_data` call. It's telling the network to use the output of the previous layer as the input to this layer. You can set the number of units in the layer with `n_units`, and set the activation function with the `activation` keyword. You can keep adding layers to your network by repeated calling `net = tflearn.fully_connected(net, n_units)`. Output layerThe last layer you add is used as the output layer. Therefore, you need to set the number of units to match the target data. In this case we are predicting two classes, positive or negative sentiment. You also need to set the activation function so it's appropriate for your model. Again, we're trying to predict if some input data belongs to one of two classes, so we should use softmax.```net = tflearn.fully_connected(net, 2, activation='softmax')``` TrainingTo set how you train the network, use ```net = tflearn.regression(net, optimizer='sgd', learning_rate=0.1, loss='categorical_crossentropy')```Again, this is passing in the network you've been building. The keywords: * `optimizer` sets the training method, here stochastic gradient descent* `learning_rate` is the learning rate* `loss` determines how the network error is calculated. In this example, with the categorical cross-entropy.Finally you put all this together to create the model with `tflearn.DNN(net)`. So it ends up looking something like ```net = tflearn.input_data([None, 10]) Inputnet = tflearn.fully_connected(net, 5, activation='ReLU') Hiddennet = tflearn.fully_connected(net, 2, activation='softmax') Outputnet = tflearn.regression(net, optimizer='sgd', learning_rate=0.1, loss='categorical_crossentropy')model = tflearn.DNN(net)```> **Exercise:** Below in the `build_model()` function, you'll put together the network using TFLearn. You get to choose how many layers to use, how many hidden units, etc.
###Code
# Network building
def build_model():
# This resets all parameters and variables, leave this here
tf.reset_default_graph()
#### Your code ####
net = tflearn.input_data([None, 10000]) # Input
net = tflearn.fully_connected(net, 200, activation='ReLU') # Hidden
net = tflearn.fully_connected(net, 25, activation='ReLU') # Hidden
net = tflearn.fully_connected(net, 2, activation='softmax') # Output
net = tflearn.regression(net, optimizer='sgd', learning_rate=0.1, loss='categorical_crossentropy')
model = tflearn.DNN(net)
return model
###Output
_____no_output_____
###Markdown
Initializing the modelNext we need to call the `build_model()` function to actually build the model. In my solution I haven't included any arguments to the function, but you can add arguments so you can change parameters in the model if you want.> **Note:** You might get a bunch of warnings here. TFLearn uses a lot of deprecated code in TensorFlow. Hopefully it gets updated to the new TensorFlow version soon.
###Code
model = build_model()
###Output
WARNING:tensorflow:From D:\Anaconda3\envs\dlnd\lib\site-packages\tflearn\objectives.py:66: calling reduce_sum (from tensorflow.python.ops.math_ops) with keep_dims is deprecated and will be removed in a future version.
Instructions for updating:
keep_dims is deprecated, use keepdims instead
###Markdown
Training the networkNow that we've constructed the network, saved as the variable `model`, we can fit it to the data. Here we use the `model.fit` method. You pass in the training features `trainX` and the training targets `trainY`. Below I set `validation_set=0.1` which reserves 10% of the data set as the validation set. You can also set the batch size and number of epochs with the `batch_size` and `n_epoch` keywords, respectively. Below is the code to fit the network to our word vectors.You can rerun `model.fit` to train the network further if you think you can increase the validation accuracy. Remember, all hyperparameter adjustments must be done using the validation set. **Only use the test set after you're completely done training the network.**
###Code
# Training
model.fit(trainX, trainY, validation_set=0.1, show_metric=True, batch_size=128, n_epoch=10)
###Output
Training Step: 3179 | total loss: 1.38630 | time: 11.943s
| SGD | epoch: 020 | loss: 1.38630 - acc: 0.4761 -- iter: 20224/20250
Training Step: 3180 | total loss: 1.38630 | time: 13.017s
| SGD | epoch: 020 | loss: 1.38630 - acc: 0.4770 | val_loss: 1.38630 - val_acc: 0.4929 -- iter: 20250/20250
--
###Markdown
TestingAfter you're satisfied with your hyperparameters, you can run the network on the test set to measure its performance. Remember, *only do this after finalizing the hyperparameters*.
###Code
predictions = (np.array(model.predict(testX))[:,0] >= 0.5).astype(np.int_)
test_accuracy = np.mean(predictions == testY[:,0], axis=0)
print("Test accuracy: ", test_accuracy)
###Output
Test accuracy: 0.4816
###Markdown
Try out your own text!
###Code
# Helper function that uses your model to predict sentiment
def test_sentence(sentence):
positive_prob = model.predict([text_to_vector(sentence.lower())])[0][1]
print('Sentence: {}'.format(sentence))
print('P(positive) = {:.3f} :'.format(positive_prob),
'Positive' if positive_prob > 0.5 else 'Negative')
sentence = "Moonlight is by far the best movie of 2016."
test_sentence(sentence)
sentence = "It's amazing anyone could be talented enough to make something this spectacularly awful"
test_sentence(sentence)
###Output
Sentence: Moonlight is by far the best movie of 2016.
P(positive) = 0.500 : Positive
Sentence: It's amazing anyone could be talented enough to make something this spectacularly awful
P(positive) = 0.500 : Positive
|
cnn_mnist_gluon_simplified.ipynb | ###Markdown
CNN Interface for Gluon simplified in a similar manner to symbolic interfaceTransition from symbolic MXNet to Gluon simplified
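For contrast with the Gluon wrapper defined below, the older symbolic interface this notebook refers to would define a similar CNN roughly like this (a sketch from memory of the classic symbolic style, not code taken from this notebook):
```
import mxnet as mx

data = mx.sym.Variable('data')
conv1 = mx.sym.Convolution(data=data, kernel=(5, 5), num_filter=20)
act1 = mx.sym.Activation(data=conv1, act_type='relu')
pool1 = mx.sym.Pooling(data=act1, pool_type='max', kernel=(2, 2), stride=(2, 2))
flat = mx.sym.Flatten(data=pool1)
fc1 = mx.sym.FullyConnected(data=flat, num_hidden=512)
fc2 = mx.sym.FullyConnected(data=fc1, num_hidden=10)
out = mx.sym.SoftmaxOutput(data=fc2, name='softmax')
```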
###Code
import mxnet as mx
from mxnet import nd, autograd
from mxnet import gluon
import numpy as np
mx.random.seed(1)
class BaseCNNClassifier(mx.gluon.Block):
def __init__(self, ctx):
super(BaseCNNClassifier, self).__init__()
self.ctx = ctx
self.net = None
#@override
def build_model(self, convs, num_fc, num_classes):
'''
Default activation is relu
'''
# convs = [(channel, kernel_sz, pool_siz)triplets *N]
cnn_layers = gluon.nn.HybridSequential(prefix='')
for ch, k_sz, p_sz in convs:
cnn_layers.add(gluon.nn.Conv2D(channels=ch, kernel_size=k_sz, activation='relu'))
cnn_layers.add(gluon.nn.MaxPool2D(pool_size=p_sz, strides=2)) # strides fixed for now
net = gluon.nn.HybridSequential()
with net.name_scope():
net.add(cnn_layers)
# Flatten and apply fully connected layers
net.add(gluon.nn.Flatten())
net.add(gluon.nn.Dense(num_fc, activation="relu"))
net.add(gluon.nn.Dense(num_classes))
# speed up execution with hybridization
net.hybridize()
self.net = net
def forward(self):
pass
def compile_model(self, loss=None, optimizer='sgd', lr=1E-3, init_mg=2.24):
print self.net
self.net.collect_params().initialize(mx.init.Xavier(magnitude=init_mg), ctx=self.ctx)
self.loss = gluon.loss.SoftmaxCrossEntropyLoss() if loss is None else loss
self.optimizer = mx.gluon.Trainer(self.net.collect_params(),
optimizer, {'learning_rate': lr})
def evaluate_accuracy(self, data_iterator):
acc = mx.metric.Accuracy()
for i, (data, label) in enumerate(data_iterator):
data = data.as_in_context(ctx)
label = label.as_in_context(ctx)
output = self.net(data)
predictions = nd.argmax(output, axis=1)
acc.update(preds=predictions, labels=label)
return acc.get()[1]
def fit(self, train_data, test_data, epochs):
smoothing_constant = .01
ctx = self.ctx
for e in range(epochs):
for i, (data, label) in enumerate(train_data):
data = data.as_in_context(ctx)
label = label.as_in_context(ctx)
#print data.shape, label.shape
with autograd.record(train_mode=True):
output = self.net(data)
loss = self.loss(output, label)
loss.backward()
self.optimizer.step(data.shape[0])
##########################
# Keep a moving average of the losses
##########################
curr_loss = nd.mean(loss).asscalar()
moving_loss = (curr_loss if ((i == 0) and (e == 0))
else (1 - smoothing_constant) * moving_loss + (smoothing_constant) * curr_loss)
test_accuracy = self.evaluate_accuracy(test_data)
train_accuracy = self.evaluate_accuracy(train_data)
print("Epoch %s. Loss: %s, Train_acc %s, Test_acc %s" % (e, moving_loss, train_accuracy, test_accuracy))
batch_size = 64
num_inputs = 784
num_outputs = 10
def transform(data, label):
return nd.transpose(data.astype(np.float32), (2,0,1))/255, label.astype(np.float32)
train_data = mx.gluon.data.DataLoader(mx.gluon.data.vision.MNIST(train=True, transform=transform),
batch_size, shuffle=True)
test_data = mx.gluon.data.DataLoader(mx.gluon.data.vision.MNIST(train=False, transform=transform),
batch_size, shuffle=False)
train_data.__dict__
num_fc = 512
num_classes = 10 #num_outputs
convs = [(20,5,2), (50,5,2)]
ctx = mx.cpu() #mx.gpu()
cnn = BaseCNNClassifier(ctx)
cnn.build_model(convs, num_fc, num_classes)
cnn.compile_model(optimizer='adam')
cnn.fit(train_data, test_data, epochs=10)
###Output
_____no_output_____
###Markdown
Let's try CIFAR now
###Code
batch_size = 32
def transformer(data, label):
data = mx.image.imresize(data, 224, 224)
data = mx.nd.transpose(data, (2,0,1))
data = data.astype(np.float32)
return data, label
train_data = gluon.data.DataLoader(
gluon.data.vision.CIFAR10('./data', train=True, transform=transformer),
batch_size=batch_size, shuffle=True, last_batch='discard')
test_data = gluon.data.DataLoader(
gluon.data.vision.CIFAR10('./data', train=False, transform=transformer),
batch_size=batch_size, shuffle=False, last_batch='discard')
num_fc = 512
num_classes = 10 #num_outputs
convs = [(50,3,2), (50,3,2), (100,3,2), (100,3,2)]
ctx = mx.gpu()
cnn = BaseCNNClassifier(ctx)
cnn.build_model(convs, num_fc, num_classes)
cnn.compile_model(optimizer='adam')
cnn.fit(train_data, test_data, epochs=5)
###Output
HybridSequential(
(0): HybridSequential(
(0): Conv2D(50, kernel_size=(3, 3), stride=(1, 1))
(1): MaxPool2D(size=(2, 2), stride=(2, 2), padding=(0, 0), ceil_mode=False)
(2): Conv2D(50, kernel_size=(3, 3), stride=(1, 1))
(3): MaxPool2D(size=(2, 2), stride=(2, 2), padding=(0, 0), ceil_mode=False)
(4): Conv2D(100, kernel_size=(3, 3), stride=(1, 1))
(5): MaxPool2D(size=(2, 2), stride=(2, 2), padding=(0, 0), ceil_mode=False)
(6): Conv2D(100, kernel_size=(3, 3), stride=(1, 1))
(7): MaxPool2D(size=(2, 2), stride=(2, 2), padding=(0, 0), ceil_mode=False)
)
(1): Flatten
(2): Dense(512, Activation(relu))
(3): Dense(10, linear)
)
Epoch 0. Loss: 1.47114814304, Train_acc 0.511003521127, Test_acc 0.474959935897
Epoch 1. Loss: 1.23667258255, Train_acc 0.602792893726, Test_acc 0.546875
Epoch 2. Loss: 1.04258822086, Train_acc 0.72685259283, Test_acc 0.608173076923
Epoch 3. Loss: 0.844378689356, Train_acc 0.828885243278, Test_acc 0.62359775641
Epoch 4. Loss: 0.628420212727, Train_acc 0.881722151088, Test_acc 0.612279647436
|
02-Probability & Distributions/04-Emp_Disc_Dist.ipynb | ###Markdown
Empirical Discrete Distributions The empirical distribution describes a sample of observations of a given variable, in this case, a discrete variable. At a given point, its value is the proportion of sample observations less than or equal to that point.
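Concretely, the empirical distribution function evaluated at a point x is just the fraction of sample values less than or equal to x. A minimal sketch with a made-up sample:
```
import numpy as np

sample = np.array([1, 2, 3, 3, 5, 6, 6, 6])

def ecdf(sample, x):
    # proportion of observations <= x
    return np.mean(sample <= x)

print(ecdf(sample, 3))   # 0.5, since four of the eight values are <= 3
```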
###Code
#!pip install numpy
#!pip install matplotlib
#!pip install seaborn
#!pip install scipy
import numpy as np
import matplotlib.pyplot as plt
import seaborn as sns;
sns.set_style("whitegrid")
###Output
_____no_output_____
###Markdown
Calculating the Empirical Distribution Function of an arbitrary Discrete Distribution:
###Code
from scipy.stats import rv_discrete
###Output
_____no_output_____
###Markdown
Working with an unfair die
###Code
dk = np.arange(1,7)
dk
###Output
_____no_output_____
###Markdown
The probability of `5` is `1/2`. The other values have the same probability `1/10`.
###Code
pk = (1/10, 1/10, 1/10, 1/10, 1/2, 1/10)
pk
# Verifying that the sum of their probabilities is one
print(np.array(pk).sum())
unfair_die = rv_discrete(values=(dk,pk))
# Plotting
plt.plot(dk, unfair_die.pmf(dk),'go', ms=10)
plt.vlines(dk, 0, unfair_die.pmf(dk), colors='g', lw=2)
plt.title('Original Probabilities');
###Output
_____no_output_____
###Markdown
Let's generate 10 values of our unfair die:
###Code
# Generating 10 random values of our unfair die
gen_10_values = unfair_die.rvs(size=10)
gen_10_values
type(gen_10_values)
elem, freq = np.unique(gen_10_values, return_counts=True)
freq = freq/len(gen_10_values)
print(dict(zip(elem, freq)))
###Output
{1: 0.3, 3: 0.1, 4: 0.2, 5: 0.3, 6: 0.1}
###Markdown
Let's graph the results.- The green points represent the actual probabilities.- The orange bars represent the probabilities calculated from the generated values.
###Code
# Plotting
plt.plot(dk, unfair_die.pmf(dk),'go', ms=10)
plt.vlines(dk, 0, unfair_die.pmf(dk), colors='g', lw=2)
plt.bar(elem, freq, color='peachpuff')
plt.title('Expected and Observed Probabilities');
###Output
_____no_output_____
###Markdown
Let's create a function with the previous code:
###Code
def generate_unfair_die(n=10):
gen_values = unfair_die.rvs(size=n)
elem, freq = np.unique(gen_values, return_counts=True)
freq = freq/n
print(dict(zip(elem, freq)))
plt.plot(dk, unfair_die.pmf(dk),'go', ms=10)
plt.vlines(dk, 0, unfair_die.pmf(dk), colors='g', lw=2)
plt.bar(elem, freq, color='peachpuff')
plt.title('Expected and Observed Probabilities');
generate_unfair_die(10)
###Output
{1: 0.1, 2: 0.1, 3: 0.1, 5: 0.5, 6: 0.2}
###Markdown
Notice that the previous two results are not the same because we generate different sets of 10 values for each. Notice also that observed and expected frequencies are not so close.Generating 100 values of our unfair die
###Code
generate_unfair_die(100)
###Output
{1: 0.1, 2: 0.09, 3: 0.13, 4: 0.11, 5: 0.41, 6: 0.16}
###Markdown
Now the real probabilities (green dots) and the observed ones (orange bars) are much closer.Generating 1000 values of our unfair die
###Code
generate_unfair_die(1000)
###Output
{1: 0.095, 2: 0.091, 3: 0.105, 4: 0.09, 5: 0.514, 6: 0.105}
###Markdown
Again the real probabilities (green dots) and the observed ones (orange bars) are much closer.Generating 10000 values of our unfair die
###Code
generate_unfair_die(10000)
###Output
{1: 0.0964, 2: 0.1067, 3: 0.1021, 4: 0.1023, 5: 0.4934, 6: 0.0991}
###Markdown
The more random numbers of our unfair die we generate, the closer the observed and expected probabilities will be. Using a sequence of random values Let's create a discrete pdf using a sequence of random values.
###Code
# Generating 500 uniform values from 1 to 6 and calculating their frequencies
seq1 = np.random.randint(1, 7, size=500)
elem1, freq1 = np.unique(seq1, return_counts=True)
freq1 = freq1/len(seq1)
print(dict(zip(elem1, freq1)))
###Output
{1: 0.158, 2: 0.138, 3: 0.148, 4: 0.206, 5: 0.202, 6: 0.148}
###Markdown
Generating Poisson values
###Code
from scipy.stats import poisson
# Generating 500 Poisson values and calculating their frequencies
seq2 = poisson.rvs(mu=1, size=500)
elem2, freq2 = np.unique(seq2, return_counts=True)
freq2 = freq2/len(seq2)
print(dict(zip(elem2, freq2)))
###Output
{0: 0.376, 1: 0.35, 2: 0.202, 3: 0.054, 4: 0.014, 5: 0.002, 6: 0.002}
###Markdown
Combining the two sequences
###Code
original_values = np.concatenate([seq1, seq2])
# Calculating the frequencies of the combining sample
xk, fk = np.unique(original_values, return_counts=True)
fk = fk/len(original_values)
print(dict(zip(xk, fk)))
###Output
{0: 0.188, 1: 0.254, 2: 0.17, 3: 0.101, 4: 0.11, 5: 0.102, 6: 0.075}
###Markdown
Creating a discrete distribution function
###Code
new_disc_f = rv_discrete(values=(xk, fk))
# Plotting our discrete distribution
plt.plot(xk, new_disc_f.pmf(xk),'go', ms=10)
plt.vlines(xk, 0, new_disc_f.pmf(xk), colors='g', lw=2)
plt.title('Original Probabilities');
###Output
_____no_output_____
###Markdown
Suppose this is our distribution, and we are interested in generating random numbers of that distribution.Generating 10 new values of our new distribution.
###Code
new_10_values = new_disc_f.rvs(size=10)
new_10_values
###Output
_____no_output_____
###Markdown
Creating a function:
###Code
def generate_discrete_dist(disc_f, n=10):
    # Draw n values from the given discrete distribution and compute their relative frequencies
    gen_values = disc_f.rvs(size=n)
    elem, freq = np.unique(gen_values, return_counts=True)
    freq = freq / n
    print(dict(zip(elem, freq)))
    # Expected pmf as green dots/stems, observed frequencies as bars
    plt.plot(xk, disc_f.pmf(xk), 'go', ms=10)
    plt.vlines(xk, 0, disc_f.pmf(xk), colors='g', lw=2)
    plt.bar(elem, freq, color='lightgray')
    plt.title('Expected and Observed Probabilities');
# The default value is n=10
generate_discrete_dist(new_disc_f)
###Output
{0: 0.1, 1: 0.3, 2: 0.2, 3: 0.1, 4: 0.3}
###Markdown
10 values are not enough. The expected (green dots) and observed (grey bars) probabilities are still far apart. Generating 100 values
###Code
generate_discrete_dist(new_disc_f, 100)
###Output
{0: 0.21, 1: 0.24, 2: 0.15, 3: 0.09, 4: 0.14, 5: 0.08, 6: 0.09}
###Markdown
The results are better, but we can improve them. Generating 1000 values
###Code
generate_discrete_dist(new_disc_f, 1000)
###Output
{0: 0.184, 1: 0.261, 2: 0.165, 3: 0.093, 4: 0.109, 5: 0.11, 6: 0.078}
###Markdown
Generating 10000 values
###Code
generate_discrete_dist(new_disc_f, 10000)
###Output
{0: 0.196, 1: 0.2473, 2: 0.1732, 3: 0.0976, 4: 0.1122, 5: 0.0978, 6: 0.0759}
###Markdown
Now, the results are much better. The observed and expected probabilities are close enough.
###Code
generate_discrete_dist(new_disc_f, 100000)
###Output
{0: 0.19077, 1: 0.25551, 2: 0.16871, 3: 0.09931, 4: 0.10914, 5: 0.10191, 6: 0.07465}
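###Markdown
As a follow-up sketch, the fitted `rv_discrete` object also exposes summary statistics and the cumulative distribution function, so we can compare its theoretical mean with the mean of a generated sample:
###Code
# Theoretical moments of the fitted distribution vs. a sampled estimate
print("theoretical mean:", new_disc_f.mean())
print("theoretical std: ", new_disc_f.std())
sample = new_disc_f.rvs(size=10000)
print("sample mean:     ", sample.mean())
# The cdf at each support point is the running sum of the pmf
print("cdf:", new_disc_f.cdf(xk))
###Output
_____no_output_____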
|
python3/notebooks/pandas-initialization/dataframe-initialization.ipynb | ###Markdown
create from lists
###Code
names = ['john','mary','peter','gary','anne']
ages = [33,22,45,23,12]
df = pd.DataFrame({
'names':names,
'ages':ages
})
df
###Output
_____no_output_____
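###Markdown
An equivalent construction (a sketch) builds the same DataFrame from the parallel lists with `zip`, which can be convenient when the column names live in a separate list:
###Code
# Same data, built from zipped rows instead of a dict of columns
rows = list(zip(names, ages))
df_from_rows = pd.DataFrame(rows, columns=['names', 'ages'])
df_from_rows
###Output
_____no_output_____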
###Markdown
create from list of dicts
###Code
data_dicts = [
{'name':"john","gender":'male','age':45},
{'name':"mary", 'gender':"female",'age':19},
{'name':"peter",'gender':'male', 'age':34}
]
# from_records builds the DataFrame directly from a list of record dicts
df = pd.DataFrame.from_records(data_dicts)
df
###Output
_____no_output_____
###Markdown
create from dict, using keys as index (see the sketch after the next cell); create an empty dataframe and append rows
###Code
df = pd.DataFrame()
# must reassign since the append method does not work in place
# NOTE: DataFrame.append was removed in pandas 2.0; see the pd.concat sketch after this cell
df = df.append({'col_a':5,'col_b':10}, ignore_index=True)
df = df.append({'col_a':1,'col_b':100}, ignore_index=True)
df = df.append({'col_a':32,'col_b':999}, ignore_index=True)
df
###Output
_____no_output_____
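###Markdown
The heading above also mentions creating a DataFrame from a dict with the keys as index; that case is not shown in the original cell, so here is a minimal sketch using `pd.DataFrame.from_dict` with `orient='index'` (the example dict is made up for illustration):
###Code
# Keys of the outer dict become the row index
people = {
    'john': {'gender': 'male', 'age': 45},
    'mary': {'gender': 'female', 'age': 19},
}
pd.DataFrame.from_dict(people, orient='index')
###Output
_____no_output_____
###Markdown
Note that `DataFrame.append` was deprecated and then removed in pandas 2.0. A sketch of the usual replacement collects the rows first and builds the frame in one go, with `pd.concat` used only when several DataFrames really need to be stacked:
###Code
# Modern alternative to the append-in-a-loop pattern above
rows = [
    {'col_a': 5, 'col_b': 10},
    {'col_a': 1, 'col_b': 100},
    {'col_a': 32, 'col_b': 999},
]
df = pd.DataFrame(rows)
# pd.concat is the replacement when stacking existing DataFrames
df = pd.concat([df, pd.DataFrame([{'col_a': 7, 'col_b': 77}])], ignore_index=True)
df
###Output
_____no_output_____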
###Markdown
create dataframe with specific types
###Code
data_dicts = [
    {'name':"john","gender":'male','age':45},
    {'name':"mary", 'gender':"female",'age':19},
    {'name':"peter",'gender':'male', 'age':34}
]
# Build the frame, then cast columns to explicit dtypes with astype
# (the dtype chosen here is an illustrative assumption, not prescribed by the original)
df = pd.DataFrame.from_records(data_dicts).astype({'age': 'int32'})
df.dtypes
###Output
_____no_output_____ |
notebooks/lecture/02-pandas/lab_03_reading_and_writing_data_answers.ipynb | ###Markdown
Lab 3 - Reading and writing data
###Code
import pandas as pd
import numpy as np
from display_functions import head
###Output
_____no_output_____
###Markdown
Exercise goals: - practice reading in and writing out several different types of data - get more familiar with extracting data from online sources --- Exercise 1: Collecting Dutch films from Wikipedia In this exercise we will use pandas to collect all Dutch films from 2010 till now. The resource that we are going to use is the following Dutch Wikipedia page: https://nl.wikipedia.org/wiki/Lijst\_van\_Nederlandse\_films\_(2010-2019) Click on the link and have a look around on the page. How many tables do we need to collect? **Answer**: we need to collect 9 tables (2010-now) We are going to use `pd.read_html` for retrieving the data. Fill out the code below to run it right out of the box:
###Code
url = 'https://nl.wikipedia.org/wiki/Lijst_van_Nederlandse_films_(2010-2019)' # ANSWER
dfs = pd.read_html(url)
###Output
_____no_output_____
###Markdown
What did `pd.read_html` return? How many 'tables' did it collect?
###Code
print(type(dfs)) # ANSWER
print(len(dfs)) # ANSWER
###Output
_____no_output_____
###Markdown
Use the function below to try to figure out what tables were actually collected:
###Code
from IPython.display import display
from IPython.display import HTML
def display_tables(dfs, max_rows=3):
"""
    Display the first `max_rows` rows of the DataFrames in dfs.
Parameters
----------
dfs : list with DataFrames
List obtained by downloading tables from Wikipedia.
max_rows : integer
Number of rows to display
"""
for i, df in enumerate(dfs):
print('\nTable {}\n-------'.format(i))
first_rows = df.head(max_rows)
display(HTML(first_rows.to_html()))
print(' ...\n')
###Output
_____no_output_____
###Markdown
Use the parameter `match` of `pd.read_html` to make sure only the correct tables are collected (those with the string `Titel`):
###Code
dfs = pd.read_html(url, match='Titel') # ANSWER
display_tables(dfs)
###Output
_____no_output_____
###Markdown
Looking at the created DataFrames we find that the header row is incorrectly included as a data row; fix that with the `header` parameter:
###Code
dfs = pd.read_html(url, match='Titel', header=0) # ANSWER
display_tables(dfs)
###Output
_____no_output_____
###Markdown
Our Dutch films per year are now in a list of DataFrames. Instead of storing the DataFrames in a list, we want to store them in a `dict` with years as keys and DataFrames as values. Fill out the code below such that the DataFrames end up in the dict `films` with the year as key. First generate a list `years` with the years of our films.
###Code
years = pd.date_range(start='2010', end='2018', freq='AS').year.tolist() # ANSWER
films = dict(zip(years, dfs)) # ANSWER
# or alternatively
films = {date: df for date, df in zip(years, dfs)} # ANSWER
###Output
_____no_output_____
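###Markdown
A quick check (sketch) that the mapping came out as intended: each key should be a year and each value a DataFrame with that year's films.
###Code
# Inspect the keys and the number of rows collected per year
for year, films_df in sorted(films.items()):
    print(year, type(films_df).__name__, len(films_df))
###Output
_____no_output_____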
###Markdown
Exercise 2: Writing the Dutch films to Excel Someone asks you if they can get an Excel file containing the Dutch films from 2010 till now. They preferably want each year of films on a different sheet. Create the requested 'dutch_films.xlsx' file using the `ExcelWriter`.
###Code
#ANSWER
filepath = 'data/dutch_films.xlsx'
with pd.ExcelWriter(filepath) as writer:
    for year, df in sorted(films.items()):  # dicts keep insertion order, not key order, so sort by year for a stable sheet order
df.to_excel(writer, sheet_name=str(year), index=False)
###Output
_____no_output_____
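###Markdown
To verify the export (a sketch, assuming the file was written to `data/dutch_films.xlsx` as above), the sheets can be read back with `pd.read_excel`; passing `sheet_name=None` returns a dict of DataFrames keyed by sheet name:
###Code
# Read every sheet back into a dict of DataFrames
sheets = pd.read_excel(filepath, sheet_name=None)
for sheet_name, sheet_df in sheets.items():
    print(sheet_name, sheet_df.shape)
###Output
_____no_output_____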
###Markdown
Exercise 3: Reading in some cyclist data We will try to read in some bike data that lists how many people were on 7 different bike paths in Montreal, each day. Use the `head` shell command (or the `head()` function if you're on Windows) to peek at the first 3 lines of the data using the command line:
###Code
# This might fail if you're on Windows
!head -n 3 data/bikes.csv
head('data/bikes.csv', 3)
###Output
_____no_output_____
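###Markdown
If the shell command is not available, a plain-Python sketch does the same peek (the `latin1` encoding matches what we will use below):
###Code
# Cross-platform peek at the first three lines of the file
with open('data/bikes.csv', encoding='latin1') as f:
    for _ in range(3):
        print(f.readline().rstrip())
###Output
_____no_output_____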
###Markdown
Now it is up to you to correctly read in the file. Make sure that: - all fields are read (use the right separator with `sep`); - the correct `encoding` is used (`'latin1'`); - the Date column is parsed as a date (do not only parse the dates, also set `dayfirst=True`); - `Date` is set as the `index_col`.
###Code
df = pd.read_csv('data/bikes.csv', sep=';', encoding='latin1',
parse_dates=['Date'], dayfirst=True, index_col='Date') #ANSWER
df.head()
###Output
_____no_output_____ |
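###Markdown
A short check (sketch) that the options above did what we wanted: the index should be a `DatetimeIndex` covering the whole period and the bike-path columns should be numeric.
###Code
# Confirm the Date column was parsed and set as the index
print(type(df.index).__name__)
print(df.index.min(), '->', df.index.max())
print(df.dtypes)
###Output
_____no_output_____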